MIT 24.900 Introduction to Linguistics, Spring 2022
Lecture 23: Historical Linguistics

[SQUEAKING] [RUSTLING] [CLICKING] NORVIN RICHARDS: All right, so one of the things we've learned in this class is that language is complicated. Maybe you already knew that, but we've spent a semester looking at various ways in which language is more complicated than you might have realized before you came in here, and in ways that maybe you hadn't thought about much. These are a few of the things we've talked about, various special properties of language, fancy things that language does that we've spent the semester puzzling over. And if language has all of these complicated properties, the recurring question, which I've raised a couple of times, is how do we acquire this? We're going to talk seriously about language acquisition sometime next week. But a provisional answer to this that people have offered and taken seriously is that at least some of this stuff is innate, in the sense that-- I think I've said it this way before-- part of being a human being is having the kind of mind that constructs language in certain ways, but not others. So if we ask questions like, why do languages have binary branching? Why not ternary branching? The answer is, well, we're not equipped with ternary branching. The human brain does it this way and not some other way. And that leads to lots of other interesting questions, like why? What is it about human minds that makes them set up this way? Can we get that to follow from anything else, any deeper conditions? But that's a claim that's out there, which we have referred to a couple of times. We don't start with a blank slate. We start with a rich body of linguistic knowledge, and learning our actual language is a matter of filling in some details. But it's very clear that not everything is innate. For example, here's a cartoon. It has a cat in it. And the fact that that animal is called "cat" is an accident. 
It could have been something else. Saussure famously pointed this out and called it the arbitrariness of the sign: for the most part, what word a language has for a given thing has to do with the phonetic inventory of the language and other properties of the language, not with the thing itself. The English word "cat" refers to cats, but that's kind of a historical accident. It could have referred to dogs or mice or anything else. It just happens to refer to cats. And the fact is that kids-- if you're ever around small kids, you'll hear them trying out theories about what words mean and being wrong. I was once around a small child who used the word "moon" for every light. So the hall light was a moon, and the moon was a moon, and the sun was a moon. Everything was a moon. And sometimes these mistakes survive to adulthood. So take the word "livid." What do you think the word "livid" means, if you're a native speaker of English, or even if you're not? Very angry. That's what it often means. It's originally a color term. It meant white, because there was an idea that when people got really angry, their faces turned white. They were white with rage. You sometimes see the expression, "his face went livid." When I was a kid, I thought that meant his face was red, because I think of people who are angry as being red. But originally, that's what the word meant. It meant white. And then yes, I think for most of us, now it means angry. Or I used to believe in the verb "misle" until-- I'm embarrassed to say when I stopped believing in the word "misle." I was an adult. Did anybody else have the verb "misle"? Is there anybody who right now is thinking, what are you talking about? Of course there's a verb "misle." I believed in the verb "misle" because I had read it in expressions like this. I had been "misle-d," was what I believed. I don't believe I actually used the word in conversation and had somebody correct me. 
I think this was, like, I was in my 20s or something when I sat down and was like, wait a minute. So stuff like this happens. And sometimes these mistakes catch on, and people-- and sort of survive into adulthood. As I say, I was in my 20s, and I still believed that there was a verb "misle." If I were a king or something, then if my subjects tried to convince me that there wasn't a verb "misle," I would just have them beheaded. And before we knew it, there would be a verb "misle" in English. Or if I were somebody else who was hugely, hugely influential, like a linguistics professor or something. Wait a minute. There are also various forms of semantic drift that languages undergo, where words change what they refer to. "Livid" is maybe an example of that. It used to mean white, and now it means angry. Here's another example. The modern English word "bead" originally meant a prayer. So there's still-- there's a German verb "beten." That's the verb "to pray." It's related to the English word "bead"-- in Old English, the ancestor of the word "bead" meant a prayer. The switch from prayer to bead, you know, small, round object, came via rosaries, which are devices for counting prayers in certain religious traditions, including Catholicism and others. So if you're in a religious tradition where it's important to say a certain number of prayers, you have this little device that has beads on it. And you're counting the prayers using this device. So if you're asked, what are you doing? Well, you're counting prayers. But there's another sense in which you're counting, well, beads, like small, round objects on a wire. And this is thought to be how the word shifted its meaning. Or there are other things like this, words widen or narrow their meanings. In Old English, "steorfan" meant "to die." It's cognate with German "sterben." It's the ancestor of modern English "starve," which went through a stage of-- sort of originally meant "die." 
It went through a stage of meaning specifically "die from hunger." And today, I think of it as basically just meaning, be very hungry. So you can still say "He starved," and have that mean he died, I guess. But I'm a lot more likely to use it in an expression like "I'm starving. Let's have lunch." Maybe when I say that, I literally mean I'm dying of hunger, or figuratively I mean I'm dying of hunger. Or there's a French word "nègre," which means a Black man. It has a descendant in modern Haitian Creole, "nèg," which just means man. So the meaning of that has broadened, for reasons that are not all that hard to understand. Haitian Creole speakers, when they refer to men, are often referring to Black men, since Haitian Creole speakers tend to be Black. So yeah, in Haitian Creole, I am a "nèg," although in French, I am not a "nègre." Or, a similar specialization of a term: there's the Old English word "cniht," which meant a servant, and has a cognate in modern German, "Knecht," which means something like a boy or a servant. It's the ancestor of the modern English word "knight," who was originally specifically a servant of the king. The reason that the English word "knight" has all of those unpronounced letters in it is that those are there in an attempt to spell the word as it was once pronounced. We spell that with a K at the beginning because it used to be pronounced "cniht." There used to be a K there. And then English got out of the habit of pronouncing K before N, and so we have words like "knight" and "knee" where the K is still there as a memorial to the times when there was a stop there at the beginning. Same deal with the G-H, which spelled a [HISSES IN BACK OF THROAT] sound in the middle. Or this is one of my favorites. The Old English word for "housewife," "huswif," is the ancestor of the modern English word "hussy." That's undergone some semantic change as it's gone down through the years. 
There's a disturbing tendency for words that refer to women to undergo pejoration, that is, to become worse in their meaning. There's possibly a corresponding tendency. If you look at lots of languages, there are lots of languages in which the modern word for "man" is straightforwardly related to earlier words for "man," but in which the word for "woman" has been replaced by some other word for "woman." And I assume it's related to this tendency, that words for women tend to become insulting over time in lots of different cultures, and so you need to replace them with something. Maybe relatedly, it's not uncommon for modern words for women to originally refer to queens or noblewomen. That's true, for example, both in German and in Zulu. The modern Zulu word for "woman" used to mean "queen." It replaced an older word for a woman that became insulting. It's a disturbing, cross-linguistically fairly common tendency. Or-- this is my actual favorite in this list, because I wonder about the confusion that it must have caused-- there's a Proto-Austronesian word reconstructed as *wada, which was an existential construction. It meant "there is." It has a nice, straightforward descendant in Tagalog, "wala," which means "there is not." I don't know how that happened. There's an Arabic word, "wala," which means something like "no." If you want to say there are no books, you use "wala" to say that. And it's possible that Tagalog got influenced by Arabic. It was in contact with Arabic. It has Arabic borrowings. So maybe that's got something to do with it. But yes, there's presumably a fairly weird period in the middle there, where people were having existential crises. I just used a star. You guys are used, by now, to seeing stars on sentences that are ungrammatical, and that is how stars are used in syntax. But when people are doing historical linguistics, stars mean something else. They mean, this is a word that we don't actually see in any documents, or you can't hear it. 
It's our theory about an ancestral form. So Proto-Austronesian is the ancestor of a bunch of languages of the Pacific, mostly. So the languages of the Philippines and Indonesia and New Zealand and Hawaii. It's this gigantic geographic area that they went over. Plus some who got lost and ended up in Madagascar. There's an Austronesian language over there, too, Malagasy. The ancestor of all of those languages is called Proto-Austronesian. And that form that I just showed you for "there is" is a reconstruction of what they think that word would have been in Proto-Austronesian. We haven't met any Proto-Austronesians. They didn't leave us any documents. It's a hypothesized underlying form, given all the forms that we can see. That's what star means in historical linguistics. So various kinds of semantic drift. Recuttings-- so the example of "misle" is an example of that. I saw the word "misled," and I thought that it was the past tense or the participle of a verb, "to misle." There are lots of other examples of this with the indefinite article in English. So the indefinite article in English has two forms. It's "an" before a vowel, and it's "a" before a consonant. So we say "an apple," but we say "a banana." Yes, good. I always worry about when to stop spelling banana. It's not easy. A consequence of that is that if you hear a sequence like "a nickname"-- this is one of the examples for which this happened-- it's sort of hard to know whether you're hearing "a nickname" or "an ickname." So "nickname" was originally "ickname," "ekename." "Eke" is an old word for "also." It has a cognate in German, which is "auch." It shows up in The Canterbury Tales, in the prologue to The Canterbury Tales. If you ever read the prologue to The Canterbury Tales, Chaucer says something like "and Zephyrus eke, with his swete brethe"-- "And Zephyr also with his sweet breath." So that's what that "eke" is. 
It's an "also name," that is, a name that you have in addition to your other name. If ordinary sound changes had taken place, the word for "nickname" would be "ickname," but it's not, because of confusion about where the N went, basically. There are a bunch of words like this in English. This is another example. Middle English had a word "pease" which was a mass noun. By mass noun, I mean it referred to a certain kind of stuff that didn't have a plural. We have a bunch of words like this, words that don't ordinarily have plurals. Words like "water" or "ketchup" or "flour," where if you have a barrel of flour, you don't talk about it as a barrel of 16 million "flours." That's not what you're talking about. You're talking about a substance, right? A stuff. Same deal with ketchup or water. You can talk about different kinds of flours being different flours. I guess if you were really, really into ketchup, you could do the same thing with ketchup, right? I'm fond of the ketchups that you get in Southeast Asia, or whatever. "Pease" used to be a mass noun like that. It was just a name for this green stuff that consists of lots of little round things. But because this particular mass, well, can be subdivided into these little individual, round, green things, and because the word "pease" happened to end in a Z sound so it looked like a plural, it got reanalyzed as a plural. And so they made up a singular "pea," which didn't exist before. English has done a lot of things like this, sometimes called back-formation, where you take a word that looks polymorphemic, although it's not, and pretend that it is. Another one, which I don't think I have anywhere in here, is we had for a long time a word, "beggar." That noun was in the language for a long time. It ends in a sequence that's pronounced schwa R-- I've lost my ability to write the IPA symbol for R-- which sounds like our agentive suffix, the "-er" that's at the end of "teacher." And so people made up a verb, "beg." 
But it wasn't originally a polymorphemic word. That's one reason it's not spelled "begger," which is what you would have expected if you were adding the teacher suffix to "beg." It's spelled like this because, well, this is not the "-er" suffix. It's just pronounced like it. So the verb "to beg" got back-formed. Another similar kind of recutting. Here are three Old English words. The positive form, "near," which is "neah." Then that has a comparative form, "nearer," which was "nearra." And then there was a superlative form, "nearest," which was "neahsta." All three of those words have descendants in modern English. "neah" is the ancestor of "nigh," which we sort of still have in modern English, as a sort of archaic-sounding word for "near." We don't use it so much anymore. The comparative, the word that meant "nearer," turned into "near" in modern English. And "neahsta," the superlative, the nearest, is the ancestor of the modern English word "next." Regular sound change applied to "nearra" to give you "near" and obscured the fact that it was a comparative. So comparatives normally end in a schwa-R sequence. And because of regular sound change, basically you were adding the comparative suffix to something that ended in a vowel and then an H. The consequence was that you couldn't hear anymore that it was a comparative, so it got reanalyzed as something that was not a comparative. So it's now our word for "near." It's more or less replaced "nigh" as our word for "near." And then we built new comparatives and superlatives on that, even though it was originally a comparative itself. All kinds of entertaining things happen to languages, sometimes because somebody like me goes through life a fairly long time believing in a verb, "to misle." And if I had a more forceful personality and more political clout, then I might have been able to force other people to believe in the verb "to misle" as well. 
Some of this presumably is a matter not of misunderstandings, but just of people deliberately being weird. I mean, if you think of things that people do with language today, they have done this all their lives, pretending that words mean things other than what they mean just for fun, as a way of having a good time or confusing your parents or whatever else. The other big kind of change, though, the kind that I mainly want to talk about today, is what are called sound changes. Here are some numbers from various languages of Europe. Are they all Europe? Yeah, mostly. Yeah. Yeah. Europe, widely-- well, Sanskrit. That's not in Europe. So Sanskrit, Greek, Latin, Gothic, which is an ancestor of the Germanic-- no, it's not. It's an old Germanic language. It was spoken around the Crimea. Old Irish, Lithuanian, Old Church Slavonic, which is an ancestor of the Slavic languages. Basque, and Turkish. If you look at these languages, I've highlighted the second line because it's the line in which this is clearest. You can see a lot of these languages have words for the number two that look kind of similar to each other except for the last two, which really, really don't. So you've got words that have a "d" and then a "u," or OK, Gothic has a "t," the voiceless version of a "d." And then sometimes there's a "w." Old Irish lost out on the "w." And then you get to Basque and Turkish, where there's no "d" and no "t" and nothing rounded at all. Yeah. AUDIENCE: What does the colon mean? NORVIN RICHARDS: Oh, those indicate long vowels. Sorry. Good question. I should have said that. Yeah. Actually for Latin, I should have used a macron. I should go in there and fix that. 
So a hypothesis that people could entertain, then-- and this was famously entertained for the first time a few centuries ago-- was that it's not just the case that these various European languages have similar-sounding words for one, two, and three, but that these words are similar-sounding because they're related to each other. The word for this is "cognates." These are all descendants of a common ancestor. And Basque and Turkish are not part of this group. They don't have words that are cognate with those. I don't know why I specifically boxed the words for three. The claim is that the words for one and two and three in most of those languages in that table are cognates with each other, and that Basque and Turkish are the ones that are left out. So I just said, hey, look, these words look kind of similar to each other. And that is indeed often how this kind of thing starts. You look at two languages and say, hmm, these languages look kind of similar. I wonder if they're related. But a big discovery from the early days of systematic linguistics was the observation that it's possible to be more systematic than this. So it isn't just a matter of saying, oh, look. These languages look similar to each other. We can actually state laws, generalizations, about which sounds correspond to which other sounds. So we can do better than just saying, these languages look similar. We get to say things like, for example, where Latin and Greek have a "d," English has a "t," for a certain set of words. So the Latin and Greek words for "two" and "eat" and "ten" have "d"s in them, which correspond in English to "t"s. Or the words in Latin and Greek that are related to "kin"-- and that second word means "field"-- have a corresponding "k" sound in English. Words that have a "b" in them are weirdly rare in Proto-Indo-European, the ancestor of all of these languages. There are a couple of examples here. 
"Cannabis," the second line there, is thought to probably be a borrowing into Proto-Indo-European from some other language, for several reasons. One is that it has a "b" in it, and that's rare in Proto-Indo-European. Another is that it's weirdly long. Proto-Indo-European words are mostly not three syllables long. So a hypothesis is that the ancestors of the Romans and the Greeks and the Sanskrit speakers and the Gothic speakers, the Proto-Indo-Europeans, as they were roaming around Europe conquering various people, encountered some people who had discovered cannabis and were like, cool. We will take that. And while we're at it, we will borrow your word for it. Apparently, something like that happened. So there's this generalization-- it's called Grimm's Law because it was discovered by a Dane named Rasmus Rask and then discovered again by Jacob Grimm, who was one of the Grimm brothers of Grimm's Fairy Tales. It's an observation about systematic sound correspondences between the oldest versions of these related languages. You can see, with the benefit of all of the phonology that we did at the beginning of this class, that what's going on is that where Latin and Greek have voiced stops, English and other Germanic languages tend to have voiceless stops. So there was a general sound change. And there's more to Grimm's Law than that, but that's one of the observations that Grimm made. Once we've looked at a bunch of languages and figured out all the sound laws that we need, we sort of get to posit these underlying forms. I was talking about this for Proto-Austronesian. These, you can think of them as being like the underlying forms we talked about when we did phonology, right? We posit these sound changes, and we have a sort of point of origin for all of these words. It's the form that undergoes different sound changes in different languages, with the consequence that you have different languages. 
That underlying form, the language that has those underlying forms, is sometimes called the proto-language. So I've been talking about Proto-Indo-European and Proto-Austronesian. Here's one of the examples of Grimm's Law at work. We look at Sanskrit and English and Latin. I don't know why I said them in that order. And we convince ourselves that where Sanskrit and Latin have a "d," English has a "t." That's part of Grimm's Law, so it has the consequence that voiced stops in Sanskrit and Latin correlate with voiceless stops in English. If we look at the vowels, we have an "a" in Sanskrit and an "e" in Latin. And if we look more widely at corresponding words, words that look like they might be related in Sanskrit and Latin-- here are a bunch of examples. What kind of a sound do you think we should posit as the proto sound, a proto vowel, for "eat" in both Sanskrit and Latin? Have a look at those data and tell me what you think. AUDIENCE: [SNEEZE] NORVIN RICHARDS: Bless you. Is there a generalization that we can draw of the form, wherever Sanskrit has an "a," Latin has an "e"? That's true for "eat" and "tooth." And for the first one in "field." But we're seeing that for "sheep" and "two" and for the second vowel in "field," there are other Latin vowels that correspond with Sanskrit vowels. So what type of a vowel should we posit as the original vowel in "eat"? Can we predict the Sanskrit vowels-- or sorry, the Latin vowels from the Sanskrit vowels? No. Right? There aren't generalizations like, wherever Sanskrit has an "a," Latin has an "e." What should we do? Yeah. AUDIENCE: So we [INAUDIBLE]. NORVIN RICHARDS: Yeah. So if we do it the other way around, if we say the original vowels were more like the Latin vowels, then Sanskrit underwent a bunch of sound changes that squashed many of the vowels together. So the original vowels "e" and "o" and also "a," like the first vowel in "field," all three of those vowels became "a," "ah," in Sanskrit. 
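The reasoning just used-- you can predict the Sanskrit vowels from the Latin-like vowels, but not the reverse-- can be sketched as a tiny program. This is a toy illustration, not a full reconstruction; the ASCII vowel symbols and the function name are stand-ins introduced here, not part of the lecture data:

```python
# Hypothesized merger: several earlier vowels all collapse to "a" in Sanskrit.
MERGER = {"e": "a", "o": "a", "a": "a"}

def to_sanskrit(vowel):
    """Apply the hypothesized Sanskrit vowel merger to a single vowel."""
    return MERGER.get(vowel, vowel)

# Forward direction works: every older vowel yields a definite Sanskrit vowel.
print(to_sanskrit("e"))  # a
print(to_sanskrit("o"))  # a

# Reverse direction fails: Sanskrit "a" has three possible sources,
# so there is no rule of the form "Sanskrit a corresponds to Latin X."
sources_of_a = sorted(v for v, s in MERGER.items() if s == "a")
print(sources_of_a)  # ['a', 'e', 'o']
```

The point of the sketch is that the mapping is many-to-one: a function in one direction, but not invertible, which is exactly why the Latin-like vowels make the better candidates for the proto vowels.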
This is the same kind of reasoning we were using when we were originally doing phonology. We can't predict the Latin vowels from the Sanskrit vowels, but we can predict the Sanskrit vowels from the Latin vowels. Namely, the older Latin-like vowels all got squashed together in Sanskrit. So Latin has kept an older vowel system that Sanskrit has simplified to a certain extent. So we'll posit a Proto-Indo-European stem, "ed-," for "eat." Sanskrit undergoes a change, where the "eh" becomes "ah." English undergoes Grimm's Law and also some changes with the vowels, which we'll get a chance to talk about in a second. And Latin preserves at least that much of the Proto-Indo-European form. It's worth being careful. Just like at the beginning of this class when we were doing phonology, we started off with sound changes, looking at data sets where it was often the case that you could posit an underlying form that was the same as some form that you could see on the surface. But I think we saw, back when we were doing phonology, that that wasn't necessarily true. There were cases where it was useful to posit an underlying form that always underwent some kind of change. It never just surfaced in its actual form. And this also happens when we're positing proto forms. It hasn't happened in this particular case-- I carefully picked a case where life was simple, and Latin has just kept the Proto-Indo-European form. But we can't rely on that happening every time. Sometimes there are cases where we'll want to posit a proto form that just doesn't survive in unchanged form anywhere. Sound changes. 
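Before going on: the piece of Grimm's Law described a moment ago-- voiced stops in Latin, Greek, and Sanskrit lining up with voiceless stops in Germanic-- can be stated mechanically. A toy sketch, using plain ASCII letters as stand-ins for the sounds; the real law covers more consonant series and has well-known conditioned exceptions:

```python
# One slice of Grimm's Law: Proto-Indo-European voiced stops *d, *g, *b
# correspond to Germanic voiceless t, k, p. (Toy version only.)
GRIMM_VOICED_TO_VOICELESS = {"d": "t", "g": "k", "b": "p"}

def germanic_reflex(segment):
    """Return the expected Germanic correspondent of one segment."""
    return GRIMM_VOICED_TO_VOICELESS.get(segment, segment)

def apply_grimm(word):
    """Apply the correspondence segment by segment to an ASCII string."""
    return "".join(germanic_reflex(c) for c in word)

# Rough illustration: the "d" of Latin "duo" lines up with the "t" of "two."
print(apply_grimm("d"))  # t
```

A sound law, on the Neogrammarian view, is exactly this kind of exceptionless mapping: given the old segment, the new one is fully determined.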
The study of sound changes-- first of all, the discovery that sound changes were regular, that you could make statements of the form "wherever this language has sound X, this other language has sound Y"-- that was a big early discovery in linguistics, and the starting point for a lot of the work that started getting done in phonology, as people started getting interested in the questions of what kinds of sound changes there were and why some kinds of sound changes were more common than others and what motivated them and why they happened and so on. Similarly, the discussion that we had about rule ordering when we were doing phonology, where we decided that it was useful sometimes to posit-- we talked about this for Lardil, for example-- that it's useful sometimes to posit an original form, an underlying form, and have it undergo sound changes in a particular order. I think we talked about it as being like an assembly line, right? It first goes to the place where it undergoes this sound change, then it undergoes the next sound change, with the consequence sometimes that the second sound change creates something that would undergo the first sound change, but it's too late. We looked at examples like that. Way back when we were talking about Lardil, we had a rule that changed "u" to "a" at the ends of words, which changed underlying "muwu," the underlying word for "water," to "muwa." And we had another sound change that got rid of "k" at the ends of words. So if you had a "k" at the end of the word, it would drop, which changed underlying "naluk," which is the word for "story," to "nalu." But it was important that these rules applied in this order, because "nalu" doesn't then change to "nala." So that was one of the kinds of cases we were talking about when we were talking about ordering rules in a particular order. 
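The assembly-line picture of those two Lardil rules can be sketched directly. This is a toy model of just the two rules mentioned, with the forms rendered in plain ASCII, not a full Lardil phonology:

```python
def final_u_to_a(form):
    # Rule 1: word-final "u" becomes "a"
    return form[:-1] + "a" if form.endswith("u") else form

def drop_final_k(form):
    # Rule 2: word-final "k" is deleted
    return form[:-1] if form.endswith("k") else form

def derive(underlying):
    # The order matters: Rule 1 runs before Rule 2, so the "u"
    # exposed by k-deletion is never touched by Rule 1.
    return drop_final_k(final_u_to_a(underlying))

print(derive("muwu"))   # muwa
print(derive("naluk"))  # nalu, not nala -- Rule 1 had already run

# For contrast, the wrong order produces the unattested form:
print(final_u_to_a(drop_final_k("naluk")))  # nala
```

The last line is the "too late" situation from the lecture run in reverse: if k-deletion fed the u-to-a rule, "naluk" would wrongly come out as "nala."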
All of that talk started in work on historical linguistics, where it's easy to imagine, in fact, that a bunch of sound changes have taken place, and they've taken place in a particular order. And people got interested in possible relations between sound changes. Some kinds of sound changes are cross-linguistically fairly common. So here's one. "w" changed to "gwa," for example, in Chamorro. Chamorro is a language of Saipan in the Pacific. It's closely related to Tagalog, which is a language of the Philippines. They're both Austronesian languages. So for example, Tagalog has a word "asawa," which means "spouse." Tagalog kinship terminology mostly doesn't indicate the gender of the person you're talking about. So they don't have words for husband or wife. They just have this word "asawa," which means spouse. You can say male spouse or female spouse, but you don't, usually. So Tagalog has a word, "asawa." In Chamorro, this is "asagwa." So the "w" has become a "gwa." Tagalog has a number, "dalawa," which is the number two. That corresponds to "hugwa" in Chamorro. And this "wala," which we were talking about before, means "there isn't" just in Tagalog; all over Austronesian it means "there is," and that's what it means in Chamorro. But it doesn't start with a "w" in Chamorro; it starts with "gwa." If you think about what a "w" is, the fact that a "w" becomes a "gwa" in Chamorro is maybe not such a weird fact. So think about what a "w" is. A "w" is a glide that corresponds to the vowel "u." If you think back, way, way back, months ago, when you were young and carefree and knew a lot about phonology and had not yet encountered syntax and semantics, maybe you remember that the vowel "u" has rounding, right? Your lips are rounding. But it's also high and back. What that means is that your tongue is bunched up toward the top of the back of your mouth. So back then, we were doing group exercises like saying "oo-wee" over and over again. 
And when you did that, you discovered that your lips were rounding and unrounding, but also that your tongue was shifting from the back of your mouth to the front of your mouth. So "w" is a glide that corresponds to the vowel "u." It's a version of the vowel "u" said very fast, basically, in which your tongue is bunching up toward the back of your mouth, reaching toward the top of the back of your mouth, and your lips are rounding. So if you think about what you do when you do a "g," a "guh," a voiced velar stop, well, what you're doing is your tongue is touching the back of your mouth, right? It's making a velar closure back there. So the fact that a "wa" has become a "gwa" in this language is sort of understandable. A "gwa" is just a "w" that got out of hand, right? It's your tongue not just bunching up toward the back of your mouth, but actually colliding with it, making a closure back there. And I'm going through all this because this sound change didn't just happen in Chamorro. It also happened in Welsh. So Welsh has a bunch of words. I've just given you one, which is "man." The Welsh word for "man" is "gŵr." It's related to the Latin word "vir," [pronounced "weer"], which we have in "virile" and English words like that. The "vir" is actually also, I think, related to the "were" in "werewolf," which is a man, a man-wolf. Welsh has another word which they borrowed from Latin, "gwin," which is "wine." So they borrowed that from Latin, and they changed the "w" sound at the beginning to a "gwa." So Welsh underwent the same sound change. There are also some words that got borrowed from Proto-Germanic into late Latin, that is, Latin right before it was about to break up and turn into all the Romance languages, in which the Proto-Germanic "w" got realized in late Latin as a "gwa." So the Proto-Germanic word for "war," which was "werra"-- it's an ancestor of the modern English "war"-- got borrowed into late Latin as "gwerra." 
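This fortition, a "w" hardening into "gw," is simple enough to state as a toy rewrite over orthographic forms. The rewrite is illustrative only: it applies to every "w" in the string and ignores whatever conditioning environments the real changes had:

```python
def fortify_w(word):
    """Toy version of the w -> gw change seen in Chamorro and Welsh:
    every "w" gains a velar closure in front of it."""
    return word.replace("w", "gw")

print(fortify_w("asawa"))  # asagwa (cf. Tagalog "asawa," Chamorro "asagwa")
print(fortify_w("werra"))  # gwerra (cf. Proto-Germanic "werra" in late Latin)
```

The same one-line rule covers both the Chamorro spouse word and the late Latin borrowing of the Germanic war word, which is the sense in which this is one cross-linguistically recurring change rather than two coincidences.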
And you now see that in English words like guerrilla, in the sense of a person who fights a particular type of warfare. If you study any Romance languages, the old Latin word for "war" was "bellum." But that word, I believe, vanished from all the Romance languages. What they all have are descendants of this word. So if you study Italian or French or Spanish, you need to learn words for war like "guerre" and "guerra." Those are all descendants of this Germanic word. So the late Latins, I guess, were really impressed with the Germanic attitude toward war. They borrowed this word, and they completely replaced their word for "war" with it, as far as I can tell. AUDIENCE: Do we know why "bellum" disappeared? NORVIN RICHARDS: No. At least, I don't. Yeah. This just became the cool word for war. So I mean, we have it in expressions that we've borrowed from Latin, like "antebellum." But as far as their normal word for war, it got replaced with this. Or we also have words in English where we have both of these words. So there's the Proto-Germanic word for "guard," which was "ward." We have that as a word, "ward," to ward off a threat or something like that. It got borrowed into late Latin as "gward." That's the modern word for "watch" in lots of languages. So in Italian, if you want to watch TV, the verb you use is "guardare." You guard the TV. You watch it carefully. And English has both of these words now, so we have "ward" and we also have "guard." They both have the same root. They're both from Proto-Germanic "ward," which came to us straight in the form of "ward," and then took a little detour into Latin to become "guard." And then we borrowed it again in a different form. Big discovery, then. Grimm's Law, this change from "w" to "gwa," so lots of big discoveries. But one of the big discoveries was that sound change was regular, that you could do better than just saying these words look kind of similar to each other. I bet these languages are related. 
You could actually say, oh, yeah. These words are-- these languages are related because there is this general law about how sounds in language X are related to sounds in language Y. This was called the Neogrammarian Hypothesis, and it was one of the big, early discoveries. It shifts emphasis away from looking at lists of words that look kind of similar. What we're looking for are lists of words that can be related by regular sound change laws, which is not the same thing. There could be lots of sound changes. The sound changes could be really radical. The result could be that the words don't look like each other at all. The point is to discover that there are regular correspondences between the words in the lists. Does that make sense? That's how you discover that two languages are related. So just to give you a dramatic example of this, here are three words for the number two in three languages whose names I have cunningly hidden from you. We'll call them language A, language B, and language C. And I will tell you that two of these words are related to each other and the third is not. You're probably suspecting a trick. But go ahead, somebody fall for it. Which of these two languages do you think are related to each other? Yes. AUDIENCE: B and C. NORVIN RICHARDS: B and C. Yes, see, you're right. But that's because you know how this kind of thing works. You're supposed to say A and B. Would somebody go ahead and say A and B? I would feel better. AUDIENCE: Is it A and B? NORVIN RICHARDS: Yes, yes, it's A and B. Excellent suggestion. Thank you. But of course, he's right. It is B and C. "Er" is the Mandarin word for two. "Erku" is the Armenian word for two, and Greek has "duo" as its word for two. And the way you convince yourself of this is by discovering that wherever Greek has a "dwa," Armenian has an "erk," which is a pretty radical sound change. 
It took people a while to discover that Armenian was an Indo-European language because Armenian has had really radical things happen to it, with the consequence that its words don't look all that Indo-European anymore. But it underwent this general sound change, and the point is you discover that the sound change is general, and that's how you convince yourself that it belongs to this club. Yeah. AUDIENCE: Why? NORVIN RICHARDS: Why did it do that? So there isn't any really good answer to that, but you could think. Imagine first that they decided that it wasn't good to have words start with "d" and "w." Like, that's too many consonants at the beginning. We don't want that. We're going to add a vowel. So they'll add a vowel at the beginning. This alternation between "d" and "r" is not such a strange thing to do. The relationship between "d" and "r" is not unlike the relationship between "w" and "u." "d" is an actual stop, and "r," especially if it's a flap, is you're just sort of whacking the part of your mouth where you make the "d" stop. And then the fact that "w" became a "k," well, we've seen "w" becoming a "gwa" in Welsh and in Chamorro. And a "k" is a stop in a place where a "w" would just be an approximant. So it's maybe not a completely crazy set of sound changes. I should say, it's not always possible to understand why a language underwent the sound changes that it did. There's a beautiful example. I have sometimes used this as a problem set in this class. If you were going to have another problem set, it might have been about this. There's an Algonquian language, Arapaho. If you look at the Algonquian languages, Proto-Algonquian can be reconstructed to a fair degree of confidence. There are a bunch of Algonquian languages. They're very well studied. There are sound changes. It's not all that hard to relate them to each other. 
Arapaho is an Algonquian language, but it has undergone sound changes like-- I'm trying to remember them all. "p" has become "chuh." "m" has become-- what has "m" become? "m" has become "b." There's a sound-- so with the consequence that-- oh, no. "m" has become "chuh." They've both become "chuh." They've blended together. So the consequence is-- the word for "dog" is the one that I can remember off the top of my head. There's a word for "dog" that in Wampanoag, which is spoken around here, is "anum." This is a schwa. And in Passamaquoddy is "alum," and in Cree is "ateem." And so there's an "m" here that corresponds, and there are vowels here that have undergone various kinds of sound changes. There's a sound here, which is kind of mysterious. It's unclear how to represent it in Proto-Algonquian. But so here's one Wampanoag. Here's Passamaquoddy. Here's Cree. And the Arapaho word for it is "fench." Yeah. So you've got "anum," "alum," "ateem" and then "fench" is the Arapaho word. So the Arapahos, I don't know what happened to them. But they underwent sound changes that are very hard to justify to yourself. It's unclear what happened. I was once talking with a linguist about this, and his theory was that what happened with the Arapaho was that they became a horse culture. When horses were brought to the US, the Arapaho got really into horses. And his theory was there must have been a generation of Arapaho who just sort of stopped listening to their parents, and the language underwent all kinds of radical sound changes because they were too busy riding horses. This is an example where-- thank you. One of you deliberately fell for the trap that I was setting. An example where you convince yourself that these two, the Mandarin and the Armenian words, are not related, despite having some stuff that looks like it's in common. It's really the Armenian and the Greek words that are related. 
Another example where you have two words that look similar, but the two languages are not related. There's a beautiful memoir by a guy named Dixon, Robert Dixon. He's a linguist who did-- he wrote a book called Memoirs of a Field Worker which was about his work in Aboriginal languages in Australia in the '60s. It's a nice book. And one of the things he talks about is going-- he was traveling around different Aboriginal areas, talking to different speakers of different languages. And there was a particular language that he managed to find possibly one of the last speakers of, a language called Mbabaram, which no longer has any speakers at all. This was in the '60s. And he was very interested in this language. It was supposed to be quite different from some of the languages that were near it. And he finally found a speaker who was finally willing to talk to him and sat down with this guy. And the guy said, OK, let me start teaching you some Mbabaram words. Do you know our word for "dog"? And he was like, OK, no. What's your word for "dog"? And the guy smiled, and he said, "dog." And Dixon was like, OK, this has to be a joke. But it turned out it was not a joke. Most of the languages of the world probably have a word for "dog." There are dogs in lots of places. And "d" and "o" and "g," they're not such uncommon sounds. And 6,000 whatever languages in the world, and yeah, this kind of thing happens every so often. So these are completely unrelated languages. Mbabaram "dog" can be related to-- what is that, a proto [INAUDIBLE] form, "gudaga," which changed in some related languages to other sounds. So you have "gudaga" in Yidiny, and you have "guda" in Dyirbal. And in Mbabaram, they got rid of the first syllable, and they got rid of the last vowel. And before you knew it, you had "dog." English also has a word "dog." It's not from Australia. So we have-- it originally referred to another kind, a particular kind of dog, a mastiff. 
And it's taken over as our word for "dog." Our older word for "dog" was "hound," which we still have. And this is the kind of thing that happens. So you know, Persian and English have the same word for "bad." Malay and Greek have very similar-looking words for "eye." And it's just a coincidence. How do you know whether you're looking at a coincidence? Well, you ask yourself whether you can posit regular sound changes. So English and Kaqchikel Mayan have words for "mess" that sound more or less the same. If I just tell you in isolation that two languages have the same word for "mess," and you want to know whether those languages are related, well, the way you find out is by trying to find out whether you can come up with sound changes that apply to more than one word. Is it generally the case that where this language has an "m" this other language has an "m"? Where this language has an "s," this other language has an "s"? In the case of English and Kaqchikel, the answer is no. So here are a bunch of other words with "m"s in them, and they don't have "m"s in Kaqchikel. You can't rely on it being the case that every English word with an "m" in it would have an "m" in Kaqchikel, even if the languages were related. But you are entitled to hope that it'll happen more than once. So this looks like a coincidence. Any questions about any of this? Just as an in-class exercise, I wanted to try doing some historical linguistics, reconstructing some sound changes. Here are some Polynesian languages. So they are languages from the Polynesian branch of the very large Austronesian language family. Hawaiian, which is spoken in Hawaii, and Maori, which is from New Zealand, and Tongan, which is from Tonga, and Samoan, which is from Samoa. These are all languages spoken in islands of the Pacific, particularly the eastern Pacific, between Hawaii and New Zealand. And these are all words that look pretty related to each other. 
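The test being described here-- does a correspondence recur across the word list, or does it happen only once?-- can be sketched in a few lines of Python. The word pairs below are invented toy data, not real English or Kaqchikel forms, and the position-by-position alignment is a crude stand-in for real cognate alignment.

```python
# Sketch of the Neogrammarian test: a correspondence only counts as evidence
# of relatedness if it recurs across the word list. All data below are
# invented toy pairs, not real language data.
from collections import Counter

def correspondences(pairs, segment):
    """For each word pair whose first word contains `segment`, record which
    segment sits at the same position in the second word."""
    counts = Counter()
    for a, b in pairs:
        for i, s in enumerate(a):
            if s == segment and i < len(b):
                counts[b[i]] += 1
    return counts

# Toy "related" languages: every m in language A lines up with m in B.
related = [("mata", "matu"), ("lima", "limu"), ("mano", "manu")]
# Toy "coincidence" pairs: m corresponds to something different each time.
unrelated = [("mata", "sopi"), ("lima", "kelo"), ("mano", "tuli")]

print(correspondences(related, "m"))    # one recurring correspondence
print(correspondences(unrelated, "m"))  # no correspondence recurs
```

A single lucky match, like "mess" or "dog," shows up here as a correspondence with a count of one; what you are entitled to hope for in genuinely related languages is counts bigger than one.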
And so let's see if we can figure out any sound correspondences. Maybe we can start by looking at the things that I have made green here. What do you think? What sort of proto sound should we posit for the sound that is in green in these kinds of examples? There are at least two theories that we could take seriously-- hypotheses we could take seriously, right? There's a third kind of hypothesis. Maybe it's an "m," right? But let's put that possibility aside for a second. We're not going to do that unless we're driven to it. So the two theories that we could take seriously are maybe it's a "k" or maybe it's a "t." Anybody have a theory about what we ought to do? Joseph? AUDIENCE: [INAUDIBLE] --that maybe the majority rules and it's the "t." NORVIN RICHARDS: Yes, so we could do this democratically and say, yeah, there are more "t"s on this slide than there are "k"s. Majority rules. Yeah. AUDIENCE: I would also go with "t" because Hawaii is a more geographically isolated area. NORVIN RICHARDS: Yeah. All of these-- I mean, these are all islands. Amazing story. These people started-- the Proto-Austronesians are supposed to have started in Taiwan and headed out in outrigger canoes and settled all of these islands in the Pacific, tossing consonants overboard when they got too heavy, as far as I can tell. But you're right. Hawaiian was near the end of their journey. So they're supposed to have started in Taiwan. New Zealand and Tonga and Samoa were all closer to where they started than Hawaii. So maybe we're justified to think that Hawaiian might have undergone some special sound changes. I can also tell you something special about Hawaiian. I don't know whether this will come up later. Hawaiian does not have a "t." It's one of the few languages in the world that does not, which is why, if you've ever heard, there's a song that is sometimes played around Christmas which involves the Hawaiian version of "Merry Christmas"-- "Mele Kalikimaka." 
"Mele Kalikimaka" is the Hawaiian version of "Merry Christmas" because Hawaiian has no t," and it also has no "s," So they need to replace the "s" in "Christmas" with something, and what they replace it with is "k" because that's the best they can do. That's the closest they've got to that. So yes, Hawaiian has no "t." So the idea that maybe there was a t," and in Hawaiian, it became a "k," has some initial plausibility. So yeah, we'll posit a sound change in Hawaiian from "t" to "k." And we'll posit a some Proto-Polynesian forms that will have "t" in them. That is, they'll look like the Maori, Tongan, and Samoan forms, and then we'll posit the sound change in Hawaiian that changes that "t" into a "k." So far, so good. Here are some more forms. What should we do with these? So it's either a "k" or it's a glottal stop. Maybe for now we should just sit on this problem because here, just this principle of doing this kind of thing democratically, maybe it doesn't help us. We've got two votes for "k" and two votes for glottal stop. This might be a place where it's easier to imagine a "k" becoming a glottal stop than it is to imagine a glottal stop becoming a "k." "k"s becoming glottal stops, that's a thing that happens pretty commonly in various languages. And in fact, that's what people standardly do in this case. So in Hawaiian, we posit a "k" changing to a global stop in these kinds of examples. Maori and Tongan are the ones that are being conservative here. We'll see an additional reason to say that in just a second, I think. Here's the additional reason to say that. Here are a bunch more words where Tongan has glottal stops in unpredictable places. So at the beginning of the word for "day" and twice in the word for "love" and in the middle of the words for "leg" and "voice." And it's not because-- so if you want to learn to speak Tongan, one of the things you have to learn is that some words begin with glottal stops, and others begin with vowels. 
So "dawn" starts with a vowel, and "day" starts with a glottal stop. And distinguishing these from each other is tricky. The fact that Tongan is putting these glottal stops in places that can't be predicted-- [SNEEZE] Bless you. Leads us to think that what's going on is that those were glottal stops in Proto-Polynesian, and they've been lost in all the other languages. So here's place where predictability trumps democracy. We can't predict where Tongan will have these glottal stops. We can predict where Hawaiian, Maori, and Samoan will have glottal stops. So Hawaiian and Samoan will have glottal stops wherever there's a "k" in Maori and Tongan. And Maori just won't have glottal stops. So it's always predictable where the glottal stops are in the other languages. It's not predictable where they are in Tongan, so we'll posit them for the proto language. So Aloha in Hawaiian, which literally means "love," in Proto-Polynesian is "'alo'ofa" with two glottal stops in it. And you can see, incidentally, another sound change. The "f" became "h" in Hawaiian and also in Maori. So Hawaiian has undergone sound changes, "k" becoming glottal stops, "t" becoming "k," and glottal stop vanishing. This is sometimes-- so I'll just do that again dramatically with a sagittal section here. So Proto-Polynesian "ata," "dawn," has become "aka." In Hawaiian, "t" has become "k," with the consequence that Hawaiian has no "t." "k" has become glottal stop, so Proto-Polynesian "kula," which means "red," has become "'ula" in Hawaiian. And Proto-Polynesian glottal stop has vanished. This is sometimes called a chain shift. And there are different ways of thinking about chain shifts, but they're not-- cross-linguistically not uncommon. I'll show you another kind of example of one in just a second, where you have sound A becoming sound B, sound B becoming sound C, and sound C becoming sound D. And it's often difficult to determine the order in which things like this happen. 
One way to think about this is the Hawaiians got rid of Proto-Polynesian glottal stop, and then they were like, now that we have no glottal stop, we're free to pronounce our "k" as glottally as we like. We don't have to worry about the "k" getting confused with the glottal stop. And so the "k" began drifting backwards and becoming a glottal stop. And then they were like, wait, wait. We have no "k" anymore. Our "k" is gone. It's just become a glottal stop. We miss "k." We will change our "t" into a "k." And then who knows what would happen, how they would manage to come up with a "t"? Maybe if we keep an eye on Hawaiian, we'll get to see what they do. It's at least one way to talk about these kinds of chain shifts. Another famous chain shift, which is called the Great English Vowel Shift-- sorry, the English Great Vowel Shift. It is the reason that if you study almost any other language of Europe, you'll learn vowels that have a certain set of sounds associated with them. They are the sounds, more or less, that those symbols have in the IPA. So in lots of languages, this symbol is used for a vowel that's pronounced "ee." But in English, it's pronounced "I." Or in lots of languages, this symbol is used for a vowel that's pronounced "a," but in English, it's pronounced "ee." This is all because of the Great English Vowel Shift. What was the Great English Vowel Shift? It went like this. Around the 14th century, English had long vowels in these positions. English also had short vowels. We'll put those aside. So there were long vowels, which were just vowels that were held for a long time. And they were in those positions in the mouth. So you had an "ee," an "ay," an "aa," and then an "ah," "aw," "oh," and "oo." We had all those long vowels. And then the "ee" and the "oo" became diphthongs. If you think about holding an "ee" for a long time, your tongue is in a position of maximum tension. It's front and high, and you're supposed to hold it for a long time. 
And you can understand how something like that might have changed into a diphthong. So instead of holding an "ee" for a long time, you were allowing yourself to begin the vowel with your tongue in some more central position and then just sort of gesture in the direction of an "ee." So you went from "ee" to "uee" to "ayee" to "ai." And so "ee" and "oo" became "ai" and "ow." And then suddenly, English was like, wait. Didn't we have an "ee" and an "oo"? Our "ee" and "oo" are gone. All we have is "ai" and "ow." And so "ay" and "oh" changed. So "ay" became "ee" and "oh" became "oo," filling the gap that the first part of the vowel shift left. And then long "aa" and "ah" both became "ay." That's why long "a" in a word like "fate" is pronounced "ay." It's because, well, the "ay" was off becoming an "ee," and so now we had a space where there was an "ay." And so our "ah" became an "ay." And our "aw" became an "oh." So all of our vowels have undergone this dance. And if you look at modern English dialects, you can see a lot of these kinds of sound changes continuing today. So I come from a place where those diphthongs are getting flattened, where "ai" has a tendency to become "aa." And so the vowels are continuing to chase each other around in the vowel space. I think I talked about this when we were talking about phonetic inventories, that there's work on the kinds of vowel inventories that we find. So there are plenty of languages out there that have three vowels, and the three vowels that you have if you have three vowels are "ee," "oo," and "ah." That is, there's a front high vowel, there's a back high vowel, and there's a low vowel. That is, you have three vowels that are kind of as far apart from each other as possible. There aren't languages out there that have three vowels, and the three vowels are "ee," "ey," and "eh." That's not a vowel system. 
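The whole Great Vowel Shift just described can be summarized as a mapping from Middle English long vowels to their later values. This is a rough ASCII-IPA rendering of the textbook summary-- it ignores short vowels and the later developments the lecture mentions-- so treat it as a mnemonic, not a full analysis.

```python
# The Great Vowel Shift as a one-step chain, in rough ASCII-IPA. Each Middle
# English long vowel maps to its later outcome; short vowels and subsequent
# changes are deliberately ignored in this sketch.
GVS = {
    "i:": "ai",  # "time": ti:m -> taim
    "u:": "au",  # "house": hu:s -> haus
    "e:": "i:",  # "see": se: -> si:
    "o:": "u:",  # "moon": mo:n -> mu:n
    "a:": "e:",  # "name"-type words raise one step
    "O:": "o:",  # "boat"-type words raise one step too
}

def shift_word(segments):
    """Apply the shift to a word given as a list of segments."""
    return [GVS.get(s, s) for s in segments]

print(shift_word(["t", "i:", "m"]))  # the "time" vowel becomes a diphthong
print(shift_word(["s", "e:"]))       # the "see" vowel moves up into its slot
```

Reading the dictionary from top to bottom is reading the chain: the high vowels diphthongize, and every other long vowel moves up one step into the space just vacated.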
One of my colleagues, Edward Flemming, has done a lot of work on different kinds of vowel systems and reducing them to this idea that you're trying to keep your vowels maximally dispersed from each other in the vocal tract. And that kind of thinking can be one of the ways to think about why English underwent this vowel shift. It's like, first the high vowels changed into diphthongs, and then you have this sort of unclaimed real estate at the top of the vocal tract, right? So once you no longer have an "ee" or an "oo," once they've changed to "ai" and "ow," there's this desire for dispersion that's pushing you to get some vowels into that space. That first change kind of triggers a second change. If you study languages, one of the things that you will sometimes be told is here's a regular rule for how you form, say, different forms of a noun, singulars and plurals or different case forms or whatever. And then here are some irregular forms. And sometimes the irregular forms are the consequence of sound changes. So for example, when I was briefly attempting to learn Latin in high school, we were taught there are certain Latin nouns, so-called third declension nouns, for which you need to memorize both the nominative singular form and another form. We were taught to memorize the genitive singular form because you can't predict from the nominative singular what the other forms will be. So you get words like "rex," which means "king," and "nox," which means "night," and "vox," which means "voice." But the stems to which you add any other suffix are unpredictable from the nominatives. So king, "rex," the genitive and the accusative and all the other forms are formed off of a stem "reg-." Or for night, it's "noct-," and for "voice," it's "voc-." Anybody else here study Latin? This is all stuff that you have to learn if you're trying to learn Latin. And what I was taught, at least, was to think of "king," the word for king, as "rex, regis." 
We were supposed to just sort of cite both of those. So my old high school Latin teacher, Mrs. McNair, that's what she taught us. Mrs. McNair could have taught us-- she probably knew this, but she didn't reveal it to us-- that although you can't predict the genitive from the nominative, you can predict the nominative from the genitive. So what's really happening in Latin is that you've got these stems, which are "reg-," "noct-," "voc-," and that you're forming the nominative by adding an "s" and then doing some sound changes. So you take "reg-" and you add an "s." That by itself would give you "regs." And then Latin doesn't allow a sequence of a "g" followed by an "s." You make them both voiceless. You change the "g" so that it's "k," and Latin spells the sequence "ks" with an "x." Or "nox," yeah, that would end in two stops followed by an "s." A "k," a "t"-- a "k" sound, which is spelled in Latin with a "c," and a "t" and then an "s." And then you simplify that by getting rid of the "t" in between. Or "vox," which is, well, basically, just "vocs," just adding "s" to the stem. So Mrs. McNair could have made my life easier if she had been willing to tell me about that. So there have to be sound changes. If you're willing to think about the third declension nouns this way-- [COUGHING] excuse me-- you can think of them as just having a nominative singular suffix "s" and some sound changes that obscure the fact that the nominative singular is really not so complicated. She would have had to teach us a bunch of sound changes, but these are some of them. Or here's another example of this kind. We had a problem set earlier on Inupiaq, which is one of a number of languages that are all related to each other. They're all members of what is still, for some reason, called the Eskimo family of languages, where I say "still" because the word "Eskimo" is not well-regarded among the people to whom it refers. It's a name for them that the Cree came up with. 
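Coming back to the Latin pattern for a moment: the "stem plus s plus sound changes" story can be sketched as a tiny derivation, using the three stems from the lecture (with "c" standing for the [k] sound). The rule formulations here are simplifications tailored to these words, not a full account of Latin phonology.

```python
# Sketch of the idea that Latin third-declension nominatives are just
# stem + s plus regular sound changes. "c" = the [k] sound.

def nominative(stem):
    form = stem + "s"
    form = form.replace("gs", "cs")   # devoice g before s (g -> [k])
    form = form.replace("cts", "cs")  # simplify the cluster: drop the t
    form = form.replace("cs", "x")    # Latin spells [ks] with the letter x
    return form

for stem in ["reg", "noct", "voc"]:
    print(stem + "-", "->", nominative(stem))
# reg-  -> rex,  noct- -> nox,  voc- -> vox
```

So the genitive-stem view really does let you compute the nominative, which is why you can predict "rex" from "regis" but not the other way around.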
There are debates about what it means. But it doesn't matter what it means. They don't like being called Eskimos, so it's unfortunate that linguistics continues to refer to their language family as the Proto-Eskimo family. Here are some words in this language family. These are proto forms, and so I should really have stars on them. So "iglu," which is a word you may have heard-- it's the word for house. So all of us live in igloos. We all grew up in igloos, as far as the language is concerned, or "tuma" or "tavsi." Here are some words. And there's a very general plural suffix, "t," which gets added. The proto language for this language family had-- let's see now-- four vowels. So there was "a," "i," and "u," and there's also schwa. You can see there's a schwa at the end of the word for "footprint." And then in the history between the proto language and modern Inupiaq, two sound changes took place. First, "t," like the "t" of the plural, but not just the "t" of the plural. It was a general sound change. Whenever you had a "t" that followed the vowel "i," the "t" underwent what's called palatalization, cross-linguistically a very common sound change where a "t" becomes a "ch" around a front vowel. We had something like this happen in English several times. It's why our word for cheese starts with a "ch" sound. It comes from Latin caseus, which started with a "k" sound. So "t" became "ch" after the vowel "i." That's what you're seeing at the end of the word for "belts." Regular sound change. And then another sound change: Inupiaq has a three-vowel system. It only has "a," "i," and "u." They got rid of schwa. And in particular, they changed schwa to an "i." But these sound changes took place in this order with the consequence that if you want to learn Inupiaq now, what you're taught is, well, the plural suffix is a "t," except after some nouns, but not all nouns, that have an "i" at the end of them, where the "t" becomes a "ch." 
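The two Inupiaq changes, applied in the order just described, can be sketched as ordered rules. In this toy rendering "E" stands in for schwa and "C" for the "ch" sound; the proto forms are rough ASCII versions of the ones in the lecture, not careful reconstructions.

```python
# Sketch of the Inupiaq ordering: palatalization of t after i happened
# BEFORE schwa became i, so schwa-final nouns keep a plain t in the plural.

def palatalize(form):
    return form.replace("it", "iC")  # t -> ch after the (original) vowel i

def schwa_to_i(form):
    return form.replace("E", "i")    # schwa merges with i

def inupiaq(form):
    return schwa_to_i(palatalize(form))  # the historical order

print(inupiaq("tavsi" + "t"))  # "belts":      palatalized plural
print(inupiaq("tumE" + "t"))   # "footprints": schwa arrived too late to
                               # trigger palatalization, so plain t survives

# Reversing the order would palatalize both plurals -- the unpredictability
# learners face comes precisely from the historical order.
print(palatalize(schwa_to_i("tumE" + "t")))
```

On the surface both nouns now end in "i," which is why a learner just has to memorize which "i"-final nouns take "ch" and which take "t"; the rule system underneath is perfectly regular.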
And you just have to learn which nouns that end in "i" change the "t" to a "ch" and which nouns that end in "i"-- the vowel that's written with the letter "i"-- don't do that. So "belt," you just have to memorize, that's the kind of "i" final noun where the plural suffix is a "ch." "Footprint," that's the kind of "i" final where the plural suffix is still a "t." This situation, which is, of course, distressing to people who want to learn Inupiaq, is the consequence of these sound changes occurring in this order and creating this kind of opacity. So what's really going on is, well, there used to be another vowel. And we got rid of that vowel, but we were already committed to having "ch" in certain places and not others. I should say there's now dialect variation within Inupiaq. There are dialects of Inupiaq that said, no, look, this is silly. We're going to change the "ch" in "belts" to a "t." So they still have "t" becoming "ch" in some other places, but specifically the plural has gotten regularized so that it's a "t" everywhere. Consequence of a history of sound changes is sometimes opacity. It's kind of like the Lardil example that I showed you before. You have a change from A to B, and you have another change from C to D. And the consequence of the change from C to D would create new environments for A to B, but A to B has already happened. It doesn't happen again. So as you are learning languages, if you encounter facts like this, entertain the possibility that what's going on is opacity because of sound change occurring in a particular order. I don't know whether you'll consider-- whether you'll have-- whether you'll get any comfort from that thought or whether it'll just annoy you, but that's sometimes the consequence. Yeah, Lardil. We just did Lardil. Oh, yeah. Here's another example of the same kind of thing. Passamaquoddy is an Algonquian language spoken up in Maine. 
It underwent a sound change that deleted odd-numbered short vowels in words, depending on the consonants around them. It's called syncope. If you started with a word that meant "I hook a fish," that would have been something like "nuh-puhteeheek." And what you did was to get rid of the first and third vowels in that word. So in modern Passamaquoddy, that's now "npuh-teeg." So you've gotten rid of the first vowel and the third vowel in the word. The "nuh-" at the beginning of that is a prefix that agrees with the subject, "I." If you get rid of that prefix, like if you want to say he or she hooks a fish, well then you don't have that prefix and that changes the count. So original "puh-tee-hee-geh," again, you've gotten rid of the first and third vowels in that. So in modern Passamaquoddy, it's "tee-gee-geh." So Passamaquoddy speakers, if they want to learn Passamaquoddy, they have to learn if you add the prefix, you sometimes change which vowels are where. So "I hook a fish" is "npu-tig." But "he or she hooks a fish" is "tee-hee-guh." There's a vowel between the "p" and the "t" in the "I" form, but not in the "he or she" form. There is a vowel between the "t" and the "h" in the "he or she" form, but not in the "I" form. If you look at the online Passamaquoddy dictionary, you see lots of examples of this kind of thing. Verbs are listed in their third-person form, but then there will be a listing for each verb. This is what it looks like when you add a prefix. And it's sometimes radically different as a consequence of this sound change. So for example, here's another instance of the same thing. "I'm sorry about it" was originally "nuh-muh-zah-kay-in." Getting rid of the first and third vowels in that would have given you "nmuh-skey-in," which is the modern way to say "I'm sorry about it." If you don't add the "nuh" prefix, if you want to say "he or she is sorry," you're going to get rid of the first and third vowels. 
If nothing else happened, that would give you "sah-key-oo." But in fact, "n" next to "s" tends to become a "p." So the word for "I'm sorry about it" is "nmuh-skey-in," but "he or she is sorry" is "psah-key-oo" with the consequence that these look so different from each other that when I have pointed out to Passamaquoddy speakers that they are the same word, they're astonished. They're like, oh, yeah, right. I guess they are, even though they don't look all that much like each other anymore. Now, the Passamaquoddy system looks like the result of a stress system. So again, the rule is get rid of the odd-numbered short vowels in the sequence. And that's the kind of rule that makes sense if you think, yeah, it had a stress system that stressed the even-numbered vowels, right? So "I hook a fish" was "nuh-put-tuh-heek." And then they got rid of the first and the third vowels because they weren't stressed. And "he or she hooks a fish" was "puh-tee-hee-geh," and you got rid of the schwa in the first one because well, it wasn't stressed. Whether you get rid of vowels or not depends partly on what consonants are around them. That's why you're not actually getting rid of the third vowel in this one. You get "tee-hee-geh." So if you thought of Passamaquoddy as having stress on its even-numbered short vowels, then you can understand why it's getting rid of its odd-numbered short vowels. It's getting rid of them because they're not stressed. And Passamaquoddy does have relatives that have stress systems like that, where you stress the even-numbered vowels. But that is not Passamaquoddy stress system anymore. So Passamaquoddy underwent all of these sound changes because of the stress system that they used to have, that they used to share with some of their relatives. And then they changed their stress system. Their modern stress system goes "Stress the first vowel and stress every other vowel counting backwards from the end." 
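The syncope rule itself-- delete the odd-numbered short vowels-- can be sketched like this. The segmented forms below are invented ASCII stand-ins (long vowels marked with ":"), chosen only to show how adding a prefix shifts the vowel count; they are not real Passamaquoddy orthography.

```python
# Toy version of Passamaquoddy-style syncope: delete the odd-numbered short
# vowels, counting left to right. Long vowels (marked ":") don't count and
# don't delete; consonant effects from the lecture are ignored here.

SHORT_VOWELS = set("aeiou")

def syncopate(segments):
    out, count = [], 0
    for seg in segments:
        if seg in SHORT_VOWELS:
            count += 1
            if count % 2 == 1:   # 1st, 3rd, 5th ... short vowel: delete
                continue
        out.append(seg)
    return "".join(out)

# With the prefix, short vowels 1 and 3 go; without it, the count shifts
# by one, so a different set of vowels survives.
with_prefix = ["n", "e", "p", "e", "t", "e", "h", "i:", "k", "e"]
without     =      ["p", "e", "t", "e", "h", "i:", "k", "e"]
print(syncopate(with_prefix))
print(syncopate(without))
```

In the prefixed output a vowel survives between "p" and "t" but not between "t" and "h"; in the unprefixed output it's the reverse-- the same off-by-one effect the lecture describes for "I hook a fish" versus "he or she hooks a fish."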
So you get words like [INAUDIBLE], which is "he or she talks like that," where you're stressing the first vowel and the second vowel. First vowel because you always stress the first vowel, and the second vowel because it's two vowels back from the end. So every other vowel, starting from the end. If you make it one syllable longer, then you're going to stress the first and the third vowel-- "wee-guh-west-too," "He or she talks while walking backwards." "Quiguh-west-too-bin," "You and I like talking." So stress is on the first syllable, and it's on the even-numbered syllables counting backwards from the end of the word. So they've innovated this new stress system after having this older stress system which created all of these alternations in the positions of vowels. There's a reason that Passamaquoddy is difficult to learn. There are several reasons. This is one. So it used to have a different stress system, during which the syncope rule applied. And now it has the system it has today. Because I think we have enough time, I want to end today by heaping scorn on a couple of things-- two bad ideas. So one of the reasons that I'm talking to you about historical linguistics-- there are several. One is that it's fun. It's kind of interesting to try to trace back these sound changes. But another is that historical linguistics is the kind of thing that is sometimes done badly. So you'll see in the popular press people saying things like, hey, this language is related to that language, or this language has borrowed this word from that language. And if you have any skepticism about that, what you want them to do is show you regular sound correspondences or regular rules for how words are related to other words. 
So there was a popular book for a while that proposed that the Native American languages were all descended from Chinese, that the Mandarin-- the Chinese had encountered North America first, and that their language had taken over North America, and that all of the modern languages were descended from Chinese. And the examples were things like in modern Mandarin, there is a greeting, "nihao." And hey, if you watch Hollywood movies, you will hear Native Americans greeting each other with "how." And there, you know, so clearly these are related to each other. "How" actually is a greeting in Lakota. It's a greeting that Lakota men use to greet other men. It's not related to the Mandarin expression "nihao." And the reason I can say that with so much confidence is that, well, no one has ever found sound correspondences or anything like that relating anything else in Lakota to anything else in Chinese. There are slightly less silly claims that people have made using what's sometimes called megalocomparison. So let's skip-- so glottochronology-- I'll leave this on the slide, but glottochronology is an attempt to sort of carbon date language splits. So the idea is you take a list of basic vocabulary. You figure out how many cognates the two languages share on the list, and you start with the assumption that cognate loss happens at a constant rate. This is the figure that has been offered by glottochronologists. And then you do some math. And this is something that you see people seriously saying. So if you look in-- historical linguistics is usually not done by linguists. It's often being done by biologists. They'll say, we know that these languages are related at a time depth of X. And what they mean is we've done glottochronology, which starts from the assumption that you can do this kind of carbon dating of languages by looking at cognate lists and figuring out the rate at which two languages cease to have cognates with each other. 
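To make the recipe concrete, here is the standard glottochronological formula as a sketch (the function name is mine): if each language retains a fraction r of its basic vocabulary per millennium, two languages that split t millennia ago should share c = r^(2t) cognates, so t = ln(c) / (2 ln(r)). The default r = 0.86 is the per-millennium retention figure usually cited for the 100-word list.

```python
import math

def split_depth_millennia(shared_cognates, retention_rate=0.86):
    """Glottochronology's 'carbon dating' of a language split.

    shared_cognates: proportion of the basic-vocabulary list on which
    the two languages are cognate (0 < c <= 1).
    retention_rate: assumed constant fraction of the list each language
    keeps per millennium -- the assumption the method stands or falls on.
    """
    return math.log(shared_cognates) / (2 * math.log(retention_rate))
```

Two languages sharing 74% cognates (0.86 squared) come out as having split one millennium ago; as the next paragraph notes, the constant-rate assumption this formula encodes is exactly what is wrong with the method.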
The problem is that the base assumption-- that cognate loss happens at a constant rate-- is false. And no one has figured out a way around that. There doesn't seem to be any way to correct for it. So if you see people using glottochronology, you should treat them with skepticism. The other bad idea that's in here is megalocomparison. This is attempts to reconstruct Proto-World, which were very much in vogue at a certain time. People thought if we do lots and lots of this cross-linguistic comparison, we'll be able to figure out the ancestor language for all of the languages in the world. The work has never involved discovering general sound changes that relate languages to each other. It always involves going back to saying, "Hey, look, that word in that language looks kind of like that word in that language! I bet they're related." That's how that work has always gone. So those are two ideas-- bad ideas that you should be skeptical about. This has been my public service announcement for today. Are there any questions about either of those bad ideas? I put more detail about the bad ideas in the slides. OK, cool. Go forth, and I will see you next week.
MIT_24900_Introduction_to_Linguistics_Spring_2022 | Lecture_15_Syntax_Part_5.txt | [SQUEAKING] [RUSTLING] [CLICKING] NORVIN RICHARDS: Sorry. So here's syntax five. If you're wondering whether we're ever going to stop doing syntax, we are, but I'm not sure when. I think we have at least one or two more days of syntax. So if you're looking at the syllabus to try to figure out where we are, just don't do that. I'll try to update the syllabus soon to give you some idea of what's going to happen in the future. Last time, we got started on something that I promised you at the beginning of the semester. Sorry. I'm just going to close this real quick. I said at the beginning of the semester that one of the kinds of things we were going to find out was that, although there were lots of kinds of languages out there in the world and we were going to have a chance to look at some of them, that one of the things that we find as we study these-- I turned out not to be able to close that door-- Let me try that again. One of the things that we find as we study languages of the world is that, although there are many languages out there-- oh, well-- I've been defeated by a door. This is very sad. That although there are many languages out there, there are kinds of languages that are easy to imagine but that don't, in fact, exist. And there's a hypothesis about what's going on, which is that part of being a human being is having the kind of mind that can construct language in some ways, but not others. That is, given a certain set of data, the human mind jumps to certain conclusions but not others about this system that's underlying those data. And one of the things that we're trying to find out as we study linguistics is about what it is about human minds that makes them work this way. What are the rules that our minds are using to make sense of the limited data that we have?
We ran very quickly through one argument of that kind last time because we were running low on time, and I thought, maybe we have time for this argument. And, looking back, we probably didn't. If you are looking at the-- if you get a chance to look at the video of that, if anybody wants to, you should look at it at slowed-down speed to see if you can get it to make sense. So I was going to go through that argument again and some other arguments a little more. That's going to be a theme for today. I will show you some areas in which languages vary, and then we will imagine kinds of languages, and we'll discover that those kinds that are easy to imagine don't happen. So we're going to look at that. This is 24.900. If you go on and take more linguistics classes, you will see more phenomena of this kind and more theories about why languages behave this way. But trying to understand why languages behave this way is one of the central occupations of linguistics, trying to understand basically what human minds are like. So last time I showed you the slide-- this was a slide for the question, "What will Mary write?" I was trying to convince you that, in a sentence like this where we think that "write" selects an object-- it has the option of being transitive-- you can say "Mary is writing a novel." And we thought that when you had a selection relation between the head and some phrase, that the phrase needed to be the sister to the head. At least, if there was only one thing that it was selecting. We spent a lot of happy time thinking about what happens when a head is selecting more than one thing. And I promised you that we would talk more about that later. But that promise did not apply to today. So today, yeah. If a head is selecting only one thing, that thing needs to be the sister. And then we looked at questions like this, and I tried to convince you to be upset.
So the word "what," which seems to be the thing that's being selected by "write," it seems to be the thing that's getting-- that is the object of "write"-- isn't anywhere near "write." What I said was there is a standard way to deal with this problem, and we're going to gradually accumulate evidence for this idea. But for today, I'll just show you the idea. The idea is that, indeed, "what" does start off as the sister of "write," goes right where the projection principle says it should. It's merged right away. As soon as "write" is introduced, the first thing you merge "write" with is something that "write" is selecting-- its object. But then there is an operation that obscures that, destroys that relation, takes that thing which starts off as the sister of "write" and moves it to this higher position in the clause. I said that it moved to the specifier of CP. That is that position that's there on the tree. So it becomes the daughter of the CP node. CP, maybe you remember, is the projection that's headed by words like "that" and "whether" in sentences like "I think that it is raining," talking about that as being the place where we would put words like that. Does this all sound familiar and not terribly upsetting? Is there anything in any of this that makes any of you want to demand further explanation or even ask nicely for further explanation? Anything else like that? No? OK. So one of the-- so wh-movement. Cross-linguistically very common. Lots of languages out there that have this operation that takes these wh-words, these question words with meanings like "what" and "who" and "where" and "why" and puts them at the beginning of the sentence. So here are examples from English and Tagalog and Finnish. The language that you're working on could be a kind of thing that you could try to find out "How does it do its wh-questions?" Because there is cross-linguistic variation-- we said this last time too-- with respect to how you form the wh-question.
So there are languages, like English, which do wh-movement usually. We talked about special conversational circumstances under which you might leave a "wh" in-situ. So there are times when it's possible to say "Mary wrote what." But it specifically has to involve either not having heard you say exactly what Mary wrote or being amazed. So you say, "Mary wrote an Estonian novel." And I say, "Mary wrote what? I didn't know she spoke Estonian." But except under those special circumstances, the normal thing to do with wh-words in English is to put them at the beginning of the sentence. There are other languages, like Mandarin or Bafut-- which is a language of Cameroon-- or Hopi-- which is a language of the American Southwest-- many, many languages which don't do that. They have what's called "wh-in-situ." That is, the wh-phrases just stay where they would normally be in the language in question. So Chinese word order is not unlike English word order. You've got the subject and then the verb and then the object. And the word for "what" there is sitting right where an object would normally be if you wanted to say "Zhangsan bought an Estonian novel" in Mandarin-- I have no idea how to say that. I don't speak Mandarin well at all. The words for "Estonian novel" would go right after "bought." It would go right where "what" is. Yep. Same deal for Bafut and Hopi. Hopi word order is different. The verb is at the end of the sentence, but the generalization about all these languages is that the wh-phrase is just staying where it is. There is no wh-movement going on. And what I said last time-- and this is true-- is those seem to be the options, basically. You either put your wh-phrase at the beginning of the sentence when you do wh-movement or you leave it where it is. You could imagine other things that you would do.
You could imagine a language where the wh-phrase would move to the end of the sentence or a language where the wh-phrase would have to be in the middle of the sentence-- you'd count the words in the sentence and make sure it was evenly in the middle of the sentence. There aren't any languages like that. Now, so here's an example. There are no languages in which the normal way to say "who has eaten the cookies" is "has eaten the cookies who?" where I've put in all these silly diacritics to emphasize the fact that I'm making this up. There are no languages like this. This is not a real language. So I'm making up a conlang. Yeah? AUDIENCE: Are there any languages in which [? move ?] to the beginning of the sentence or just the in situ [INAUDIBLE] are they both [INAUDIBLE].. NORVIN RICHARDS: Yes. Yes. Excellent question, and I should have said that. There are languages that allow you to do either one. French is a language like that, for example. And, in fact, it's not uncommon for languages like that to-- I mean, the first thing to say about them is that they have both options. It's not uncommon for one of the options to be more common or to be less restricted. So French allows you to do wh-in-situ, but not always. There are restrictions on it, and people get very interested in the restrictions, trying to figure out when do you have to wh-move in French. But yes. The short version of the answer to your question is yes. Other questions about this? OK. So I started this off by saying we're going to spend some time talking about languages that don't exist. Here's a language that doesn't exist. There aren't languages that take wh-phrases and move them to the end of the sentence. And if you're a syntactician, you want to know, well, why? Why aren't there any languages like that? It's not hard to imagine them.
Now, when I am trapped on an airplane and the person next to me wants to talk to me, and, for some reason, they want to continue to talk to me after I tell them that I'm a theoretical syntactician. If I tell them this, if I tell them about this kind of contrast, they sometimes respond by saying-- sometimes they say, "Oh, yes. That's very interesting...." And then they start reading their novel. But sometimes they respond by saying, yeah, but look. In a wh-question, the wh-word is the important thing. So maybe it's just-- if you ask, why aren't there any languages like this, maybe it's just if it's important, then you want it to be first, something like that. The theory doesn't usually get any more sophisticated than that. So if you're going to do anything with a wh-phrase, maybe it's not so surprising that it's first. And so I actually want to show you a kind of apparent counterexample to what I've just claimed. Because the fact is that there are languages that arrange for the wh-word to be at the end of the sentence, it's just that they are always wh-in-situ languages. Let me explain to you what I mean by that. So in order to tell you what I mean by that, I first have to tell you another point of variation between languages. There are languages in which the normal way to form wh-questions is it involves what's sometimes called a cleft. It's as though you have to say in these languages "What is the one that you bought?" or What is the thing that you bought?" Some of the languages that are like this only do this for some of their kinds of wh-questions. Some of them do it for all other kinds of wh-questions. So these are languages in which you can't literally say "What did you buy?" You have to do this more complicated structure. I'm not going to try to give you a tree for this more complicated structure. But something like this. Something that involves more than one clause, basically. So you have a "what" at the beginning of the sentence. 
But inside that sentence, there's this other clause "that you bought." That's a CP. Look, there's a C at the beginning of it-- that. In English, we can ask questions like, "What is the one that you bought?" But in Tagalog, for example, you have to. So you can't just say, "What did you buy?" in Tagalog. You have to say, [TAGALOG] where the [TAGALOG] is literally something like "the (____) you bought." So that [TAGALOG] is the kind of thing that only ever goes before noun phrases. So this is the Tagalog for "What was the thing that you bought?" or "the one that you bought." And in Tagalog, the word for "one," the word for "thing," is null. It's not pronounced. I think I got asked last time, are there ever things in trees that are not pronounced? And here's an example. The Tagalog word for "one" is such a thing. AUDIENCE: Question. So there are [? a ?] type of sentences which is more like, if you didn't hear something, you want to clarify. For example, you want to say the question of, what? NORVIN RICHARDS: Yes. Yeah. AUDIENCE: And I see that there's slightly different structure. It's very [INAUDIBLE] to say, [INAUDIBLE] or something. NORVIN RICHARDS: Yeah. AUDIENCE: But how does that fit with the fact that wh [INAUDIBLE]. NORVIN RICHARDS: Is never to the right? Yeah, so that's a really nice example. The fact about questions like that is that the wh-word is sitting where it would normally be. It's not moving at all, because it doesn't have to be at the end of the sentence. If I say, "Mary bought an Estonian novel in Paris," your response to that could be, "Mary bought what in Paris?" So the word for "what" isn't going at the end of the sentence. In fact, it can't. It's not the natural way to-- that's the natural way to respond to that. You don't say, "Mary bought in Paris what?" So the word "what" isn't going at the end of-- it happens to be at the end of the sentence in your example, but what it's really doing is just not moving. It's just staying where it is.
That's a really nice point, which I was planning to make later, but thank you for making it now, that we have to draw this distinction between just sitting there and moving to the right. Yeah, so in those kinds of questions, it's not moving to the right. Nice, nice example. Does that answer your question? Yeah. Yeah, no questions? OK. OK, so this is the setup for this point. There are no languages that move wh-words to the right, but I'm going to show you a weird wiggle that obscures that fact sometimes. So first, there are languages like Tagalog in which you can't ask, "What did you buy?" You have to say something like, "What was the thing that you bought?" And I just have to ask you to trust me that that's the best analysis of this Tagalog question. I work on Tagalog. I'm happy to talk about this at great length. Talking about Tagalog is one of my favorite things to do. So don't ask me any questions about this Tagalog question. That's very dangerous. If you want to learn anything else today, we have to get away from this slide, OK? All right, now, imagine-- so Tagalog is a language that has these clefts. It has these bi-clausal ways of asking questions, wh-questions. And it has obligatory wh-movement. So the Tagalog wh-words, they're just like the English wh-words. They have to go at the beginning of the sentence. Yeah, now imagine what a language would be like, though, if it had clefting and it had wh-in-situ. Well, then, it would have questions like this. It would say things like, "The one that you bought was what?" Or, "The one that ate the meat was who? That is, you'd have that embedded clause, the embedded clause of the cleft, and then the wh-word-- which is kind of the predicate, the thing that's being described as the thing that this embedded clause applies to-- would end up at the end of the sentence-- or at least it could, if that's where predicates went in the language in question-- but not because the wh moved there, just because it was in situ. 
It would be like Vlada's example of a second ago, right? So Vlada was saying, here's an example where the wh-word is at the end of a sentence. And I was saying, yes, but it's by accident. It's really just staying in situ. It's not moving to the end of the sentence. Does that make sense? I'm saying all this because there are languages like this-- languages where the wh-word is at the end of the sentence, not because it's move to the right, but because, well, the language has clefts, and the wh-word always ends up at the end of the sentence not because it moves there, but because it's in situ. Kabardian is a language like this. And if you thought I was going to pronounce this Kabardian example for you, you have another think coming. I have no hope of pronouncing this Kabardian example. But this is how you say "Who ate the meat?" in Kabardian-- the only thing I know how to say in Kabardian where I don't in fact know how to say it. I only know what it looks like. I can't pronounce it. And it is literally, "The one who ate the meat is who?" Like that's how you say that in Kabardian. So there actually are languages that put their wh-words at the ends of sentences. Kabardian is one. But the structure never involves wh-movement to the right. It's always clefting with the wh-in-situ. The conversation with the person in the airplane is always over by this point. But if they would listen to me, this is what I would tell them. It's yeah, you don't want a story that says, wh-words-- it's natural for wh-words to be at the beginning of the sentence. First of all, you then have to tell me what the heck you mean by that. But there actually are languages in which the wh-word always ends up at the end of the sentence. It's just that it doesn't move there. It's always in situ when it's there. Faith? AUDIENCE: So I'm thinking about in Spanish, how a direct object pronoun can go before or after a verb. NORVIN RICHARDS: Yeah. 
AUDIENCE: But there's always, like, leftward wh-movement, because you could say, look, "Lo quiero decir" or "Quiero decirlo," but it's always going to be "¿Qué quieres decir?" Is there a reason for that? NORVIN RICHARDS: So Spanish is a language that has been described as having wh-movement. Usually, there are supposed to be contexts in which it's OK to say the Spanish equivalent of "and you bought what?" where the wh-word ends up at the end of the sentence. And those are supposed to not be exactly like the English cases of being astonished. What you really wanted to know about was the pronoun-- like why can you say, in Spanish, either "He wants to say it," or "It he wants to say," right? So Spanish has these words-- they're called "clitics" in the literature. It's common for pronouns to be like this in a lot of languages where they have special rules about where they go in the sentence. I have only shown you one kind of movement, wh-movement, but what you're pointing out is that there are other kinds. So Spanish is a language that more or less has to move its wh-phrases, but its clitics have special conditions on where they go that are distinct from its-- from a-- It's important for your example that it involves an infinitive, I think-- that if you wanted to say, whatever, "he said it," that you would say-- what would you say-- "it"?-- "Lo dijo." Yeah. So you wouldn't say "Dijo lo." AUDIENCE: Yeah, I think that is a thing in Castilian Spanish from the 1600s, is the [INAUDIBLE].. NORVIN RICHARDS: It's a point of variation between Romance languages. So there are Romance languages-- and languages, generally. There are Romance-- so these are languages in which the clitic wants to attach to the verb, and whether it goes before or after the verb often depends on whether the verb is an infinitival verb or not. And for Spanish, I think, the rules are it goes before tensed verbs and after infinitival verbs.
So your first example had both an infinitive and a tensed verb in it, and the clitic had two options as to where it could go. If we were developing a complete theory of clitics-- which, just to be clear, we are not-- we'd want to have an account of that, figuring that out. The phenomenon is called "clitic climbing." The clitic that belongs to the embedded verb gets to climb up and attach to the first verb. Was there another hand? I thought all of you had hands, but I guess they're all sitting peacefully. OK, cool. OK, so language universal: there is no true wh-movement to the right. There is Kabardian, so wh-in-situ combined with the obligatory clefting, with the consequence that the wh is at the end of the sentence. But there's never a wh-movement to the right. If you're a syntactician, you look at this fact about the world, and you think, well, why? Why is wh-movement, why is the specifier of CP the-- we go back to the tree over here, this tree-- that wh-movement that lands in the specifier of CP that makes the noun phrase, what, a daughter of CP? Why does that daughter of CP always precede C-bar? Why can't it follow C-bar? There's no language anywhere on Earth where it follows C-bar, and it would be nice to know why. So there's syntactic work on why, which I will not try to show you. If you would like to know more, take more linguistics classes. OK, all right. And then, yep-- and then, and this is the part that I went through too fast last time. And so I'm going to go through it a little more slowly this time. I don't know. We'll see. The generalization is that languages vary, but not completely: it's not the case that every imaginable kind exists. There's a phenomenon, wh-movement, and there's more than one kind of language. There are languages that do wh-movement like English, and there are languages that leave the wh in situ, like Mandarin or Kabardian, but that's it. Those are all the kinds that there are.
And there are imaginable kinds, like the mirror image of English, where the wh-phrase goes at the end, which don't exist. And we'd like to know why. I want to show you another example of the same kind. I showed you this last time, but I'll show it to you again. There's also variation with respect to how you do multiple wh-questions, so wh-questions where there's more than one wh-word in the sentence. First of all, there are languages that just ban them. That's one kind of language. So Irish doesn't allow them. Italian, at least until-- apparently, young Italians are beginning to develop multiple-wh questions. [LAUGHTER] But older Italians don't have them. Kids today and their multiple-wh questions! And then, so for languages that do have them, they're languages like English in which the rule is, English has wh-movement, as we know, and if you have more than one wh-word, the rule for English is that you move one of them. So you ask questions like, "What did you give to whom?" Or "What did you give to whom when?" If you have three wh-phrases, you're only going to move one of them. That's the generalization about English. And then-- I told you this last time-- there are languages like Bulgarian and Mohawk in which all of the wh-words move. So in Bulgarian, you must say, "What to whom did he give?" That's that second question. Or in Mohawk, you say things like, "Tell me who what bought." You can't leave either of the wh-phrases in situ. And so here's the place where I promised you, there's more than one kind of language, but there isn't every imaginable kind of language. So there are languages like English that move one wh-phrase. There are languages like Mandarin that don't move any. And there are languages like Bulgarian and Mohawk that move all of them. But 0 and 1 and "all" are all of the options for moving wh-phrases in multiple wh-questions. And you all know more math than I do. You know of numbers besides 0 and 1 and, well, "all," which I guess isn't a number.
So you could imagine a language where the rule is, move any number of wh-phrases up to a maximum of 2, right? Or move any number of wh-phrases, I don't know, as long as it's prime, or whatever-- as long as it's one of the first 10 digits of pi. All kinds of imaginable languages that you could imagine, and none of them exist, right? So there aren't languages out there in which you say "Who gave a book to Mary?" and you say "Who what gave to Mary?" but when you get to three, you have to say "Who what gave to whom?" That language doesn't exist. It's not hard to imagine. You could write a grammar for it. It doesn't exist. You don't find it. And then, this is where I gave you the parable of the function, which was meant to be a parable about what life is like when you are a baby. So imagine, I said, that you are a Bulgarian baby, and you're hearing your parents utter multiple wh-questions. There's some limit on the amount of data that you get-- not just if you're a Bulgarian baby listening to multiple wh-questions, but in life, right? There is some-- it may not have seemed like it at the time, but there is some maximum amount of stuff that your parents said to you when you were growing up, and the stuff that you heard-- not just your parents, but the other people that were talking to you when you were growing up and acquiring your native language. There is some body of data-- maybe it was fairly large-- of things that people said to you. Maybe fairly large, but it was finite, so there was some maximum number of wh-words in the questions that you heard growing up. And the thing about finite data-- so the point of the parable of the function-- is here, I am giving you finite data. The function returns 1 for 1, and 2 for 2, and 3 for 3, and 4 for 4, and you have no idea at all what it should return for 5. Given what I've given you so far, you might hope that you're living in a just and merciful universe where data work the way they should.
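The parable of the function is ordinary polynomial interpolation: infinitely many functions agree with any finite table of data. A sketch (my own formulation, not from the lecture slides): add to f(x) = x a correction term that vanishes at every observed input, scaled by an arbitrary coefficient k.

```python
def f(x, k=0):
    """Fits the data f(1)=1, f(2)=2, f(3)=3, f(4)=4 for every value of
    k, because the product term is zero at x = 1, 2, 3, 4. At x = 5 the
    product is 4*3*2*1 = 24, so f(5) = 5 + 24*k can be made anything."""
    return x + k * (x - 1) * (x - 2) * (x - 3) * (x - 4)
```

With k = 1 this gives f(5) = 29; every choice of k is a different "grammar" consistent with the same finite data.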
But they're just-- as a matter of logic or a matter of math, there are no restrictions at all on what the consequence of giving this function 5 could be. So I showed you one example, one function for which the output would be 29, and I could multiply that first thing, the thing that comes right after the equals sign, I could multiply that by any number at all. The consequence of f of 5 could just be anything. And life is kind of like that. This is sometimes called the problem of induction when people talk about it in the context of science. So you're going to make some number of observations of the world, and you'll maybe see the same thing over and over again. You'll see that, as you drop a weight, that the weight falls down, and not sideways or up. And you see that over, and over, and over again. And eventually, you start to hypothesize that down is the direction that they're going to fall, that every crow you see is going to be black, that every swan you see is going to be white, because you keep seeing that over and over and over again. But it's kind of like concluding that f of 5 is going to be 5. It should work that way given everything that you've seen, but it might not. Maybe the next weight you drop will fall upwards. And so part of science is saying, we're going to draw these laws. Given the inevitably finite data that we've got, we're going to sketch these laws about what we think is going to happen. And then we go on doing science and refining our laws. Raquel, are you-- yeah? AUDIENCE: The thought that I am thinking is something along the lines of like, is it possible that part of being human is developing languages that are kinder than giving you "f of 5 is 29," and that maybe it's like you could give languages a weird difficulty coefficient, and then the coefficient, if it's really low, tells you that it'll probably be something intuitive like five. If it's really high, then you can't rely on the language to be nice to you when it could be 29? 
NORVIN RICHARDS: Yeah, that's-- you're saying it very well. Maybe another way to say it, to repeat what you just said, is yeah, human minds are set up in such a way that f of 5 is 5. Human minds are set up in such a way that for multiple wh-questions, there are only a few options. Maybe I'm learning Italian and multiple wh-questions are just banned. Maybe I'm learning Mandarin and wh-phrases all stay in situ. Maybe I'm learning English and one of them moves. Maybe I'm learning Bulgarian and all of them move. And that's it. Those are the only options there are. Which means if you're a Bulgarian child, and you're growing up, and you're hearing your parents ask multiple wh-questions, there's some maximum number of wh-words you ever heard them say at any point in their life. And that number-- suppose that the most inquisitive your parents ever got in your life was a two-wh-word question. I can say, incidentally, multiple-wh questions-- I wrote my dissertation about multiple-wh questions. They're extremely interesting, but they're not common. If you spend some time listening, you'll spend a long time before you hear somebody say this in the wild. It's not a common kind of question to ask. I happen to know, for example, that there are no multiple wh-questions in the Bible. So I have this project-- [LAUGHTER] --that involves going through the Bible verses. So there's a language called Wampanoag, which is a language that's spoken around here traditionally. The Wampanoag are trying to revive their language, and one of the texts that they have is a complete Bible translation. It was the first Bible published in this hemisphere. It was a Wampanoag Bible that was published here in Boston in the 1600s-- 1663. And so I have been reading the Bible very slowly and carefully, trying to put the grammar of Wampanoag back together and writing a dictionary. And so I really wish that there were multiple-wh questions in the Bible. It would answer certain questions about Wampanoag.
I would really like Jesus Christ to turn to the disciples and ask them "Who bought what?" It would be great. [LAUGHTER] But he never does, never. I have checked. There are no multiple-wh questions in the Bible. And the Bible is quite large, right? There's lots and lots of texts. I know this because I've been reading it. It's taken me years. So these are not common. It's not at all-- it wouldn't be at all surprising if the largest multiple-wh question you ever heard your parents say had two wh-phrases in it-- something like "Who bought what?" or "What did he give to whom?" Maybe you heard your parents at some point say, "Who gave what to whom, when, why?" It's not likely. And so I think I said this last time. You might have expected if children had to consider all of the imaginable grammars that are consistent with the data that they have, you might have imagined that Bulgarian children would have-- Bulgarian adults would have different grammars, that there would be Bulgarians who guessed that the grammar was move your wh-phrases up to a maximum of two, or move your wh-phrases up to a maximum of three, or move your wh-phrases up to a maximum of four. But that's not what we find. So any Bulgarian that you ask, as you ratchet up the number of wh-words in the question, they just always move them all. The only choice they all made was all. Yeah, so here's a case where the data that anybody is exposed to are necessarily limited in a way that means that there are literally infinitely many possible grammars that are compatible with the data that they have. And yet, they all converge on the same answer. So this is a case of what Raquel was just talking about. It's as though being a human being means having the kind of mind that considers some possibilities but not others. So you consider all, but you don't consider up to a maximum of two. That's not a kind of grammar you think about. That's apparently what we're finding. Raising questions like, well, why? 
So what is it about the human mind that makes it like that? That's one of the kinds of things linguists think about. OK, similarly-- and this is echoing a point that Vlada made a second ago-- if you're learning Chinese, if you're growing up learning Chinese, you may hear your parents say something like this at some point. And if you are considering all of the logically possible grammars that are consistent with this, you might have to spend some time thinking, well, am I learning a language that moves its wh-phrases to the right, or am I learning a language that leaves its wh-phrases in situ? And so you might expect that there would be a stage in the life of Mandarin children where they entertained the possibility that wh-phrases move to the right. It would be a cute kid thing to do. Oh, our child is so cute. When she was three, she thought wh-phrases moved to the right. She was saying things like-- AUDIENCE: Aww. NORVIN RICHARDS: Aww, yeah. [LAUGHTER] But no, I mean, Chinese babies are cute like babies everywhere, but they do not make that particular cute mistake. If you've ever been around small children, you know children do make mistakes. They take guesses that are wrong about how the adult language works, and they spend some time making adorable mistakes. But the children do not make that particular adorable mistake because they are human beings, and they know that there are on some level-- their heads are structured in such a way that they cannot entertain grammars where wh-phrases move to the right. There aren't any of those. Part of our job as linguists is to come up with a theory that explains that. And when I say it's part of our job as linguists, I do not mean that I'm now about to tell you the theory. That's an active topic of research, people trying to figure out why grammar works that way. OK so yeah, if you are a Chinese baby and you hear that sentence, you know that you are hearing a wh-in-situ language. 
You don't have to entertain possibilities like a language in which the wh-phrase moves to the right, or, a language in which the wh-phrase is the third word in the sentence. Those are both grammars that would generate that question, but you don't spend any time entertaining those possibilities. Yeah? AUDIENCE: OK, well, this is going to be probably a bit like what if. NORVIN RICHARDS: Yeah. AUDIENCE: Probably never encounter it. NORVIN RICHARDS: Yeah. AUDIENCE: Because you'd think that nowhere does [INAUDIBLE] Chinese babies say, or like, think, I must move word to end. NORVIN RICHARDS: Yeah. AUDIENCE: What if, hypothetically, a baby speaking in an English-- a baby in an English-speaking environment never heard an in-situ-- NORVIN RICHARDS: wh, mm-hmm. AUDIENCE: OK, often-- OK, how do I phrase it. What if the only-- all the questions the baby hears is like, "Who did this," "Who did that," "What happened?" NORVIN RICHARDS: "What happened," yeah. AUDIENCE: Where it could plausibly-- NORVIN RICHARDS: Be in situ. AUDIENCE: Be in situ. NORVIN RICHARDS: Yeah. AUDIENCE: Could they potentially realize, or like think maybe it's not in situ. NORVIN RICHARDS: Nice question. So everything I've told you so far makes it sound like you could fool babies, in theory, in that way. Like you say, it's hard to test because, in fact, I mean you probably can't recruit parents to agree never to ask questions like "What do you want for lunch?" But you're absolutely right that you could imagine children going through, probably, a pretty brief stage where they thought that was a viable option. In a way, your question gets at something else that I said, which is languages-- so I've said there are different kinds of languages. There are wh-in-situ languages, there are wh-moving languages, and-- this was Faith's question-- there are languages that allow either option. That's also a kind of language. And that's all I said about that right. 
You could now ask, does that just mean that if I want to learn English, I must learn it's a wh-movement language? Or does the fact that it's a wh-movement language have anything to do with anything else about English? Is that just a parameter we have to state about English, or does it follow from other stuff about English? And that's also an active research question, is whether we can get away without having to just state those parameters, or whether we can connect that to other facts about English. That would mean that an English-speaking baby would have other sources of information, possibly, besides just actually hearing wh-questions. So just to make it more specific what I mean, so here's something which is not true. If it turned out that every language in which the subject comes before the verb is a language with wh-movement-- now, that's false. So Mandarin is a language where subject comes before the verb, and it has wh-in-situ. But imagine that it turned out that every language where the subject comes before the verb is a language with wh-movement. Then an English-speaking baby wouldn't have to hear any wh-questions at all. They'd just have to hear that the subject comes before the verb, and then they would be like, oh, it must be a wh-movement language. So if it were possible to connect the fact that English has wh-movement to some other fact about English, then babies would have other sources of data. Now, like I say, that theory that I just offered as a toy theory, that's not the right theory. So we have to work harder to find the right theory. But that's the kind of theory that would give you more data, yeah. Yeah? AUDIENCE: So you said babies never consider that it could be [INAUDIBLE] oh, a maximum of two-- NORVIN RICHARDS: Yeah. AUDIENCE: --that move to the left. But how do we know that it's not just like-- that's not the data that the baby would get? A baby that has spent eight or nine months just taking in sound. NORVIN RICHARDS: Data?
AUDIENCE: And before they even speak a sound. NORVIN RICHARDS: Yeah. AUDIENCE: So how do we know that it's just like, oh, this was not part of what was presented as data to the baby? NORVIN RICHARDS: So I don't know if this is going to address your question or not, but let me try. No matter how long we let the baby spend learning Bulgarian, no matter how much Bulgarian they hear, there's some maximum number of words they heard in a wh-question. It might not be two-- it might be three, it might be four-- but it's some number. And so you might have expected that there would be variation among Bulgarians because any number, whatever the number is, it doesn't uniquely pick out a grammar. There's always, actually, infinitely many possible grammars. So if the number that they heard was n, if that's the maximum number, then it could be "Move them all." It could be "Move them all, up to a maximum of n." It could be "Move them all up to a maximum of n plus 1," or n plus 2, or plus 3. Yeah, it could be, again, infinitely many possible grammars. And so you might have hoped, you might have expected, that there would be lots of different adult Bulgarian grammars. But there aren't; there's only one. It's "move them all." Is that getting at your question? AUDIENCE: Probably, yeah. NORVIN RICHARDS: So it just doesn't matter how much data the babies have. It's never enough to uniquely determine what the grammar ought to be. It's got to be the data plus something in the baby's head that says, the only options are don't move any of them, move one of them, or move them all. Those three are the only options because no matter how much data they have, they'll never uniquely determine the right data, the right grammar. Yeah? AUDIENCE: I don't know a lot about this topic, but isn't there some merit in studying constructed languages and the wh-movements that exist there? Because then you have a lot of liberty and get to put it wherever you want, I would imagine? NORVIN RICHARDS: Yeah. 
AUDIENCE: And so you could ask, is that consistent with the order in any [INAUDIBLE]. NORVIN RICHARDS: So I don't know if this is getting at your question either, and you should stop me if it doesn't. No, you should just let me ramble on, and then you should tell me afterwards whether it did or not. That's what you should do. There are experiments in which people try to find out-- in which people ask people to learn languages. So they'll give people languages, and they'll find out things like, if I give you a language that is a possible human language, are you faster at learning that kind of language than you would be if I give you a language that's not a possible human language? So if I give you a language in which the rule is, move all your wh-phrases, how are you at learning that as opposed to a language in which the rule is, move all your wh-phrases up to a maximum of two? And I don't know whether that kind of experiment has been done for this particular problem. But what we would hope is that, yeah, it's harder to learn the languages where it's up to a maximum of two, the languages-- people do do this kind of experiment. The other kind of natural experiment that's been done-- this is a lot more anecdotal-- there's, what are people like this called now? There's a savant, so someone whose mental processes are mostly very slow. He needs a lot of help with daily life. I think he does live by himself, but he has caregivers who come and help him out. But he's a genius at learning languages. And so I don't know his real name, but his name in the literature is Christopher. He's British. And Christopher loves learning languages, does it for fun. You can give him grammars, and he will learn languages immediately. It takes him a day, and suddenly, he can read and write. And he's not good at speaking the languages partly because he learns by reading books, but he can learn to read and write languages extremely quickly.
And they have done some experiments like this with Christopher where you give Christopher languages that are not possible languages to learn, and he can't do that, it turns out. So he can learn languages as long as they are languages that the rules of universal grammar allow. Now, again, I don't know if they've tried "Move all your wh-phrases up to a maximum of two" with Christopher. But that's another kind of experiment that's been done. Yeah? AUDIENCE: How many languages will Christopher have learned by the time [INAUDIBLE]? Because there's maybe a possibility that Christopher unconsciously figured it out that it has to be none, one, or all. That's not from Christopher's experience-- NORVIN RICHARDS: His wide experience with previous languages? So all I can say is that they have given him languages-- they have worried about this. So they have given him languages-- thank you for figuring out the doors. [LAUGHS] They have given him languages that differ from the languages he's learned in some regard. Most of the languages he's learned are various languages of Europe. He's done like 20 or 30 by now. And so they have given him languages that differ from the languages that he's known to find out whether he can learn them just as quickly as languages that-- because they also get, when they do these experiments, they'll give him a language that does obey some rule of universal grammar and another language that doesn't. But you're absolutely right, that's something they have to watch out for. And all I know is that they have been careful about that. Whether they've been careful enough, I can't swear. Yeah, that's a good point. Christopher loves doing this, by the way. This is not a hardship they're imposing on Christopher. This is one of his big joys in life, is learning languages. Yeah, OK. Let me back in? Yes. Yeah, questions about this? OK, all right. All right, so that's it for wh-questions, at least for now.
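The f-of-5 discussion and the Bulgarian "move up to a maximum of n" grammars have the same logical shape, and a short sketch makes it concrete. Everything below is illustrative, not from the lecture: the polynomial is just one function chosen to agree with f(x) = x at the observed inputs 1 through 4 while returning 29 at 5, and the toy "grammars" simply report how many wh-phrases they would front.

```python
# A sketch of the induction problem: rival hypotheses that agree on all
# observed data but diverge on unseen data.

def f_simple(x):
    # The "intuitive" guess: f(x) = x.
    return x

def f_surprise(x):
    # Agrees with f_simple at x = 1, 2, 3, 4 (the product term vanishes
    # there), but gives f_surprise(5) = 5 + 4*3*2*1 = 29.
    return x + (x - 1) * (x - 2) * (x - 3) * (x - 4)

observed = [1, 2, 3, 4]
assert all(f_simple(x) == f_surprise(x) for x in observed)
print(f_simple(5), f_surprise(5))  # 5 29

# The Bulgarian case has the same shape: "move them all" and
# "move up to a maximum of k" (for any k >= n) agree on every
# multiple-wh question with at most n wh-words ever heard.

def moves_all(num_wh):
    return num_wh  # fronts every wh-phrase

def moves_up_to(k):
    return lambda num_wh: min(num_wh, k)  # fronts at most k of them

heard = [1, 2]  # say the most inquisitive question heard had 2 wh-words
g1, g2, g3 = moves_all, moves_up_to(2), moves_up_to(3)
assert all(g1(n) == g2(n) == g3(n) for n in heard)
print(g1(4), g2(4), g3(4))  # 4 2 3 -- the grammars diverge on unseen data
```

The point of the sketch: no finite set of observations distinguishes the intuitive hypothesis from its infinitely many rivals, so whatever rules the rivals out has to come from the learner, not from the data.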
Are there any wh-question questions before we put that aside and go on to something else? OK. We have spent a lot of time talking about the projection principle-- this idea that if you have a head, and it selects something, that thing needs to be its sister. Then, I've incautiously put the verb "put" in this example where, I don't know, it needs two sisters, or we need to say something else. Anyway, there are strict locality restrictions on the relationship between the head and the thing that it selects. So we have lexical entries for verbs like "devour," and "put," and "faint," that say things like "'devour' needs a sister that's a noun phrase"-- and "'put' needs to combine with a noun phrase and also with a prepositional phrase," and "'faint' is intransitive." It had better not combine with anything at all. Yeah, those are all things that we've been saying. And what we've been saying is, if you have a head that doesn't select for a sister, well, then, it doesn't have a sister-- the end. So "he fainted," "faint," it's an intransitive verb. Doesn't need a sister, doesn't get one, and that's the end, OK? Unless you add an adjunct, but it doesn't get an argument. But there's nothing comparable in subject position. So when we talk about verbs being transitive or intransitive, we're talking about objects. So people have been talking about transitive and intransitive verbs, literally, for centuries. It's an old observation that there are verbs that need objects and verbs that don't need objects. But there's nothing comparable in subject position. That is, people don't talk about verbs that need subjects and verbs that don't need subjects. Verbs just always have subjects. And sometimes, the subjects are a little sketchy. I mean, so we say things like "It rained," and it's not too clear what "it" is supposed to be. It can't be anything else. You don't get to say, "the sky rained," or "God rained," or whatever. It sort of needs to be "it."
And there are other kinds of examples like that, too. So "It seems that John has died," if I say that to you, you're not going to say, "What seems that John has died? What is 'it'? What do you mean by 'it' there?" I don't mean anything at all. "It" is just there-- it's as though it's there so that the sentence can have a subject, even though the subject doesn't contribute anything to the meaning of the sentence. So yeah, if you compare-- here's a short, kind of sad story. So "It squeezed John, and it seems that John has died." Is anybody here named John? Good! [LAUGHTER] So I should probably have warned you about this early on. Example sentences in linguistics, I mean, they vary. Often, you need a name, and John is one of the classics. Mary is another one. So the first time I ever taught intro to linguistics, there was a John and a Mary in the class. It was very confining. So since there's no John, I can just let these example sentences rip. "It squeezed John. It seems that John has died." These two "it"s seem to have a different status. So with the first one, if I say, "It squeezed John," if I say that out of the blue, you're entitled to wonder, "What squeezed John?" That's a reasonable question. So it's some machine, or a snake, or something. Something squeezed John. But if I say, "It seems that John has died," even if you don't care about John at all, your question is not going to be, "What seems that John has died?" That question doesn't make any sense. So these two "it"s seem to have a different status. There's a classic way of handling this kind of fact, which is to offer what's called the Extended Projection Principle, which-- yeah, so the Extended Projection Principle. So you know the Projection Principle, which says that heads, when they select, they need to select things that are their sisters, or they need the things they select to be close to them. The Extended Projection Principle says, there needs to be something here.
So TP needs to have a specifier. You can't have a sentence that doesn't have a subject in it. Calling it the Extended Projection Principle, this was Noam Chomsky's idea. And so I feel secure in telling you that it's an obvious hack. The Extended Projection Principle has nothing to do with the Projection Principle. He really just called it that, I think, in order to paint a big red target on it. He's like, why the heck does it work this way, that there has to be something in the specifier of TP? Linguists, go and try to figure out what the heck is going on there. So the Extended Projection Principle says the specifier of TP-- this is a word I've used before, and I hope I've defined it, but if I haven't, let me define it for you-- the specifier of TP, there has to be something in the specifier of TP. What's the specifier of TP? It's a daughter of TP-- so a daughter of the maximal projection-- that doesn't have the label T-- so it's that thing that the red arrow is pointing to-- and it's not selected, or at least it doesn't have to be selected, by anything. So here's another place where there has to be something. If you have a transitive verb, the transitive verb has to have a sister, an object. If you have a verb that selects for something else, like "depends" selects for a prepositional phrase headed by "on," then it has to have that prepositional phrase. Here's another position. It doesn't seem to have anything to do with selection. It doesn't matter what the verb is. There just has to be something in subject position. So it's called the Extended Projection Principle. Calling it a principle is dignifying it too much. And what we're seeing is that in sentences where there isn't anything else to satisfy the Extended Projection Principle with, then you insert "it." You can insert "it." That's what you're doing in "It seems that John has died." That is, you have this subject position. There needs to be something there, and so you put this meaningless "it" there.
This is why you can't ask questions like, "What seems that John has died?" Because there's nothing meaningful there. That meaningless version of "it" is called an expletive. There are other names for it, but that's what we usually call it. So it doesn't mean anything. It's just there to satisfy the EPP, the Extended Projection Principle. I think I may say this on a later slide. Yeah, we've just gone through saying languages vary in various ways. This is a place where languages vary. English has the Extended Projection Principle. There are other languages. French has the Extended Projection Principle. There are some others. But there are many languages-- in fact, maybe most languages-- that don't have this. So there are languages that are perfectly happy to start the sentence with the verb. So if you're working on a language, yeah, if you speak Spanish, then you're thinking to yourself, wait, it's fine to say things like, "Seems that John has died" in lots of languages of the world. So yeah, here's another point of cross-linguistic variation. There are languages like English that have this interesting restriction. OK, now here's another-- I'm telling you about the Extended Projection Principle because I want to introduce you to another kind of movement. So we've talked about movement, and I want to show you another kind. Here's another kind. We can say things like "The snake squeezed John," or "I put the kumquats in a bowl." And then, so let's concentrate on "The snake squeezed John." "Squeeze" looks like it's a transitive verb, or at least it has a life as a transitive verb. "John" is being selected by "squeeze." It's the thing that's getting squeezed. Or similarly, in "I put the kumquats in the bowl," "put" is a verb that selects for an object. It also selects for a prepositional phrase. That's what we've been saying. But then, there are these other forms, these passives-- "John was squeezed," and "The kumquats were put in a bowl."
And here, we're in a similar situation to the situation we were in when I first showed you "What did Mary write?" where I was trying to say "What did Mary put in the bowl?" So with "What did Mary put in the bowl?" I was saying, yeah, "put," we think "put" has to combine with a noun phrase. It's a transitive verb. You can't say, "Mary put in the bowl." That's not a sentence. Why is it OK to have "put" with no noun phrase after it here? Well, it's because "what" started out here and moved over to there. For "the kumquats were put in a bowl," we're going to say something similar. "Put" is selecting for an object. In this case, it's the kumquats, and the kumquats are moving out of that position into another position. What's the position they're moving to? Well, they're not wh-moving. They're not wh-words. This isn't a question. This is another kind of movement. So the same reasoning that prompted us to posit wh-movement in order to save our beliefs about how selection works leads us to suspect that there's movement going on here as well. So we'll start off with "Was squeezed John" or "Were put kumquats in the bowl." And then, the EPP is going to say, oh, there's no subject. So what happens with the passive is that you get rid of the subject. "Was squeezed John," you've gotten rid of the snake that was squeezing John. Or "Were put in the bowl," "Were put the kumquats in the bowl," you've gotten rid of Mary who was putting the kumquats in the bowl. And then, the EPP says, oh, no, wait, we need something in subject position. And so we move-- in this case "John," or in the other case, "the kumquats"-- into subject position. So there can be a subject. So the EPP can be satisfied. It's a way of talking about these kinds of examples. So this is a new kind of movement driven by the EPP. It's called-- sometimes called NP-movement. It's been called other things, but I'll call it NP-movement. It always seems to be movement of NPs.
So you take an NP like "John," and you move it, in this case, out of the position as the sister of "squeeze," where it was selected, into subject position so that the sentence can have a subject, so that TP can have a specifier. It seems to be what's happening. Let me give you some reasons to take that analysis seriously. But first, is the analysis clear? That's what we think is happening. Oh, sorry. First, let me show you another kind of NP-movement. So those first ones were passives. Here's another one. So in "It seems that John is sick," there's an "it" at the beginning of the sentence which is the expletive kind of "it." So it's the kind of "it" that doesn't mean anything. I think it was one of the first kinds of expletives that I showed you. Can't say things like, "What seems that John is sick?" So that "q" doesn't mean anything. It's just there so that TP can have a specifier. "John" is starting off as the subject of "be sick." And in the first sentence, "John" is staying there in subject position for "be sick." But in the second sentence, "John" appears to be moving. So it's an analysis, and I'm going to give you some reasons to take this analysis seriously in a second. We're going to say, yeah, "John" is still the subject of "be sick" in the second example. But then, he moves to satisfy the EPP in the higher clause. So the higher verb needs to have a subject, and here, for some reason, we've elected not to merge an expletive. And so instead, we're moving "John" so that "John" can satisfy the EPP upstairs. Now, let me give you some reasons to take this analysis seriously. So here is an argument for the existence of NP-movement, and it involves idioms. So let's talk for a second about idioms. Idioms are always fun to talk about. Idioms are combinations of words that don't have to mean what they appear to literally mean. So "kick the bucket" and "buy the farm" are both idioms that mean "die." 
So if you say "John bought the farm"-- continuing to pick on John, since there are no Johns here-- if you say "John bought the farm," that could mean John purchased a farm. It has its literal meaning. But it can also mean, "John died." Or, "John kicked the bucket" can literally mean, you know, John kicked a bucket. Yeah, but it can also mean that John died. Do people have these idioms? Are these things that you can say? AUDIENCE: I've never heard "buy the farm." NORVIN RICHARDS: You've never heard "buy the farm"? Has anybody heard "buy the farm"? OK, all right. So I'm not just making this up. That's good to know. I just have to check for these things. There are many, many idioms that mean "die." "Kick the bucket" and "buy the farm" are two of them. "Spill the beans," just to get things slightly less violent, spill the beans" means to reveal a secret. "Yawn in Technicolor," is that an idiom you guys know? "Yawn in Technicolor?" No? I got this one from a book of UCLA slang. There was a period there-- I hope they still do this. I'm not sure they do. The UCLA linguistics department had an undergrad class where the undergrads would compile a dictionary of undergrad slang, and they sold the dictionary. And you could buy the dictionary in the university bookstore. I bought a copy. There were many, many-- the UCLA Slang Dictionary is an interesting document. There are lots and lots of idioms that have to do with being drunk or high. All the idioms, the MIT idioms that I have to do with having too much work to do. [LAUGHTER] So I don't know if that's true, but I get that feeling. So yeah, the MIT slang dictionary, if there were an undergrad class like that here, I think it would be a less entertaining read. But I could be wrong. Anyway, that's where I learned this one. I once talked about this idiom in front of some Australians, and they were all like, oh, yeah, that's an Australian idiom. 
So it's possible that that's where it came to UCLA, that there were Australian students who were talking about that. Anyway, "yawn in Technicolor"-- impress your friends. Add this to your vocabulary, if it ever comes up. There are many, many idioms. So yeah, people are clear on what idioms are? So these are strings of words or structures that have literal meanings, I guess-- "yawn in Technicolor," maybe not so much-- but also have these non-literal meanings. And the non-literal meanings are quite-- what? Fragile, I guess. So "kick the bucket" means die. It also has a literal meaning where you punt a pail. But you can't replace "kick" or "bucket" with a synonym and still mean "die." So if he "punts the bucket," that doesn't mean he died. It just means he literally kicked the bucket, right? Or if he "kicks a pail," if he "kicked the pail," that doesn't mean he died. So it has to be these words to have it have its idiomatic meaning. Idioms are fun. If you get a chance, you can ask your language consultant whether they can think of any idioms that you can think about. Now, here's the thing. There are a zillion idioms like these where the idiom consists of a verb and some phrase that the verb combines with. So a verb and its object, like in the first three examples, or a verb and a prepositional phrase, like in the last example-- lots and lots of idioms like that. So to say that that's the idiom is to say that if you have this idiom in a sentence-- so if you say something like, "John kicked the bucket," the idiomatic part of that sentence is "kick the bucket," right? "John" is not part of the idiom. Anybody can kick the bucket. In fact, all of us will someday. [LAUGHTER] Sorry, not to be depressing or anything. So the subject can be anything at all. It's the verb and the object that make the whole thing an idiom, yeah? And then there's the non-idiomatic part that you add to the idiom to make a sentence. Does that make sense?
So idioms are typically parts of sentences that combine with other, non-idiomatic stuff to make complete sentences. But here's the thing-- there are restrictions on idioms. There are no idioms that consist of a subject and a verb where the object can be anything. So I'm just giving this as an example. It's always hard to exemplify this because, of course, most strings of words are not idioms. But the last slide showed you, there are a zillion idioms where the verb and the object are an idiom-- "kick the bucket," "buy the farm"-- but there are no idioms where it's the subject and the verb. So here's an example of a non-idiom. There isn't an idiom, "the armadillo bit (blank)." So you can imagine an idiom where if I said, "The armadillo bit John yesterday," it meant "John was very busy yesterday" or something. Or, "The armadillo bit me this morning" means I have a headache, or something. You can imagine it meaning something like that. But that is not an idiom. And more importantly, there are no idioms like that. Yes? AUDIENCE: What about like "The cat got your tongue"? NORVIN RICHARDS: Well, that's a nice example. Notice, though, that the object is part of the idiom, right? So actually, "Cat got (blank)'s tongue." Actually, can the possessor of the tongue be anyone at all? Can I say, "The cat got John's tongue?" Meaning John had a hard time thinking of what to say? I think the easiest way for me to use this idiom is to ask someone, "Cat got your tongue?" If I have a hard time coming up with something to say, can I say "The cat got my tongue?" So there may or may not be a blank here. Maybe there should be "or" here. But anyway, the subject is indeed part of the idiom, but so is the object. So this is a nice-- you're making me refine what I just said, which is important. It's not that idioms can't contain the subject. They can. This is a nice example. But they also have to contain the object. Yes? AUDIENCE: What if [INAUDIBLE] something like "his tongue"?
NORVIN RICHARDS: The cat got his tongue-- can you say that? AUDIENCE: [INAUDIBLE]. AUDIENCE: [INAUDIBLE]. AUDIENCE: Yeah. NORVIN RICHARDS: Yeah, my son-- AUDIENCE: That's when he's like, what's wrong with this? NORVIN RICHARDS: Cat got his tongue. My son, who's 11, is shy, and he's having a hard time saying anything in front of people he doesn't know. And I say, "Oh, the cat got his tongue." AUDIENCE: [INAUDIBLE]. NORVIN RICHARDS: Maybe, yeah, yeah, yeah. And this is why I waffled about whether there was a blank here. The fact that there can be a blank here is interesting. But the point is-- there are several points to make. But one is, look, "tongue" is also part of this idiom. So the object is part of the idiom. You're absolutely right that you might have some options about what to put here. Raquel? AUDIENCE: I feel like if you just have a, quote, "idiom," it's just like a verb. NORVIN RICHARDS: Yep. AUDIENCE: And then, if you just say that that's an alternative meaning, or like a slang meaning of that verb, like "flame," like, get really angry-- NORVIN RICHARDS: Flame. AUDIENCE: --or something like that. Yeah, people don't call it an idiom. They just say, oh, yeah, "flame" is just [INAUDIBLE]. NORVIN RICHARDS: That's a nice example. So maybe this is a way to think about this. If you're thinking about a tree here-- we'll put a tree here. So what we're seeing is there are idioms-- I started by saying there are idioms like this where this whole thing is the idiom-- the verb and a noun phrase after it like, "buy the farm." Raquel is raising the possibility that there could be idioms that are just this size, although, as you say, that's not usually what you call them. So maybe "flame," where the subject and the object are not part of that idiom. So I can flame you, you can flame me, anyone can flame anyone. Those are not part of the idiom. The idiom is just the verb, "flame." Yeah, so yeah, we could call that an idiom or not.
And then, as was pointed out earlier with "cat got your tongue," there are bigger idioms where the subject and the verb and the object are all part of the idiom. And then, there's questions about how do you get a possessor into here? I'm trying to think about that. So a way to talk about this, then, is to say, yeah, idioms-- but the point that I'm making-- attempting to make-- is there aren't any idioms that are just this. I wish I had colored chalk or something. There aren't any idioms that are just the subject and the verb, and the object can be anything at all. There's no "The armadillo bit John." There's no "The kumquats have fooled Mary," right? Where "The kumquats have fooled Mary" means Mary is daydreaming. There aren't idioms like that. Yeah? AUDIENCE: So is it basically like it has to be [INAUDIBLE]? NORVIN RICHARDS: It's looking like it, right? Yeah, something like that. Although, how you're going to get "his" or "your" here in "the cat got your tongue," that's an interesting question, because that's presumably part of this noun phrase. But sweeping that potential problem under the rug, yeah, it's just, [? those ?] idioms have to be constituents, yeah. Yeah? OK? I'm pausing here because here I am boldly asserting there is no-- of course, there is no idiom, "the armadillo bit." But if you pick any random verb and noun, it's probably not an idiom. So there also isn't an idiom, "bit the armadillo," right? That could have been an idiom, but it isn't. So "John bit the armadillo last night"-- that doesn't mean "John worked hard last night." Yeah, it could have, but it doesn't. So claims of this kind are always a little weird. You should all go home and try to think, annoy your friends, and your relatives, and your roommates trying to come up with idioms that have this shape. But there don't seem to be any. Yeah? OK, good. I'm going to move quickly to the next slide before somebody comes up with one. [LAUGHTER] This isn't because the subject can't be part of the idiom.
This has already come up. So "The cat got your tongue," "The cat is out of the bag," "The shit will hit the fan," there are lots of idioms. So "The shit will hit the fan" has a literal meaning. [LAUGHTER] But it also has a figurative meaning, like the situation will get dramatically worse quickly, or something like that. That's what it means. "The cat is out of the bag," yeah, there could be a literal cat in a literal bag, or it could mean the secret is out. That's the other thing it can mean. There are many idioms about cats for some reason. So yeah, there are idioms like "John will buy the farm." There are idioms like "The shit will hit the fan." Yeah, that's what we've seen. And there are no idioms like "The armadillo will bite John." Good. Maybe it's as though idioms must be constituents. Yeah, nicely, nicely pointed out. So OK, good, now we know something about idioms. So "The shit hit the fan," it's a fine idiom. "The shit seemed to hit the fan," "The shit seemed to be likely to hit the fan," these are all fine, and they all have idiomatic readings. But the things that I have circled are surely not constituents, right? So "The shit hit the fan" is a constituent, but "The shit seemed to hit the fan"-- "seem to" is not part of the idiom. These sentences are OK, and they have their idiomatic meanings. "Everything was fine all day until suddenly, at 12:35, the shit seemed to hit the fan"-- fine. Sad, of course, but grammatical. But "seem to" is not part of the idiom. It's like John in "John bought the farm." The idiom is, "The shit hit the fan." Or "The shit hit the fan" in "The shit seemed to be likely to hit the fan." So "I walked into the office, and everything seemed very precarious. Tempers were high. The shit seemed to be likely to hit the fan at any minute"-- yeah, perfectly fine. Has the idiomatic reading. But these things are quite far apart from each other. But if we believe in NP-movement, then we have a story. The story goes like this. 
How do you say, "The shit seemed to hit the fan"? Well, yeah, "the shit" and "hit the fan" are far apart from each other when you're done, but you start off with "seemed, the shit to hit the fan," and then "the shit" moved into the specifier of TP in the higher clause. That subject moves up to become the subject of the higher thing because of the EPP. The EPP wants there to be a subject up there. Another option would be to put an expletive there and say, "It seemed that the shit had hit the fan," or whatever-- to put an "it" there. But if you're not going to do that, you need to move the subject of the lower clause up into the higher clause. The consequence of that is that by the time you're done, "The shit hit the fan" is no longer a constituent. But it was once, and apparently, that's what matters. It needs to start as a constituent. Maybe one way to think about it would be, the idiom is a really big and complicated lexical item. If you're going to merge it, you need to merge it all at once. It's one way to think about it, anyway. Yeah? AUDIENCE: Yeah, question. NORVIN RICHARDS: Yeah. AUDIENCE: So first of all, what would happen if you wanted to say, "The shit was going to hit the fan?" NORVIN RICHARDS: Oh, good question. So "The shit hit the fan." "The shit--" very nice point. "The shit will hit the fan." "The shit might hit the fan." "The shit is going to hit the fan." Yeah, there are lots of things you can put in here. There is-- that's a very nice point. There is a classic response to this point, which is to say, yeah, what you've discovered is there's even more NP-movement than we thought. So if I-- let me get a fresh board down here. How are we doing for time? Yeah, this would be a good place to stop, probably. The way I have been drawing trees for you, I've been drawing trees like, "John will eat the kumquats." I've been drawing trees like this, yeah? Does this look familiar? Does anybody have any questions about this tree?
This should be painless at this point. If it's painful, then you should stop me. Is this OK? So we've got a verb phrase that consists of a verb and an object, "eat the kumquats." "Eat" is selecting for the object. And then we've said that words like "will" are instances of T, tense, and that T heads the phrase whose specifier the subject sits in-- actually, we have a name now for the fact that the subject sits there. That's the Extended Projection Principle. AUDIENCE: Yeah. NORVIN RICHARDS: And we've said there is some force that demands that there be something here. That's part of why John was here. Your point, which is a very well-taken one, is that if I say, "The shit will hit the fan--" remind me to erase this before we leave-- I've been going quickly partly in the hope that nobody would notice this, but yes, problem. This and this seem to be the constituent, the idiom, but "will" is not part of the idiom. And there's a classic way of dealing with this problem. This is one of the classic arguments for something that is now widely believed, which is that the subject, although it is in the specifier of TP, it doesn't actually start there. I've been drawing trees where it starts there, but it actually starts lower. It starts inside the verb phrase, and it moves up to here. AUDIENCE: [INAUDIBLE]? NORVIN RICHARDS: Yeah, so that the real derivation for this-- this is what I've been concealing from you-- but the real derivation for this starts with the subject inside the verb phrase-- I would say it's a specifier of the verb phrase-- and it is raising to the specifier of TP always. So I've been talking so far as though the specifier of TP-- we've talked now about some cases where something moves into the specifier of TP. But from the earlier kinds of trees I was drawing for you, I was making it sound like the specifier of TP, sometimes, something is just merged there. It doesn't move there.
For exactly the reason that you've just pointed out, people think that that's wrong-- that really, what's happening is that the subject always starts lower than TP and raises into the specifier of TP. And in English, it has to raise into the specifier of TP because of the EPP. And this argument from the behavior of idioms, this is one of the classic arguments for that conclusion, yeah. This is something that was discovered in the late '80s, when people first started saying that. It's called the VP-internal subject hypothesis. And by now, it's very widely assumed to be true. Yeah, I was planning to conceal this from you because this is 24.900, but I should know better than to try to conceal things from you guys. Yeah, does that answer your question? AUDIENCE: Yeah. NORVIN RICHARDS: Cool. AUDIENCE: Is this an example about knowledge theory, where it's like we wanted the idiom to be a constituent, but it matters more to satisfy the EPP? NORVIN RICHARDS: No. I see what you mean. The way people standardly think about this, anyway, it's something more like-- you're raising an interesting question. It would be interesting to think of ways of distinguishing these theories from each other. The way people think about this, it's something more like-- it's sort of-- mm. Remember the Projection Principle? So in the Projection Principle, it says the verb absolutely, absolutely has to have a sister if it's transitive, let's say. It has to have a noun phrase as its sister. And then, there are these other forces, things like wh-movement that say, oh, but wait, I want this thing to be over here instead. And so we say, yeah, it starts here, and then it moves. So it's as though there are these mutually incompatible requirements which are met by having a derivation, where something starts in one position and then moves to another position. And the idea is that the Projection Principle is satisfied because, well, you satisfied it first, at the beginning, and then later you did other things.
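The condition at work here-- that the idiom must be a constituent at some stage of the derivation, even if movement later breaks it up-- can be made concrete with a toy representation. This is a sketch under simplifying assumptions (binary-branching trees as nested pairs, with tense and "to" omitted), not the lecture's actual trees:

```python
# A word string counts as a constituent of a tree just in case it is the
# yield (leaf string) of some subtree. Trees are nested pairs; leaves are words.

def subtree_yields(tree):
    """Collect the leaf string of every subtree in the tree."""
    def leaves(t):
        return (t,) if isinstance(t, str) else leaves(t[0]) + leaves(t[1])
    found = {leaves(tree)}
    if not isinstance(tree, str):
        found |= subtree_yields(tree[0]) | subtree_yields(tree[1])
    return found

def is_constituent(words, tree):
    return tuple(words) in subtree_yields(tree)

idiom = "the shit hit the fan".split()

# Before NP-movement: [seemed [[the shit] [hit [the fan]]]]
deep = ("seemed", (("the", "shit"), ("hit", ("the", "fan"))))
# After raising the subject: [[the shit] [seemed [hit [the fan]]]]
surface = (("the", "shit"), ("seemed", ("hit", ("the", "fan"))))

print(is_constituent(idiom, deep))     # True: a constituent at the start
print(is_constituent(idiom, surface))  # False: no longer one after movement
```

The check comes out exactly as the lecture describes: the idiom is a constituent before movement and not after, and apparently the "before" is what matters.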
We're going to talk more-- I think I promised that we would only do another day or two of syntax, so I'm running out of room in which to promise that we will talk about things. But I think we'll have a chance to talk more about the nature of the derivation and the order in which you do things, like the order in which you have to satisfy things. But the standard way people talk about this kind of thing is more like the rule-ordering kinds of things we talked about. It's like there is an early stage at which you must obey the Projection Principle, or at which you must obey this condition on idioms, and then there are other things that come along and mess everything up subsequently. That's a way people talk about this. Any questions about idioms, about constituents? Anything else? NP-movement? Awesome. So let me make sure-- yes, this is an excellent place to stop. So let me ask again, are there any other questions? Or shall we just stop? Cool, good. Now you know slightly more than I was planning to teach you about the behavior of subjects. And we'll pick it up here next time.
MIT 24.900 Introduction to Linguistics, Spring 2022 -- Lecture 2: Morphology, Part 1

[SQUEAKING] [RUSTLING] [CLICKING] NORVIN W. RICHARDS: So OK, problem set 0, which some of you have already handed in-- it's not due. Don't panic if you haven't handed it in yet. It's partly a getting to know you problem set. We're asking you why you're here and what you're hoping to get out of the class so we can try to tailor the class to your interests. But one of the other things we ask you to do is to get started on the process of finding somebody to do field work with, so somebody who speaks a language that you don't speak and have never studied, that you're going to spend the semester harassing. So you'll be meeting with this person every so often to ask them questions about how to do things in their language. And so you'll have assignments, problem sets, in which we'll ask you to find out this, or that, or the other thing about your language. So what I wanted to start with today was to talk a little bit about how to do that because it's not obvious. And you'll get better at it as the semester goes along. That's one of the reasons we're asking you to do it. So in order to talk about how to do this, I'm going to make up a language, sort of a fictional language. I thought about how to demonstrate to you how to do this kind of work. And I thought about using an actual language. But of course, at this point, I don't know what languages you folks are going to work on. And so there's a danger that I might accidentally use a language that somebody is using. So instead, I'm going to work on Martian. So pretend that I have found a Martian to interview. And we're doing Martian fieldwork. The first thing I like to do if I'm working with an unfamiliar language is to ask the person to-- or the alien, if it's an alien-- to give me a name. That's partly to make it simpler for you. Pretty soon, you're going to move on to asking about sentences.
And if the sentences have a name in them, then at least you know that word in the sentence. That wipes out one piece of the puzzle for you. So I asked my Martian for a popular Martian name, and he gave me the name X!oo. Now, we haven't yet talked about how to transcribe strange sounds. So when you are writing down things in your language, we'll talk about this on the problem set, but there are at least two things you can do. If your language is standardly written in the Roman alphabet, then you should feel free to just write things in the alphabet of the language. So if you're working with a language that uses the Roman alphabet, just go ahead and use it. If you're working with a language that doesn't use the Roman alphabet, then you can make something up. And you can ask the person you're working with if there's a standard way to write their language in the Roman alphabet. That's sometimes something that happens. But if not-- so in this example, the Martians don't use the Roman alphabet. I asked the guy for a name. He said, "X!oo." I was, like, OK, I'll use X, exclamation point for that clicky noise at the beginning. And we'll go on. Later, as I said, we'll be talking about how to think about how to write strange sounds, unfamiliar sounds, sounds that aren't contained in the languages that you're familiar with, maybe. So we'll be talking about the International Phonetic Alphabet, which is a system for transcribing speech sounds. But for now, since we haven't done that yet, we'll just make stuff up. So there's a Martian name, X!oo. And then, you get started on sentences. And you might want to start with simple sentences, like "X!oo is a linguist." I got that one for Martian, "X!oo kuulduud bii." And that's a sentence. I asked my Martian, how many words are in that sentence? And they told me there are three. And you might wonder-- well, I started with a name partly to make my life simpler. I know what the first word means, but what do the second and third words mean?
And there are various ways to do that as you're working with the person that you're working with. You could just straight up ask them things like, "Which word in here is the word for "linguist?" You could just say, "What's the word for linguist?" Or you could ask for another sentence that's minimally different from this one, just in, say, the word for linguist. So ask, "How do you say X!oo is a physicist?" So you get that. You discover that this sentence consists of three words, "X!oo," and then the word for linguist, which is "kuulduud," and the word for is, which is "bii." What I've done here is to present this piece of data, this sentence with the three words that have these three meanings, in the way that we're going to ask you to present the data. And we'll again, on the first problem set, will reiterate this. But this is the standard way you represent data in linguistics. You'll have a sentence in the language of interest. And under that sentence, under each word, you'll have a translation for that word. So that's called the gloss. So the gloss for "kuulduud" here is "linguist." And then on the third line, you have a translation of the entire sentence into English. There's more to say, but for now that's what you need to know. Does that make sense? One of the things you do by doing it this way is it's an easy way to get across the fact that the word order of Martian is not the same as the word order of English, since the Martian sentence ends with the Martian word for "is." We're learning something about Martian word order. Moving on, you get some more Martian sentences. "X!oo amsterdam digdug," so "X!oo dug a canal," "X!oo is digging a canal," and so on. So just depending on what we've asked you to find out, that's going to guide what particular sentences you might ask for. Probably what we'll do is ask you to get some basic sentences in the language. And you'll want to have some sentences that have subjects and objects, like "X!oo dug a canal."
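The three-line interlinear format just described can be mocked up in a few lines of code. This is only an illustration of the layout (the function name is mine, not a standard tool): each word gets padded so it lines up with its gloss, and the free translation goes on the third line.

```python
def interlinear(words, glosses, translation):
    """Format a three-line interlinear gloss: the sentence in the
    language of interest, a word-by-word gloss, and a free translation."""
    # Pad each column to the wider of the word and its gloss.
    widths = [max(len(w), len(g)) for w, g in zip(words, glosses)]
    line1 = "  ".join(w.ljust(n) for w, n in zip(words, widths)).rstrip()
    line2 = "  ".join(g.ljust(n) for g, n in zip(glosses, widths)).rstrip()
    return "\n".join([line1, line2, f"'{translation}'"])

print(interlinear(["X!oo", "kuulduud", "bii"],
                  ["X!oo", "linguist", "is"],
                  "X!oo is a linguist."))
```

Printing the Martian example this way makes the word-order point visible at a glance: the gloss line ends in "is," right under "bii."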
One of the first things we'll ask you to do is to find out how to negate sentences, so how to say that something is not true. So in this particular case, what I did was to get three Martian sentences, and then to get minimally different sentences in which we negated things. So we're not going to say "X!oo is a linguist," we'll say "X!oo isn't a linguist." And that changes it from "X!oo kuulduud bii" to "X!oo kuulduud noowee." Or the way you do "X!oo didn't dig a canal" is "X!oo amsterdam digwedug." So we've changed the verb "digdug," which means "dug," to "digwedug." That makes it negative. So this is a way of gathering, bit by bit, bits of Martian data. So in particular here, we're interested in Martian negation. And we're finding things out about how negation is expressed. So a hypothesis I came up with on the basis of these facts: it looks as though the negative of "is" is just irregular. So "bii" is "is," but "noowee" is "is not." Those don't look related to each other at all. That just looks like an irregular correspondence, but that "digdug" changed to "digwedug," and "gudgid" changed to "gudwegid." And that makes me think, there's a "we" that goes in the middle of those words somewhere. So for "digdug," we got "digwedug," and for "gudgid," we got "gudwegid." And at this point, we might have a number of hypotheses about where "we" goes. I've put a couple of them up here. Sorry, I should have made you guys generate the hypotheses. Maybe we could think that "we" goes after the first syllable, so you start with "digdug" and you get "digwedug." Or maybe that it goes before the last syllable, so you start with "digdug," and you get "digwedug." That covers those two bits of data. Anybody have another hypothesis they want to float about where "we" goes? Yeah. STUDENT: I'm not actually sure-- did you just completely reverse the order? NORVIN W. RICHARDS: Yeah, to get from "digdug" to "gudgid," yeah, to get from the past into the present. Yes.
But I'm asking you to give me hypotheses about negation right now. But yes, you're right. That's a hypothesis about how tense works. The tense involves saying words backwards. Yeah. And we would want to find out whether that was true. It would be exciting if that were true because there are no human languages that work that way. But of course, this is Martian, so all bets are off, I guess. Yeah. STUDENT: In the exact middle of the word. NORVIN W. RICHARDS: Yeah, so count letters, and these are six-letter words, and so you put "we" between letters three and four. It could be that. Any theories people want to push? Yeah. STUDENT: It could also be like in between syllables. NORVIN W. RICHARDS: Yeah. STUDENT: What is it that's-- I would ask-- in that case, I would ask for-- I would try to find a six-letter word that has-- that looks like a 4-2. NORVIN W. RICHARDS: Yeah, I see what you mean. So I just said you're counting letters, but you're absolutely right-- we could be counting syllables. These are two-syllable words, and you put it between the first and the second syllable. Yeah, that's absolutely right. You guys have collaborated on an excellent hypothesis there. Yeah, another thing? Yeah. STUDENT: You're also using the same verb for both of these situations. [INAUDIBLE] NORVIN W. RICHARDS: Could be, yeah. So I took the flying leap of faith that yeah, there was something special about "bii." But maybe "dig" is something that's going to illustrate a general pattern, but you're absolutely right. Could be this is also irregular. Clearly what we need is more verbs. Yeah. STUDENT: Language [INAUDIBLE]. NORVIN W. RICHARDS: Oh, that's a nice thought. So yeah, maybe "we" goes after the stress or before the stress. Yeah, that's cool. My Martian accent is not so great. So we haven't been able to work on that. But yeah, nice. These are nice ideas. So what we need to do is find more words.
How, for example, are we going to distinguish the two hypotheses that I have on the board here-- so after the first syllable or before the last syllable? Yeah. STUDENT: I want to get a word with three syllables. NORVIN W. RICHARDS: Yeah, I want to get a longer word. So if we have a longer word, we'll be able to find out what that is. This was your point, too. If we have a word where one syllable is really long, another syllable is really short, maybe we can learn something by doing that, so get some more words. Here's "X!oo is singing," "X!oo yodeleehihuu," and "X!oo destroyed a spacecraft," "X!oo roovaa munchmunchyum." So here are two verbs that are longer. And so we'll be able to tell, by negating these, which of the various hypotheses we've been floating are the most attractive ones. Turns out we get "yowedeleehihuu" and "munchwemunchyum." So you're right, I'm not saying it right. So what's the rule? Where does "we" go? STUDENT: After the first consonant. NORVIN W. RICHARDS: After the first consonant, looks like. Or if stress is on the first syllable, it could still be after the stress. We have to study that. That's absolutely right. So now, we have a working hypothesis. We could go get more verbs and test the hypothesis against-- if we could find a verb that had-- maybe all of these have initial stress, and if we work harder, we'll find a verb that has final stress. And we'll learn something about whether stress matters. Yeah, basically, I'm asking you to do science. So you're going to gather data, generate a hypothesis about the data, realize that there are multiple hypotheses that are compatible with the data, try to figure out what kind of data you'd like to have that allow you to test your hypothesis further, go gather those data. That's the idea. Any questions about that so far? That's the goal. A couple of general things-- maybe I should start with the last one. You should be very nice to this person, the person that you're working with.
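The elimination step just walked through can be written down explicitly. A sketch in Python (the function names and the syllable boundaries are my own guesses; as the discussion notes, the syllabification is itself a hypothesis about the language):

```python
# Two of the board hypotheses about where the negative infix "we" goes,
# written as functions over hand-syllabified verbs.

def after_first_syllable(syllables):
    """Hypothesis A: the infix follows the first syllable."""
    return syllables[0] + "we" + "".join(syllables[1:])

def before_last_syllable(syllables):
    """Hypothesis B: the infix precedes the last syllable."""
    return "".join(syllables[:-1]) + "we" + syllables[-1]

# Two-syllable verbs cannot tell the hypotheses apart:
assert after_first_syllable(["dig", "dug"]) \
    == before_last_syllable(["dig", "dug"]) == "digwedug"

# A longer verb does -- only one hypothesis derives the attested form:
yodel = ["yo", "de", "lee", "hi", "huu"]
print(after_first_syllable(yodel))   # yowedeleehihuu (attested)
print(before_last_syllable(yodel))   # yodeleehiwehuu (not attested)
```

Running the longer verb through both rules is exactly the move made above: the two-syllable data underdetermine the rule, and "yodeleehihuu" decides between the candidates.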
We're asking them to do some work for you. But the Linguistics Department isn't offering to pay them, for example. We're just hoping that you will be nice to them. So we encourage you to treat them the way you would treat anybody who is helping you a lot, which hopefully, is nicely. Find out what their favorite food is. Bring it to them. Offer to be nice to them in whatever way they need niceness. It's maybe important to emphasize that, not just because these people aren't being paid, but also because this is something I have-- see, I teach a Field Methods class for graduate students. And this kind of work, where you're working with a human being, is pretty different from, well, a lot of the kinds of scientific work that you do, and even a lot of the kinds of scientific work that linguists do. Often, when we're gathering data about a language, we're doing it by reading a grammar, or by talking to a linguist, which is not quite the same as talking to a human being. Linguists-- we can take shortcuts. We know how to talk to each other in linguistic code and to get quickly to what it is that we want. If you're talking to a normal human being, it's especially important to be nice, to remind yourself to be nice. Because it's easy to find yourself irritated with this person when they disprove your hypotheses. So if you've got a hypothesis, and the person then gives you data that disprove your hypothesis-- it's something I often see people do in class. They try to talk the person out of it. They're, like, "No, wait. Are you sure?" So don't do that. Just gather the data. Don't get emotionally attached to your hypotheses. Be happy when you see your hypotheses die and get replaced by other hypotheses. You should try-- I guess I'm going through this list backwards-- you should try to be organized. And by that what I mean is this kind of work, it involves being fast on your feet to a certain extent.
So when I was doing that first bit of Martian data, it's like, OK, there's a "we" that indicates negation. And it's going in the middle of the verb somewhere. But what exactly do I mean by "middle of the verb?" And then we were generating various hypotheses about what that meant. And that makes it clear what you need to do next, which is to get more verbs. That's something that you discover in the middle of the session. You find out how negation works. Oh, it goes in the middle of the verb. Then what I need is more verbs so that I can define middle in the appropriate way. So you want to go into the session with a plan for what exactly you're going to try to find out and in what order. And then two things-- my plans often look like flowcharts, or choose your own adventure novels, or something like that. It's like I will ask this, and if they say yes, then I go over here. And if they say no, I go over here. If they say no to that, well, then we go over here. So you make yourself a set of ways of dealing with various possible things that they might say. And then the other thing you do is not get too attached to that, either. So they may say something you didn't imagine them saying. And you have to be ready to think on your feet, be ready to do whatever it is that they say. So when I say be organized, what I mean is, think ahead about what you will do under various circumstances. But then also, be quick thinking. And that's something you'll get better at as you do more and more of it. Don't assume that you're getting what you're asking for. What do I mean by that? I've done some field work on a language called Lardil, which is an Aboriginal language spoken on Mornington Island, which is a beautiful island off the northern coast of Australia. So if you look at the map of Australia, there's a big dent in the northern coast. That's the Gulf of Carpentaria, and Mornington Island is in the middle of that.
Lardil is spoken by Aborigines who have lived there for thousands of years. I was once walking with a Lardil speaker and talking with him. And we walked by a truck that was parked in front of a house. And I asked him, hey, how do you say, "The truck is in front of the house?" And he thought about that for a second, and he said something to me in Lardil which meant, "The truck is to the south of the house," because in Lardil, that's what you do. You talk about spatial relations entirely in terms of cardinal directions. It's not that Lardil doesn't have a word for it "in front of," but it would have been unnatural in Lardil to say, "The truck is in front of the house." So he didn't answer my question, in a sense. And in another sense, he did. He told me what a Lardil person would say to describe what I wanted them to describe. That may happen to you. So you may ask somebody how to say something, and they may tell you something which is not exactly what you meant. There sort of isn't any way to prepare for that. You just have to do the best you can. So find out everything about the parts of what they've asked. So as we started off, I said, "X!oo kuulduud bii." That means, "X!oo is a linguist." And then you do some work to find out what does each word mean. And as you do that work, you may be able to avoid some of these kinds of problems. So there's a recording-- oh, yeah. STUDENT: What if-- not that they don't necessarily have a word for something, but it's the English word that they just don't have another-- NORVIN W. RICHARDS: You should probably try to find something else, then. That's a very good point. So you should do things that are culturally appropriate. I like to start, often, by asking people what their favorite food is. And then, if they tell you, "My favorite food is apples," then all your sentences can be about apples. "I'm eating apples," or whatever. Yeah, that's a very good point. Yes. STUDENT: We have to verify [INAUDIBLE].... 
So for example, in your example, if you have a hypothesis that "we" goes after the first syllable, does it make sense to [INAUDIBLE]? NORVIN W. RICHARDS: Oh, it absolutely does. So let me just repeat your question. Does it make sense to ask the native speaker? I've got this hypothesis, now, about what's going on. It looks like "we" goes after the first syllable. It absolutely does make sense to ask and it's totally fine to ask. You shouldn't necessarily assume that they're right when they tell you. Because-- this came up when we were talking last week-- you know many things about your native language that you don't have to think about in order to speak it. And the consequence of that is that if somebody suddenly asks you, hey, what's the rule for this, you might not actually know. Because it's effortless for you. And so you may get a good response when you ask that question and you may not. That means that there are two kinds of possible mistakes that you could make when somebody tells you their theory about what's going on. One is to uncritically believe it because they're a native speaker. And the other would be to discard it. You see linguists do that, too. They're, like, "I'm the linguist here. I'm not going to listen to this person." That's also a mistake because of course, their input on this is valuable. It's just one valuable source of information, which you need to weigh against any other sources of information that you have. Does that make sense? It's a really good question. Sorry, I have one other anecdote about Lardil. I found a recording, one of the first recordings of Lardil, by an aircraft pilot. So he wasn't a linguist. He flew regular routes to Mornington Island, this really isolated place, and he got interested in the language. And he sat down a Lardil speaker, and he was going to record a word list. He'd gotten hold of a list of basic vocabulary.
And so he was going to try to get the person to record translations for all these basic English words. And his list clearly started with pronouns. So he was going to list a bunch of pronouns. So he started with "I," said, tell me the word for "I." And the guy gave him the word for what you see with, this kind of eye. Then he said the word for "thou." And there's this pause on the tape. You can hear the Lardil speaker going, "thou, what the heck?" And then he gives him the third person singular, the word for "he or she," which is actually a pretty good answer because in Lardil, you use the third person "he or she" under certain cultural circumstances, to indicate respect for the person that you're speaking to. You speak to them as though they're in the third person instead of the second. So you don't say "you" to them, you say "he" or "she." So he gave him a fancy word for "you," which is a pretty good translation for that whole action. But these are just examples of you may not get what you ask for, necessarily. You should be prepared for that. Now we're up to the second sentence on this slide. So start with simple culturally appropriate sentences. This is kind of what you were asking about. So by "culturally appropriate," I mean if you're talking to someone from a tropical climate, don't make all of your sentences about snow and ice. If you're talking to someone from one religion, don't make references to another religion in your questions. In fact, maybe try to avoid references to religion in your questions, period-- basic stuff like that. Like I say, I like to start by asking them their favorite food. And then we ask lots of questions about X!oo ate this or X!oo is cooking that and things like that. Sometimes, you make discoveries about what counts as culturally appropriate. I once taught a Field Methods class on a Semitic language from Ethiopia called Chaha. I started the class.
I was like, OK, I'm going to get a basic sentence with a subject, and an object, and a verb. How do you say, "The man cooked the meat." The guy said, "I cannot say that." I'm, like, OK, why? He said, "Men do not cook. And also, you do not cook meat," he said. "You're supposed to eat meat raw, unless there's something wrong with it." I said, "OK, 'The woman cooked the cabbage.'" He's, like, "Yes, we can say that." We started with that. Whoops, I thought I told you not to do that, computer. And then, going up to the first bullet point-- I should work on this slide a little more-- one of the things you want to do, and this may be hard, is to try to convince the person you're working with that you are really, really interested in how they actually talk. Depending on the language, a lot of people are taught by the educational systems that they grew up with that there is a correct way to speak their language. And sometimes, they're taught that their way of speaking it is not the correct one, that they're bad at their native language. That's distressingly common. I do a fair amount of work on Tagalog, which is the language from the Philippines. And Tagalog speakers often, when I'm first starting to work with them, they'll start by telling me that they're not very good at Tagalog, which is their native language. They grew up speaking Tagalog. But what they mean is they've been taught that real Tagalog doesn't have any borrowings from Spanish or English, that real Tagalog is the Tagalog that's spoken in isolated villages that they've never been to, where they speak the real, proper kind of Tagalog, which very few people speak. And the people that I'm working with don't speak that and they feel bad about that. So you have to make some attempt to convince a person like that that their version of the language is what you're really interested in. And maybe you can tell them, I'm doing science. I'm not here to judge you. 
I'm not trying to find out whether you speak correctly or not. I'm trying to find out how you speak. And so I promise not to send a note to your parents or to your teachers in high school reporting on what you tell me. That's not how this works. This is I'm trying to understand how people actually talk. That's what linguistics is about. It's not about the rules that you learn in high school, which sometimes describe what you actually do and sometimes don't. Does that make sense? These are all things you may want to do. All right, well, then, today-- where are we-- today, we're going to get started on morphology. That's our first topic. And so things that we will get to, depending on how far we get today-- oh, yes. STUDENT: Just as a question, is it appropriate to hypothesize about paradigms or just one element of [INAUDIBLE]? NORVIN W. RICHARDS: I'm sorry, can you say more about that? STUDENT: So if you're trying to create, for example, a chart of pronouns, is that a good thing to do or should you just focus on one word or what? NORVIN W. RICHARDS: Oh, I see. So in the problem set, we will ask you to find out something in particular. So we'll ask you-- we'll have something that we want you to find out. So that would be the thing to concentrate on. In general, if you're working on a language, there kind of aren't guidelines about what it's OK to work on and what it's not. So I'd encourage you, if you're working with this person and you discover that you've answered the question we've asked you to answer quickly in the session, then one of the things you can do-- part of being prepared-- is to do some thinking about other things you'd like to know. And that could be-- one way is to sit down and get a paradigm of pronouns, or paradigm of verbs in the present tense, or whatever. And of course, depending on the language you're working on, something like getting a paradigm of verbs in the present tense could be trivial or it could be impossible. 
So you'll just have to find out what kind of language you're working on. Yes. STUDENT: So kind of two questions. One, when you say that you want how people really speak, if they're incorporating a slang, is that something that you want to just pick up or do you want to kind of tell them that [INAUDIBLE]?? NORVIN W. RICHARDS: Oh. So basically, you want to write down everything that they tell you. So if somebody says a sentence, and then they say, oh, but that's actually slang. Here's the way you'd say it without slang. You want to write down both of those sentences. It's all data. You're going to try to find everything else out. It could be worth it to try to find out, when you say this as slang, what do you mean exactly? Because that could mean this is only used by teenagers, and everybody is expected to grow out of this, and my mother would wash my mouth out with soap if she heard me speaking this way. It could be that. Or it could be-- so in the Philippines, I have people tell me, oh, this is slang. And what they mean is it's not the Tagalog word for it. It's the Spanish word for it. And everyone uses the Spanish word for it all the time. It's just that we feel guilty about it. And so you'll have to find out which of those situations you're in. It would be surprising if you could find it out on day one. So for day one, probably what you want to do is write down both versions of the sentence, and maybe try to ask about what they mean exactly when they say it's slang. Does that make sense? STUDENT: Yeah. NORVIN W. RICHARDS: So you may indeed get someone who tells you, here's how you say this. Oh, but it's a bad way. And you have to try to find out what they mean. But if they just mean my high school English teacher would be displeased, my high school whatever teacher would be displeased if they heard me say that, then that's still data. We're not passing judgment on things like this. Yeah. And then the other question? 
STUDENT: I just have another question about when you said, like men cooking, and how they like [INAUDIBLE]. NORVIN W. RICHARDS: Can't say that, yeah. STUDENT: I was wondering if you were ever in a situation where you just asked, oh, well, hypothetically, how would you say that or-- NORVIN W. RICHARDS: So in that particular situation, which was day one of the Field Methods class, I decided not to have that particular fight. So we found out how to use the verb "to cook." And we found out the word for "man." And by the end, we probably could have figured out how to say, "The man cooked the meat." But that's another place for not being judgmental about the person you're working with, I guess. If that's what they think, then fine. We're not going to try to talk them out of that. We can, but not while we're doing linguistics. Other questions about that? Those are good questions. OK. So morphology, then-- so questions like what do you know when you know a word? What's universal? What's learned? And why is the word "unlockable" ambiguous? We'll talk about that, too. So this is a cat. Any questions? It's clear so far. So the word "cat" refers to certain kinds of things, including the thing that I've got a picture of here. Now, I said, one of the things we're going to talk about is what things about language do you have to learn when you learn your native language, or any language? And what other things are universal? So we talked a little bit last time about the hypothesis that our brains are set up in such a way that we can only create language in some ways, and not others. And one of the ways we find out about those aspects of our brains is by finding things, surprising things, that are true of every language that we know anything about-- so "universals," we call those things, things that are just always true. Is it always true in every language that this is called a cat? No, this is a lousy candidate for a universal. 
There are a zillion words for cat out there in the world. Here are some of them. The famous linguist Saussure had a French phrase, which has been translated into English as "the arbitrariness of the sign," as the name for this phenomenon. The fact that this is called a cat in English doesn't follow from anything else about English or about any other language. And if you're going to learn a language, you've got to learn, assuming that language has a word for cat, you've got to learn that language's word for cat. Questions about that? Arbitrariness of the sign-- quick sidebar about that-- of course, there are places where the arbitrariness of the sign seems not to be all that arbitrary. So here are some Passamaquoddy words. Passamaquoddy is a Native American language, Algonquian language spoken up in Maine. There's a kind of bird in Passamaquoddy which is called kuhkukhahs. What kind of bird do you think that is, kuhkukhahs? It could be a cuckoo. Any other theories? It's an owl. So it's their name for a great-horned owl. They have another kind of bird called a kocokikilahsis, kocokikilahsis. What's that? STUDENT: [INAUDIBLE]. NORVIN W. RICHARDS: No, it's a chickadee. Chickadee, kocokikilahsis. So sometimes, there are things that are named after what they sound like. Japanese has a kind of an adverb, "pikapika." It refers to a kind of light, a quality of light. What quality of light do you think it is? STUDENT: Electric. NORVIN W. RICHARDS: Yeah, has to do with electricity, yes. Pikapika. STUDENT: Flashing. Flickering. NORVIN W. RICHARDS: Yeah, flickering, flashing. Yeah. So some of you may have heard of Pikachu. "Chu" is the noise that a mouse makes. It's like "squeak," so Pikachu's like "flash squeak," that particular Pokemon. So yeah, "pikapika" means a flashing brilliant light. If I had told you that "pikapika" meant a soft, gentle glow, that would have been kind of surprising. So there's something non-arbitrary about certain places in the sign. 
There are experiments on this where people show people shapes. And you show people either a jagged shape or a gentle shape with lots of rounded corners. And you say, "One of these is a dub dub, and the other is a tick tick." And people are, like, "Yeah, the thing with the jagged corners-- that's the tick tick." That's clear. I have that idea. But putting aside those kinds of cases, signs are arbitrary. In English, we call cats "cats." But in Japanese, they're "neko" and they have other names in other languages. And even in cases of onomatopoeia-- so these Passamaquoddy words are onomatopoeic. They are names for things that are named after the sounds that they make. But take something like a frog. There are onomatopoeic words for what frogs say in different languages, and they're pretty strikingly different from language to language. Everybody seems to be able to hear something different in the call of a frog. So if we're trying to make a list of everything that you know about your native language, if you're talking about English, one of the things that we've got to list is that you know the word "cat" means cat. And that's one of the many bits of information that's in your head. Doesn't follow from anything. We've just got to learn that. So now, let's think what else is in your mental lexicon, your little list of everything you know about your language? Well, this could be lexical entry number two, "cats." Maybe. So you could have a lexical entry for "cat" that says cat refers to these little, furry things that chase mice. And then you could have another entry, "cats," that refers to more than one of those things. That's one way we could do our lexical entries. But that seems kind of, I don't know, wasteful and inelegant. Because what we're failing to recognize, if we decide to have one lexical entry for "cat" and another one for "cats," is that those two lexical entries seem to have a lot in common. They both start with "cat." 
And they both refer to groups of things-- one of them just one of those things and the other one more than one of those things-- that are furry, and chase mice, and chase laser pointers, those things. And moreover, the difference between them has to do with this S that's at the end. And we see pairs of words that have that S at the end in lots of places. It's not just "cat" and "cats," it's "dog" and "dogs," and "banana" and "bananas," and "computer" and "computers." There's lots of things like that. So there are two kinds of lexicons that we could imagine having, then-- one where we have lexical entries for "cat" and "cats," and "dog" and "dogs," and "computer" and "computers," and so on, and another where we have lexical entries for "cat," and "dog," and "banana," and "computer," and "-s"-- a lexical entry for this other thing that you get to add to nouns to make them plural. And for one thing, if there's any premium at all on storage space, if it's at all useful to have a lexicon that's not all that large, well, lexicon number one is going to be larger than lexicon number two as long as you have any nouns in your lexicon at all. Because lexicon number one-- if you have n nouns, you need 2n forms, singular and plurals, for each of them. Whereas lexicon number two, if you have n nouns, you need n plus 1 forms-- forms for all your nouns plus "-s." I'm oversimplifying, obviously. Not every noun makes its plural with an S. Talk about that. But do people see what I mean there? So there's at least a prima facie reason to take seriously the idea that a word like "cats" is complex. It has two parts. There's the "cat" part and there's the "-s" part. Anyone want to object to that? Sorry. In some languages, it would be really wasteful to try to have a lexical entry for every form of every word. Here's a language of Papua New Guinea. It's called Nimboran. It has four different tenses-- future, and past, and recent past, and distant past. 
There are suffixes on the verb that tell you things about the subject and the object, where there are 14 different types of subjects and objects that it recognizes. So the subject could be singular, or dual, or plural. There are locative suffixes. There are aspects. The result is that if you have a transitive verb, it has 23 and 1/2 thousand forms. Whereas if you have a transitive verb and you are just going to list all of these suffixes, well, you need 49 of the suffixes. So I said for English, if you were trying to list every form-- "cat," "cats," "dog," "dogs," "computer," "computers," you would need 2n forms if you have n nouns. Well, in Nimboran, if you have v verbs, you need 23,520v lexical entries if you're going to list every lexical entry. Take pity on the Nimborans. Don't make them list 23,520 forms in their heads. Allow them, instead, to list their verbs, and the suffixes that go on the verbs, and rules for how those things go together. That will be computationally much simpler. They will not have to store as much stuff. Does that make sense? So that's a hypothesis that we're going to take seriously that our mental lexicon-- yeah, it contains words like "cat," but it also contains parts of words like "-s," this suffix that you can add to "cat" to make it plural. And so we're going to try to understand how those kinds of things interact now. We actually have various kinds of evidence-- it's not just, well, it would save space in your lexicon-- various kinds of evidence that human beings do divide words into these parts, that a word like "cats" is a word with two parts, "cat" and "-s." One straightforward one is the fact that these are productive. So if I make up a new word in English, if I tell you I have invented a cool, new machine which everyone is going to want to have in their house-- it's going to be extremely useful-- and it is called, let's see, I'll give it a name. It's called a Blurk. So that's my brand name. I'm going to sell the Blurk. 
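[The storage comparison just described is simple arithmetic; here is a sketch in Python. The figures are the ones from the lecture: 2n versus n + 1 for English nouns with "-s", and 23,520 forms per transitive verb versus 49 suffixes for Nimboran.]

```python
def english_noun_lexicon(n):
    """English plurals: list every form, vs. list stems plus the one suffix "-s"."""
    return 2 * n, n + 1          # (full listing, stems + "-s")

def nimboran_verb_lexicon(v):
    """Nimboran transitive verbs: 23,520 forms each, vs. stems plus 49 suffixes."""
    return 23520 * v, v + 49     # (full listing, stems + suffixes)

print(english_noun_lexicon(10000))  # (20000, 10001)
print(nimboran_verb_lexicon(1000))  # (23520000, 1049)
```

[The point of the sketch: the full-listing lexicon grows multiplicatively with the morphology, while the stems-plus-affixes lexicon grows additively.]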
I'll make millions. Don't worry, I'll remember all of you when I'm on my yacht, working on my tan. And the Blurk-- what's the plural of "Blurk" going to be? STUDENT: "Blurks." NORVIN W. RICHARDS: Yeah, if you want to buy more than one Blurk, as I recommend that you do, those things are going to be Blurks. And there's no difficulty about that. You know that English has this suffix that makes plurals, and you can add it to a word that you've never heard before because I just made it up. That would be surprising if your mental lexicon contained "cat," "cats," "dog," "dogs," "computer," "computers"-- just a bunch of words that accidentally kind of resemble each other. It looks as though that's not what your lexicon contains. It contains a bunch of nouns and a general principle for how you make things plural, which you get to apply to "Blurk." Does that make sense? It's been shown-- there's this classic series of experiments by Berko in the '50s-- it's been shown that very young children can do this. We will talk more about this later, but she had these pictures. So she would show kids a picture of something. She'd say, "This is a wug," and then the picture of two of the things that she had created. And now there are two of them. There are two-- and the kids would all go, "Wugs." Later, people began selling things with wugs on them. They're a big deal among linguists. So one reason we have to think that you have a lexical entry for "-s" is that it's a procedure that you can apply to novel words. Similarly, we seem to be hungry to create these things, these suffixes. Classic case is "gate." So the original gate was Watergate, which was before I was born, so a really long time ago. So sometime in the previous millennium, there was this scandal involving President Nixon and something that happened at the Watergate Hotel. The Watergate scandal was not a scandal about water. It got its name because it happened at the Watergate Hotel. 
But people took the "gate" of Watergate-- because Watergate is so easily divisible into these two parts, because "water" is a word-- people took "gate," and it's become a suffix now. So it's attached to words to give names for scandals. So Nixon had Watergate, Clinton had Monicagate. There was Irangate. There have been various gates. So "-gate" is now a suffix meaning "scandal." People seem to be hungry to create suffixes, even where they weren't there before. Yeah. STUDENT: So like "-burger" and "-mageddon"? NORVIN W. RICHARDS: Yes, right, like "-mageddon" or "-palooza." Yeah, there are many of these that people create out of whole cloth. That's a very good example. Other examples in the history of English-- there are words like "sculptor," and "beggar," and "swindler," which entered the languages as nouns. So "sculptor" is from a Latin verb. It involves a Latin suffix "-tor," which is used to create people who do things. And "beggar" and "swindler," similarly. It's an accident that they end in "-er." But "-er" in English is a suffix that you add to verbs to create nouns meaning person who does that. So you get "teach" and "teacher" or "sing" and "singer." And so through a process of what's called back formation, people heard these words that ended in "-er," and they thought, "Oh, that's '-er.' I know that from 'teach, teacher.'" And so they made up verbs that didn't exist before, like "sculpt," and "beg," and swindle by "removing" the "-er," basically, which hadn't originally been added. Similar thing with "pea." So originally, "pease" was a mass noun. It referred to this group of small, little, round, green things that you could eat lots of. So "pease" was a mass noun. It was like "water," or "sand," or "ketchup." But it happened to end in "-s," so it looks like a plural. And it refers to something that you can divide into lots of-- there are the individual little, round, green things. And so people made up a singular "pea." And that's where that came from. 
But originally, the word that we get from old English is "pease." And it was originally not a plural. People reinterpreted it as a plural. So our mental dictionary, our lexicon, seems to have these things, these parts of words. And in fact, so we can use them on unfamiliar words. And we seem to be hungry to create them in places where there's any suggestion that a word could be divided into parts. Yeah. STUDENT: When you use the word "lexicon," do we think of it as a dictionary of words and workings? Or are we talking about the rules as well? NORVIN W. RICHARDS: So it's day two. It's not too clear yet what we're talking about, is it? But I think right now, all we have to be talking about is a list of the words and the morphemes. And you're absolutely right-- another thing that we'll have to have is a set of rules for how those things combine. Actually, that's what we're going to move on to next. That's absolutely right. So for example, on day one, we talked about the fact that the S has different pronunciations depending on what you're attaching it to. We want to have an account of that kind of thing. Sometimes it sounds like an S and sometimes like a Z. Sometimes, there are nouns that don't take an S at all. So the plural of "ox" is "oxen." I'll have to talk about that, too. Good question. Other questions about this? These are all reasons to make you take seriously the idea that words can be divided into parts, that "cats" consists of "cat" plus a suffix, both of which are maybe listed in the set of things that you know in the lexicon, this list of things you know about your language. OK, good. So now, some terminology, including a word that Raquel just said, so get all of you caught up. Words like "cats," or "dogs," or "atrocity," or "culpable," or "unworthy"-- we're saying these are words that have multiple parts. We can productively divide them and it's useful to divide them into parts. Any part of a word, these parts of words, are what are called morphemes. 
So "cat" is a morpheme. "Dog" is a morpheme. "-s" is a morpheme. So is "atroc-" and "-ity" in "atrocity." Those are all morphemes. So that's the name for these parts of words combined. People sometimes distinguish the root, which is the thing to which prefixes or suffixes are added, things like "cat," and "dog," and "culp-," and "worth-," and "atroc-." Those are all roots. So they are the things to which you are going to add suffixes or prefixes, other kinds of things. And then "-s" and "-ity" and "-able" and "un-," those are all affixes. These are all-- most of them are suffixes. One of them is a prefix. But "affix" is a general term for these things that you add to roots. And then there's one last distinction to draw, which is between things that are free and things that are bound. So to say that a morpheme is free is to say that you don't have to add anything to it. It can be a word by itself. So "cat" is a free morpheme. "-s" is a bound morpheme. So is "culp-" and "atroc-." So "culp" isn't a word, "culpable" is a word. "culp-" is a root, so it's a bound root. You need to add a suffix to it, but it's something to which you add suffixes. Yes. STUDENT: Are all three morphemes in English? NORVIN W. RICHARDS: Yes. Yeah, good question. Yeah. STUDENT: That doesn't necessarily apply [INAUDIBLE] NORVIN W. RICHARDS: Yeah, so clitics-- I wonder whether we will manage to talk about clitics seriously. But yes, you're right. There's another kind of thing-- "clitic" is this name for these things-- that needs to be attached to something else. Yes, that's a very good point. Any questions? So this is just all terminology. It's terminology you will hear me use, probably. A word can consist of just one morpheme, like "cat" or "dog." And then there are other words, like "cats" or "industrialization," that consist of multiple morphemes. 
And then one other distinction that people draw-- and I hope that nobody will ask me to define this one very carefully, because it's a quick and dirty distinction people draw. I promise that I will never harm you in any way for misusing this. So you will not get counted off on problem sets for this. People sometimes draw a distinction between what are called "open-class morphemes" and "closed-class morphemes." The idea is open-class morphemes are supposed to be morphemes for which you could make up another one. You could add to the class of open-class morphemes, that's why it's called open. So there are nouns like "xerox" and "laser" that are new nouns. They came into the language comparatively recently. When you discover a new thing, you come up with a name for it, and that's a new noun, you add it to the open class of nouns. Or verbs-- there are verbs like "google" and "fax," or adjectives like "cromulent," that have been added to the language comparatively recently, as opposed to morphemes like "in," or "at," or "on"-- prepositions where it's unclear that you can make up a new preposition or a new determiner, a word like "the" or "an," or a new auxiliary, a word like "will" or "has," in "The professor has slipped on a banana peel." You have that kind of "has." We don't seem to be able to do that. Now, we invented new nouns like "xerox" and "laser," or new verbs like "google" and "fax," because technology created new things that we could do. I suppose you could imagine a future world in which, say, time travel suddenly becomes available. And then it would be useful to have an auxiliary that means something like "happened in the past from our viewpoint, but from the future from someone else's viewpoint," or something like that. It's unclear whether, when that happens, we will just do that-- make up those auxiliaries-- or whether we will be neurologically unable to because, well, auxiliaries are closed-class morphemes. And you can't make more. 
Interesting open question that we'll try not to talk about anymore. All terminology. One piece of literary evidence for the open and closed distinction-- this is a point that my colleague, David Pesetsky likes to make when he teaches this class. It's a nice point. The poem "Jabberwocky" from Alice in Wonderland, Alice Through the Looking Glass, is a poem which is tricky to understand. So many of the words in it have been replaced by nonsense words. But crucially, the words that have been replaced by nonsense words-- they're all open-class words. So "'Twas brillig, and the slithy toves did gyre (HARD G) and gimble in the-- gyre (SOFT G) and gimble in the wabe," those blue things, they're all open class. They're nouns, and verbs, and adjectives, so adjectives like "brillig" and "slithy," nouns like "toves" and "wabe," and verbs like "gyre" and "gimble." And the poem only works because we're open to the possibility that there are open-class nouns, and verbs, and adjectives out there that we don't know, so that's the sense in which we can make sense of this poem. And you can tell that these are nouns, and verbs, and adjectives. Like "toves" looks plural. It ends in that same "-s" that we get in "cats," and so on. If you tried to do "Jabberwocky," but replace the closed-class morphemes with nonsense-- so I tried to do that here-- it's also difficult to understand, but in a different way. So "Jabberwocky," you read it and you think, oh, my vocabulary must be too small. I wish I knew what these nouns were. Whereas this thing, you just think, oh, the professor has some type of neurological problem. Someone should call the police. Something terrible has just happened. I won't even try to read it aloud. It would be too upsetting. So all I've done here is to try to go through-- so all of the adjectives, and nouns, and verbs-- they're real ones, and I've just replaced the auxiliaries, and the word for "and," and the prepositions, and the determiners. 
I've replaced those with nonsense. And the result of that is disturbing. It's not the same as "Jabberwocky." That's a reason to take seriously the idea that open-class and closed-class morphemes-- they're different in a way that something in our brain cares about deeply. OK, cool. So what's in our lexicon? We have these entries that have information about sound. So here's a word. It's pronounced "cat." We're going to talk later about how to talk carefully about how to pronounce words, like what exactly you're saying when you say that it's pronounced "cat." But yeah, so one of the things that's known about this word is that it's pronounced "cat." Something about its meaning-- whatever. I showed you a picture of a cat before. Maybe that was better than what I've done here. And then the kinds of information that I've been talking about-- you have to list whether the morpheme is bound or free. Because that's not the kind of thing that is uniform across languages, necessarily. So English, for example-- we've now several times said that the plural morpheme is a suffix, an affix. It's a bound morpheme. It needs to be attached to something. In Tagalog, the plural morpheme is pronounced "mga." It's the boldfaced morpheme there. And it's not a bound morpheme. It's a freestanding word, so it doesn't have to be attached to anything. So the way you say "big bananas" in Tagalog is "mga malalaking saging." "Mga" is this freestanding word. It's at the beginning of that phrase. It doesn't have to be anywhere near the word for banana. So it's not attached to anything. Tagalog orthography is mostly fairly straightforward. This is one of the two words in Tagalog that's not spelled more or less the way it's pronounced. So that's the plural morpheme. It's pronounced "manga," this thing that's spelled M-G-A. So you need to-- for a lexical entry, a word that's in your lexicon, we're going to need to list how it's pronounced, what it means, whether it's bound or free. 
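[The kinds of information a lexical entry has to carry can be sketched as a small record. This is just an illustration, not a standard notation: the field names and glosses here are my own, and the entries are the examples from the lecture.]

```python
from dataclasses import dataclass

@dataclass
class LexicalEntry:
    form: str       # how it's pronounced (informally spelled here)
    meaning: str    # what it means
    bound: bool     # bound morphemes must attach to something
    position: str   # "prefix", "suffix", or "free"

LEXICON = [
    LexicalEntry("cat", "small furry animal that chases mice", bound=False, position="free"),
    LexicalEntry("-s", "plural", bound=True, position="suffix"),         # English plural
    LexicalEntry("mga", "plural", bound=False, position="free"),         # Tagalog plural word
    LexicalEntry("yuud-", "past tense", bound=True, position="prefix"),  # Lardil past
]

# The same meaning ("plural") is a bound suffix in one language and a
# freestanding word in another; that's exactly what the lexicon must record.
plurals = [(e.form, e.bound) for e in LEXICON if e.meaning == "plural"]
print(plurals)  # [('-s', True), ('mga', False)]
```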
Here's another example of languages varying with respect to whether things are bound or free. In English, the word "friend" is a free morpheme. In Passamaquoddy, the word for "friend" is a bound morpheme. It needs to be attached to something. So in English, you say, "my friend." The Passamaquoddy word for "my friend" is "nitap." There isn't a Passamaquoddy word "itap," so you have to add a prefix to the word for "friend" in Passamaquoddy, indicating whose friend it is. It can't just be a word by itself. So it's bound, needs to be attached to something else. It's not a complete word. So the lexicon of a given language has to indicate whether particular morphemes are bound or free. It has to indicate whether morphemes are prefixes or suffixes. So for example, English has suffixes indicating the past tense. So we have the verb "dance," and it has a past tense "danced," where we add this D to make it past. Lardil has prefixes that indicate that verbs are in the past tense. So the verb "to dance" is "luuli," and the past tense is "yuud-luuli," so there's a prefix "yuud-" that you add to the verb to make it past. So you have to indicate whether morphemes are bound or free, whether they're prefixes, or suffixes, or something else. So languages can vary with respect to what kind of affixes they have, and where they go, and what they mean. And morphology is the study of the rules governing this kind of variability. And that's what we're going to start the semester with, is spending some time studying morphology. Part of the work of morphology is doing morphological analysis of languages that are not familiar, so looking at data sets and trying to piece together what all the morphemes are, trying to separate out the morphemes and figure out what they all mean. You will surely be given problem sets in which you are asked to do this. So I wanted-- I think this is what I'm doing next-- yeah, I wanted to show you a data set of the kind that we might ask you to deal with. 
And let's work through it together and try to figure out what the different morphemes are. So here are a bunch of verbs in Swahili. Anybody speak Swahili? Kind of? Sort of? Try to suppress your knowledge of Swahili for a second. Nobody ask her what the answers are. Here are a bunch of verbs in Swahili. The first thing you want to do is look through all of these and look for words that have something in common in their meaning, and also seem-- and then try to figure out which part of the word is the morpheme that has that meaning. This is where I wish it was easier to show you things on a blackboard. But does anybody see a morpheme in this list? Oh, many people see morphemes. Yes. STUDENT: "ni" tends to indicate "I." NORVIN W. RICHARDS: "ni" looks like it means "I." So we've got "ni" in the first one, "ni" in the third one, "ni" in the fourth one, "ni" in "I will get it." Yeah, a whole bunch of "ni"s. Yeah, sounds good. Yes. STUDENT: [INAUDIBLE]. NORVIN W. RICHARDS: Say it again. STUDENT: [INAUDIBLE]. NORVIN W. RICHARDS: Oh, yes, the verb "got"-- that's absolutely right-- looks like it's "pata." So all of the verbs that have "got" as part of their meaning-- if you look through this, I think there are only two verbs here-- there's "got" and there's "hit." And "got" looks like it's "pata." What's "hit?" STUDENT: "piga." NORVIN W. RICHARDS: "piga," yeah, good. Yes. STUDENT: Actually, it seems like "pata" is more [INAUDIBLE].... And then there's "li" versus [INAUDIBLE].... NORVIN W. RICHARDS: So there's a "li." So you're seeing a "li." Where are you seeing a "li"? STUDENT: [INAUDIBLE]. NORVIN W. RICHARDS: So you get "nilipata" for "I got" and "nita"-- do we see nitapaka or nita-- oh, we've got "nitakipata" for "I will get it," don't we? Yeah. Sounds good. Let me move forward on the slide. So this is great. People are doing exactly what you want to do. 
You can look through all of this stuff, look for patterns that you seem to see, try to isolate those patterns to figure out what's going on. So one way to do this would be to start by finding the verb stems. I deliberately jumbled the presentation at the beginning, just put them in a random order. But one thing to do, and you guys didn't need me to do this for you-- you did it in your heads-- you make yourself a list of all the words that have "get" or "got" and all the words that have "hit." And then you would discover that "pata" is the one for "get" and "piga" is the one for "hit." And then maybe similarly, keep sorting them by their affixes. And if you kept doing that, you would discover that there was a subject prefix at the beginning that's either "ni-" or "u-" or "wa-." And then, that there is a "li-" prefix for past and a "ta-" prefix for future, and that there are also prefixes for the objects, "ki-" and "ku-" and "tu-" and "wa-." So if we kept doing what we were just doing, this is where we'd eventually end up. That's the analysis we would end up with. Is that clear? Does that make sense? That's what we would be asking you to do. We will surely ask you to do this on a problem set. So we'll give you a bunch of data, and say, go through this. Find all the morphemes. And that's the kind of thing we'd want you to do. If there's anybody who is secretly thinking to themselves, I have no idea what just happened, that made no sense to me at all-- don't panic. Send me email or talk to your TA during recitation. This is the kind of thing that we'll try to practice some before you have to do it for real. And then the thing to do-- test yourself. How do you say, "They will get us?" How do you say, "They will get us?" So that was Swahili. The Swahili involved some verbs that had a bunch of prefixes on them. And we have mostly been talking about prefixes and suffixes as the kinds of affixes that you add to things. 
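The analysis we just assembled can be sketched as a small lookup-and-glue procedure. This is only an illustration: the morpheme tables are the ones recovered from the Swahili data above, but the function name and the English glosses used as keys are my own.

```python
# Morpheme inventories recovered from the Swahili data:
# subject prefixes, tense prefixes, object prefixes, verb stems.
SUBJECTS = {"I": "ni", "you": "u", "they": "wa"}
TENSES = {"past": "li", "future": "ta"}
OBJECTS = {"it": "ki", "you": "ku", "us": "tu", "them": "wa"}
STEMS = {"get": "pata", "hit": "piga"}

def swahili_verb(subject, tense, verb, obj=None):
    """Assemble a verb as SUBJECT-TENSE-(OBJECT)-STEM."""
    parts = [SUBJECTS[subject], TENSES[tense]]
    if obj is not None:
        parts.append(OBJECTS[obj])
    parts.append(STEMS[verb])
    return "".join(parts)

# "I got" and the quiz question "They will get us":
print(swahili_verb("I", "past", "get"))             # nilipata
print(swahili_verb("they", "future", "get", "us"))  # watatupata
```

So the answer to the self-test is "watatupata": wa- for "they," ta- for future, tu- for "us," and the stem pata.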
Just as a warning, there are other kinds of affixes out there in the world. So I want to show you some of the other kinds of things that exist. Tagalog is kind of famous for having what are called "infixes." So it has these morphemes that sometimes go in the middle of words. We've got a bunch of Tagalog words. They're all in the past tense. They all contain a past tense morpheme. What's the past tense morpheme? STUDENT: "um." NORVIN W. RICHARDS: "um," yes, it's "um." That's right. So you see it. I think I highlighted it on the next slide. I hope I was smart enough to do that. Yes, there we go. So Tagalog has infixes. "um" goes-- where does it go? What's the rule for where it goes? Yes. STUDENT: [INAUDIBLE]. NORVIN W. RICHARDS: Yeah. STUDENT: Before the first vowel. NORVIN W. RICHARDS: Before the first vowel is another way to say the same thing, yeah. So in the words that start with vowels, it goes at the beginning because that's the first vowel. In the words that start with consonants, well, it goes before the first vowel. Exactly. Yeah, that works. Yeah, that works for Tagalog. Now, we have to be careful when we say what an infix is. So let me try to be careful about it. Because I think I was careless about it a second ago. So let me be careful. An infix is an affix for which the rules say something like-- the rules for where it goes-- say something like, put it before the first vowel, or put it after the first consonant, or sometimes, put it after the first syllable. A prefix is an affix for which the rule says, put it at the beginning. A suffix says, put it at the end. That's what infixes and prefixes and suffixes are. And I'm being careful about this because this is something people sometimes get confused about. A Swahili verb, like "I will get you there," doesn't have any infixes in it at all. It's got a verb, "pata," which is preceded by three prefixes, each of which was put-- the rule for "ku-" says, put me right before the verb. 
And the rule for "ta-" says, put me right before "ku-." And the rule for "ni-" says, put me at the beginning. Those are all prefixes. So none of them say, put me inside something else. None of those are infixes. They're all prefixes. And I'm saying that slowly and carefully because by the time you're done with all these prefixes, well, they aren't all at the beginning of the word. There's a string of prefixes, but they're all prefixes. Does that make sense, the distinction that I am drawing between prefixes that, through no fault of their own, find themselves not at the beginning of the word, and infixes which have rules like, put me before the first vowel. I don't care where the word begins. Does that make sense? That's how we distinguish these things from each other. And as I say, I'm going through it slowly because it's something that's confusing. It's something people get confused by. So if you're feeling confused, speak up. Tagalog has infixes. There are languages like Egyptian Arabic, not every variety of Arabic, that have what are called templates. Here are a bunch of verbs in Egyptian Arabic involving "live" and "enter." Anybody here speak Arabic? Yeah, OK, so again, try to suppress your knowledge of Arabic for just a second. What do you think the morpheme is that means "live"? STUDENT: Maybe S, K, vowel, N. NORVIN W. RICHARDS: So what do those verbs all have in common, those forms? Yeah, they all have S-K-N. Or enter-- those all have D, X, L. Now X is that sound. This is something the Semitic languages are famous for. They have what are called triliteral roots, verbs for which the morpheme is just a string of consonants, and usually three. There are verbs that have two. And the other morphemes-- the morphemes that tell you things like tense, and who the subject is, and things like that-- are often vowels that you put in between the consonants that make up the verb. It's called templatic morphology. So S-K-N is to live in, and D-X-L is to enter.
And past tense with the third person subject is two a's. So you get verbs like "sakan" and "daxal." I'm probably saying them badly. Do you want to pronounce them for us? I'm probably not pronouncing them right. Can you pronounce them? STUDENT: [INAUDIBLE]. NORVIN W. RICHARDS: Say it again. STUDENT: [INAUDIBLE] NORVIN W. RICHARDS: Cool. Yeah, so that's how this type of morphology works. Tagalog again-- Tagalog has another kind of morphology, which is cool. It's called reduplication. This is how you form the future tense of certain types of verbs in Tagalog. So the future of "swim" is "lalangoy." The future of "eat" is "kakain." The future of "become tall" is tataas. What's the future morpheme in Tagalog? STUDENT: [INAUDIBLE]. NORVIN W. RICHARDS: Yeah. STUDENT: [INAUDIBLE]. NORVIN W. RICHARDS: Yeah, it's like copy the first consonant and vowel. That's how you do futures in Tagalog. It's specifically the first consonant. And Tagalog does have syllables that end with consonants. And you don't copy the consonant at the end of the syllable. But Tagalog has several different kinds of reduplication. Here's another one. If you want to say that something is rather adjective, there are a bunch of adjectives that start with "ma-." Not all adjectives do, but there are a bunch of adjectives that start with a prefix "ma-." And then the way to say that something is rather adjective is to reduplicate the first two syllables of the adjective. If the adjective is only two syllables long, then you reduplicate the whole thing. So you get "mataas-taas" is "rather tall," or "malapit-lapit" is "rather close." If it's longer than that, then you just copy the first two. So you get "matali-talino" for "rather intelligent." So reduplicative morphemes-- there are three or four different types of reduplication in Tagalog. You've got a morpheme which is a prefix, and it is sort of specified for phonological content, but just how many syllables. 
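Each of the affix types just described is, mechanically, a different string operation. Here is a sketch of the three: um-infixation, template filling, and CV-reduplication. The function names are invented, and the Tagalog verbs "bili," "sulat," and "alis" are common examples I've added, not words from the slides.

```python
VOWELS = "aeiou"

def infix_um(stem):
    """Tagalog past: insert -um- immediately before the first vowel."""
    for i, ch in enumerate(stem):
        if ch in VOWELS:
            return stem[:i] + "um" + stem[i:]
    return stem  # no vowel found: leave unchanged

def fill_template(root, vowels):
    """Semitic-style template: interleave a consonantal root with vowels,
    e.g. C-V-C-V-C for a triliteral root and two vowels."""
    out = []
    for i, c in enumerate(root):
        out.append(c)
        if i < len(vowels):
            out.append(vowels[i])
    return "".join(out)

def reduplicate_cv(stem):
    """Tagalog future: copy everything up through the first vowel,
    so the first consonant and vowel but not a syllable-final consonant."""
    for i, ch in enumerate(stem):
        if ch in VOWELS:
            return stem[:i + 1] + stem
    return stem

print(infix_um("bili"))            # bumili ("bought")
print(fill_template("skn", "aa"))  # sakan (past, third person)
print(reduplicate_cv("langoy"))    # lalangoy ("will swim")
```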
Most prefixes, you've got a prefix, and it's got consonants and vowels in it, and you put that before. You get an English prefix, like "un-" for "unhappy"-- that's a prefix. It means whatever it means. And it consists of the vowel, "uh," and the consonant, N. And you put that before adjectives to make other adjectives that mean not adjective. Reduplicative morphemes are also prefixes or suffixes, but they don't have any specification as to what consonants and vowels are in them. You just copy whatever is already there. That's how reduplication seems to work. When I was first studying Tagalog, we had contests to see who could repeat the most syllables. I remember one that we tried to pass off on our teacher was [SPEAKING TAGALOG] which means, "they are regretting it a little bit all the time," or something like that. There's a Tagalog joke Tagalogs like to tell about Filipinos who are in an elevator, and there's an American who's also in the elevator who is astonished when the elevator doors open, and one of the Filipinos, who's outside, asks the other, "bababa ba," which involves the verb "baba," which is "to go down," reduplication, and the question particle, which is "ba." So "going down?" is "bababa ba." Tagalogs like the idea of an American having to react to that conversation. So how are we doing? So this is all a little tour through non-intuitive kinds of morphemes. So yes, there are prefixes. Yes, there are suffixes. But there are other kinds of morphemes out there in the world. And I want to warn you about those partly because you guys are all going to scatter and work on whatever languages it is you found to work on. And I want you to be prepared if you encounter some of these types of morphology out there. There is another kind in O'odham, which is a language spoken by the traditional owners of the area around Tucson, Arizona, the Tohono O'odham, the desert people. Yes. STUDENT: [INAUDIBLE]. NORVIN W. RICHARDS: Oh, cool.
So yeah, this is the language of your place. So it has what are called imperfect and perfect forms of the verb, which you can think of as being kind of like present tense and past tense. And if you look at these, we're kind of cheating because I've got the name of this type of morpheme up there at the top. If you look at these, it turns out if you study a lot about O'odham, you can convince yourself-- so if you look at them, you can see the imperfect forms are a little bit longer than the perfect forms. So the perfect forms are things like "neo" and "nei" and "hin." And the imperfect forms are forms like "neok" and "neid" and "hink." So your first hope might be that the imperfect versions involve suffixes, that you're starting with the perfect versions and adding suffixes. But if you look at these verbs, you can hopefully convince yourself that that's hopeless. The suffix is just kind of-- the consonant that's at the end just varies a lot from verb to verb. Does that sound right? So in fact, the going hypothesis about what's going on in O'odham is that this is a linguistically very unusual kind of morpheme. It's called truncation. The way you form the perfect form of the verb is to take the imperfect form of the verb and remove its last consonant, whatever it is. So it's called truncation. You're not adding something. You're subtracting something. Morphology usually doesn't subtract, it usually adds. But this seems to be what's happening in O'odham. And then there are other things-- tone. So there are tonal languages in the world. If you've had any interaction with Mandarin, for example, you've heard of tonal languages, languages in which the pitch of your voice has an effect on the meaning of the words that you're saying. So there are languages out there in which there are morphemes that are tones. So in Dinka, which is a Nilotic language spoken in South Sudan, there's a popular Dinka name, "Bol," which is a low-tone noun.
It can be inflected for case, so to indicate that it is the possessor of another noun, for example, if you want to say things like "Bol's brother." But what you do is to change the tone of "Bol." So instead of just being "Bol," it's now "Bol," so it's a falling tone. So Bol's brother is "manh e Bol," so brother of Bol, and "Bol" changes its tone. There was an NBA basketball player for a while who was Dinka, Manute Bol. It's apparently a really common name. We spent a semester trying to understand Dinka in a graduate-level Field Methods class. Most Dinka morphology is tones, and vowel length, and other kinds of things. We were always deathly afraid that we were missing half of the morphemes because they're quite hard to hear. And then-- this is the last thing before I let you go-- there are arguably cases of morphemes that are not pronounced at all. So there are words like "cat," and "dog," and "sheep," and there are plural morphemes like "-s." And then maybe the plural of "sheep" also has a plural morpheme. It's just that that plural morpheme is pronounced-- You can't tell, but I'm opening my mouth. So maybe there are morphemes that have no pronunciation at all, they're just morphemes. You shouldn't necessarily believe that just looking at this, but it's a possibility that we might want to take seriously. All right, good. I think this is probably a good stopping point. So just to summarize, and then I'll let you go-- for every morpheme, we're going to want to indicate what's its sound, what's its meaning. Is it a bound morpheme or a free morpheme? And if it's bound, what kind of bound morpheme is it? Is it a prefix, a suffix, an infix, any of these other kinds of things? All kinds of things you'd want to understand if you want to understand everything about the morphology of a language. We'll pick it up here next time.
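One more sketch before moving on: the O'odham truncation rule is unusual in that it deletes material rather than adding it. Treating anything outside a, e, i, o, u as a consonant (an oversimplification of O'odham spelling), the rule looks like this:

```python
VOWELS = "aeiou"

def perfect(imperfective):
    """O'odham perfect: drop the verb's final consonant, if there is one."""
    if imperfective and imperfective[-1] not in VOWELS:
        return imperfective[:-1]
    return imperfective

for verb in ["neok", "neid", "hink"]:
    print(verb, "->", perfect(verb))  # neok -> neo, neid -> nei, hink -> hin
```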
MIT 24.900 Introduction to Linguistics, Spring 2022. Lecture 16: Syntax, Part 6.

[SQUEAKING] [RUSTLING] [CLICKING] NORVIN RICHARDS: All right, why don't we start up? So we're continuing to do syntax. I think I promised you that I was going to have a look at the syllabus, and I sort of did. I think if-- I think we will probably finish syntax not today, but on Thursday. And then we will stop doing syntax. So if there's anyone who is tired of syntax, I'm sorry. Please bear with it for another week, and then there will be no more syntax for a while. This is today's "XKCD." It's not true, as far as I know, or maybe I'm the only one who didn't get a word. That's-- I guess that's another possibility. We'll have to ask-- ask your TAs in the sessions tomorrow. OK, so one topic that we keep coming back to is the fact-- seems to be a fact-- that although there are many kinds of languages in the world, there are not as many as you could imagine. So various types of languages, places where we've seen that languages make choices with respect to how they do this or that or the other syntactic thing. But you don't find all of the choices that are logically possible. So for example, we've seen that there are languages that move their wh-phrases to the left periphery of the clause and there are languages that just leave their wh-phrases where they are, but there don't appear to be languages that move their wh-phrases anywhere else. There aren't languages that move them to the end of the clause or that put them in the exact middle of the clause or anything like that. So there's variation, but not as much variation as you could imagine. I want to talk about another way, another case of that kind, a place where there's more than one way to build a language, but there aren't as many ways as there could be. So we're going to talk about the embedded clause of this sentence, "I thought that Mary ate sushi with chopsticks."
And actually, let's-- just as a class exercise, let's diagram that embedded clause. So don't worry about the matrix clause. So that-- what's her name? "Mary--" oh dear, I'm going to make it future-- "will eat sushi with chopsticks." So let's draw a tree for this. Maybe we could start by labeling everything. What's "chopsticks?" What type of word is that? AUDIENCE: Noun. NORVIN RICHARDS: It's a noun. And "with?" AUDIENCE: Preposition. NORVIN RICHARDS: Preposition. And "sushi?" Noun-- a particularly delicious noun. And "eat" is a verb. And "will" is a tense, and "Mary" is a noun-- one of my favorite nouns. And "that?" It's a complementizer, C, excellent. And if anybody is sitting there quietly thinking, "What? Where did all that come from?" ask your TAs, or we can talk about it now. Is there anything on here that people are like, wait, no. Why did you do that? No? OK. All right, so now let's begin merging things. Somebody give me two nodes on this tree that I ought to merge. Faith? AUDIENCE: "With" and "chopsticks." NORVIN RICHARDS: "With" and "chopsticks." And what label should the resulting thing have? AUDIENCE: Preposition. NORVIN RICHARDS: Yeah, this is a prepositional phrase. And because this is the highest thing with the node N, I'll give it a P too, so that's now a noun phrase. Yeah? Joseph? AUDIENCE: "Eat." NORVIN RICHARDS: "Eat" and "sushi." And what label should I give this? AUDIENCE: Verb? NORVIN RICHARDS: Yeah, it's going to be a V bar in the end. I'll just call it a V for now. And now this is an NP because its label didn't project. Yeah? What else should I merge? AUDIENCE: "That Mary?" NORVIN RICHARDS: "That Mary?" Well, I could, yeah. And what would I project there? AUDIENCE: A complementizer phrase. NORVIN RICHARDS: This is going to be a complementizer phrase? OK, possibly. And then this would be a noun phrase. We did that, yeah. Any other nodes here that I ought to merge? Including nodes that we've made in the course of doing this. Yes?
AUDIENCE: "Eats sushi with chopsticks." NORVIN RICHARDS: Yeah, "eats sushi" and "with chopsticks." And what label will that have? That's a verb phrase. Cool. And so now this is a V bar. Joseph was right. Anything else I should merge? Yeah, sorry. Go ahead, Faith. AUDIENCE: The two words that should be-- NORVIN RICHARDS: I think we should merge these two things, yes. And we should give, as you just said, the whole thing the label T. Yeah, you're right. Is that what you were going to say? Yeah. What else should I merge? Yes? AUDIENCE: Just a question. NORVIN RICHARDS: Yeah. AUDIENCE: Could you quickly explain the bars again, versus the-- NORVIN RICHARDS: Oh, so here we have three nodes. This is a nice example here. We have three nodes, all with the label V. P is just the name for the highest one, yeah. And the lowest one doesn't get a mark, or sometimes you'll see people put a raised zero just to mark the fact that it's the lowest node. I haven't been bothering with that. And then everything else is given the label bar. It's-- yeah. Yeah, yeah. Just higher than zero and lower than P. Syntactic math-- we count 0, bar, P. That's how we count. What other things should I merge with each other? Does anybody remember the EPP? One of my favorite Ps. It says that TP must have a specifier. So it's responsible for the fact that you can't just say things like "is obvious that syntax is fun." You put this "it" here and this "it" is what we were calling an expletive. It's a meaningless thing that you put there so that TP can have a specifier. If I were going to draw a tree for "it is obvious that syntax is fun," it would have a TP and "it" would be in the specifier of that. We'd have a verb phrase, "is obvious," blah de blah. So I'd have a tree sort of like that one. Joseph? AUDIENCE: What does EPP stand for? NORVIN RICHARDS: Well, I encouraged you not to worry about the answer to that, but the answer is that it stands for "extended projection principle."
And it is also a parameter-- sorry, let me see if I can spell "principle" correctly. It is also a parameter in that English has it-- and there are other languages that have it, like French-- but it's actually not all that common. There are plenty of languages out there that don't have this. So there's a general principle that TP must have a specifier. That is, there must be a structure like this one where there's a TP that has, as its daughters, something-- it's usually a noun phrase-- and then a T bar. So we're not yet there. This TP doesn't have a specifier, so it needs one. Yes? AUDIENCE: Can "that" be a specifier? NORVIN RICHARDS: So in a sentence where a CP is the subject of a clause-- so in a sentence like "that syntax is fun is obvious," where the subject of the predicate "is obvious" is this CP, "that syntax is fun," where "that" is its head, then yeah. So if this were a clause, then yeah, it could be the specifier of TP. Joseph? AUDIENCE: Could "Mary" not merge with the T bar, and then T merges with-- NORVIN RICHARDS: I think we might want to think about it that way, yeah. We might want to make "Mary" the specifier of TP, satisfying the EPP, yeah. And then all we've got left is a C, and we can make that C the sister of this TP, yeah. Is that a tree that people are not too unnerved by? All of you have all the nerves you had before you looked at it. Yeah? OK. All right, so there is a tree for the embedded clause, the boldfaced embedded clause "that Mary ate sushi with chopsticks." And here is hopefully the same tree. Oh, I put it in a "could," yeah. So "that Mary could eat sushi with chopsticks," same deal. OK? All right. Now let me call your attention to a fact about this tree, which some of you may already have noticed. A general rule when you're drawing a tree for English, that if you have a head and the head has a sister, the head goes before the sister. 
So we have prepositional phrases like "with chopsticks" and verb phrases that have-- if they have an object, you get the verb before the object, "eat sushi." And T, like "could" or "will," which we have on the board, precedes its complement, the verb phrase. And C, the "that" which is up there, precedes its complement, which is the TP. We have all these blue arrows. On the tree, the blue arrows are just meant to-- you don't have to draw them if you're drawing an English tree. They're just there to dramatically represent the fact that here we have all these heads, and they're preceding their complements. Yeah. What would English look like if heads followed their complements? Well, you'd get weird orders like that one-- "Mary chopsticks with sushi eat could that," which is not English. But it is Japanese. So in Japanese, the way you say "that Mary could eat sushi with chopsticks" is literally something like "Mary chopsticks with sushi eat could that." If any of you have studied Japanese or are thinking about studying Japanese, be aware that this is something you'll have to learn to cope with, saying your sentences in this different order. So you say, [SPEAKING JAPANESE]. And not just Japanese, but well, lots of other languages as well. So that's the basic word order for Tibetan and for Korean and for Navajo and Basque and Chaha and a zillion other languages out there. It's, cross linguistically, a very common word order. In fact, if you just count languages, it might be the most common word order. It's slightly more common than the English style word order. OK, so here's a single switch that you can flick, right? Do your heads precede your complements or do they follow your complements? So in English, the heads precede the complements. In Japanese and lots of other languages, they follow the complements. There's one basic difference between languages.
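That single switch can be made concrete with a toy linearizer: the same tree, read out head-initial, gives the English order, and read out head-final gives the Japanese order. The nested-dictionary encoding of the tree below is purely illustrative, not standard notation.

```python
# A node is either a word (a string) or a phrase: a dict with a "head",
# a "comp" (the head's sister), and optionally a "spec" (e.g. the subject).
def linearize(node, head_initial):
    if isinstance(node, str):
        return [node]
    words = []
    if "spec" in node:  # specifiers come first under either setting
        words += linearize(node["spec"], head_initial)
    head = linearize(node["head"], head_initial)
    comp = linearize(node["comp"], head_initial)
    words += head + comp if head_initial else comp + head
    return words

# "that Mary could eat sushi with chopsticks"
vbar = {"head": "eat", "comp": "sushi"}
pp = {"head": "with", "comp": "chopsticks"}
vp = {"head": vbar, "comp": pp}
tp = {"spec": "Mary", "head": "could", "comp": vp}
cp = {"head": "that", "comp": tp}

print(" ".join(linearize(cp, True)))   # that Mary could eat sushi with chopsticks
print(" ".join(linearize(cp, False)))  # Mary chopsticks with sushi eat could that
```

Flipping one boolean reproduces exactly the Japanese-style order given above.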
Now it used to be that this was the point in the class where I would tell you that, and I would wait just long enough for you to be impressed by that, and then I would quickly change the subject and hope that none of you spoke German. Because there are languages out there, sadly, in which some heads are initial and other heads are final. That is, some heads precede their complements and others follow their complements. German is such a language. So for example, languages with mixed headedness-- in German, here's German for "that Mary could eat sushi with chopsticks." And you can see the German complementizer precedes the clause. So the German complementizer for "that" is "dass," and it goes before the clause, just like in English. And German has prepositions, so "with chopsticks" is "mit Stäbchen." So "with" goes before "chopsticks." You know, German has prepositional phrases just like English. But German verbs and German tense, whatever we put in T, these kinds of auxiliaries, at least in this kind of clause-- we'll come back to this-- they come after their complements. So if you're saying in German, "I thought that Mary could eat sushi with chopsticks," the word order is literally going to be something like "I thought that Mary with chopsticks sushi eat could." Mark Twain has a great essay called "The Awful German Language" in which he says many hilariously partially accurate things about German, one of them being that-- he says something like, when a German dives into a sentence, that is the last you will see of him until he emerges from the other side of the Atlantic with his verb in his mouth. Because he's making fun of the fact that German often has the verb at the end of the sentence, and goes on and on about how it's very easy to forget what exactly is being done to these things. They're just being named, and you find out the verb weeks later. So, yeah.
So in English, heads precede their complements; in Japanese, heads follow their complements. And in other languages, some other languages like German, you get both. There are some heads that precede complements and others that follow their complements, OK? Yeah, Faith. AUDIENCE: Is there any rule for which heads precede and which follow? NORVIN RICHARDS: Ah, good. Good question. That was the next question I was going to ask myself, rhetorically. Thank you for asking me non-rhetorically. You might wonder, OK, fine, so do we just have to say, for every head in every language, this head precedes, this head follows? Is there any rule? Do we get to say anything more interesting than that? It turns out that there are kinds of systems that don't exist, which is kind of interesting. So let's concentrate on the heads T and V. So we're going to look at embedded clauses like this one, where you've got something in tense, an auxiliary of some kind, and you've got a verb, and there's also an object. And we're going to look at the ordering of those two heads, T with respect to the verb phrase, and the verb with respect to the object. If you do that, here's what you find. You get languages, like English, in which "has" and "read" both precede their complements. You get languages like German, in which "read" and "has" both follow their complements. That's why they're both blue. You get languages like West Flemish, which is a language closely related to Dutch-- spoken, I assume, in West Flem-- [STUDENT SNEEZING] --in which the auxiliary-- bless you-- the auxiliary precedes the verb phrase, and in which the verb follows the object. So in West Flemish, you say that "John wants a house to buy." I wasn't able to do-- I don't speak West Flemish, so I wasn't able to do "that John has read a book." I guess I should make all of these "John wants to buy a house."
So OK, so you get languages like English, in which both of those precede their complements, both the T and the V precede their complements. You get languages like German, in which they both follow. You get languages like West Flemish, in which the auxiliary precedes and the verb follows. But you never get the fourth imaginable kind of language-- not ever. People have looked quite hard. Where I, again, am using silly diacritics to emphasize the fact that this language doesn't exist. I'm making it up, yeah. So there are no languages in which you say "that John read the book has." That's not a possible human language, for some reason. So we get English, where those heads both precede their complements. We get German, where they both follow their complements. We get West Flemish, where the lower head follows the complement, is head final. So the verb follows the object, but the higher head, T, precedes its complement. The T is head initial. You never get the mirror image of West Flemish. That doesn't happen. It not only doesn't happen, but it sometimes fails to happen in kind of interesting ways. Here's a fact about Finnish-- Finnish word order is mostly sort of English-like. T and V both precede their complements. But just if you're asking wh-questions, for some reason, Finnish word order becomes quite random. So you can ask questions like "When would Jussi have written a novel?" in the English word order, where both of those heads are red because they're preceding their complements. You can ask it in the German word order, where both of those heads are following their complements. So you're literally saying, "When Jussi a novel written would have?" You can say it in the West Flemish order, where the auxiliary precedes the verb phrase and the verb comes after the object. But you cannot say it in the cross linguistically unattested order. So it isn't just that there are no languages like that. 
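The four-way table just walked through boils down to one check: reading head directions from the bottom of the tree up, once a head is initial, every higher head must be initial too. A sketch of that check (the list encoding is mine):

```python
def obeys_fofc(directions_bottom_up):
    """directions_bottom_up: head directions from the lowest head (V)
    to the highest (T), each "initial" or "final".
    Once a head is initial, no higher head may be final."""
    seen_initial = False
    for d in directions_bottom_up:
        if d == "initial":
            seen_initial = True
        elif seen_initial:  # a final head sitting above an initial one
            return False
    return True

print(obeys_fofc(["initial", "initial"]))  # True  (English: V and T initial)
print(obeys_fofc(["final", "final"]))      # True  (German: both final)
print(obeys_fofc(["final", "initial"]))    # True  (West Flemish: V final, T initial)
print(obeys_fofc(["initial", "final"]))    # False (the unattested order)
```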
Even in languages like Finnish, where there's a fair amount of freedom of order, that order is ruled out. Which is weird. It would be nice to have a theory of that. People have worked on theories of that. There's something called the Final-over-Final Constraint that's been offered. It's called the FOFC by people who are into it. What the FOFC says is, at least for certain parts of the tree, if you have two heads, A and B, and A has, inside its complement, another head, B, then if A is head final-- if A follows its complement-- B also has to follow its complement. It's as though, as you're building the tree-- we've been building trees sort of the way we built this one, where you start-- you're doing repeated merge, right? You start at the bottom of the tree, and you keep adding things, and the tree gets larger and larger. It's as though, at the beginning, you decide whether your heads are going to precede or follow their complements. And then you can switch to being head initial, but just once. After you've decided to be head initial, you have to be head initial from then on. You can't go back to being head final, something like that. You can switch from being-- as you're building the tree from the bottom up, you can switch from being head final to being head initial. You can switch from having heads follow their complements to having heads precede their complements. That's what West Flemish does, where it has the verb after the object and then it has the auxiliary before the verb phrase. But you can't switch in the other direction. That's what this seems to say. For certain parts of the tree-- we may get a chance to talk more about that. Yeah? AUDIENCE: What is head and complement? NORVIN RICHARDS: I'm sorry. By head and complement, I just mean-- so heads are things like this, nodes that just dominate a word and don't contain anything else. And the complement of the head is its sister. So here's a head and here's its complement.
So this VP is the complement of this head, or this head has, as its complement, the "sushi," or this head has its complement this noun phrase, "the chopsticks." And the observation is that if you have-- you can have final heads lower down. Yeah, you can have heads that follow their complements lower down and heads that precede their complements higher up, but you can't have the opposite. Yeah? AUDIENCE: So, in this example, we're saying that we have phonemes. We don't get to-- NORVIN RICHARDS: Yup. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: So yeah, what we're saying is, if we go back to these trees-- what we're saying is, in the tree on the far right, because the V precedes its complement, the higher T must also precede its complement. That's what the Final-over-Final Constraint is meant to say. If the lower head follows the complement, then it's OK for the higher head to precede or follow. But if the lower head precedes, then the higher head must also precede. It cannot follow. I hope I wrote it right. If A follows its complement, B must also-- wait a minute. Yeah, yeah, yeah. A follows its complement, B must also-- yes, yes, yes. So in the trees-- in the even-numbered trees where the higher head follows its complement, what we're seeing-- where T follows its complement. So that's the A. What we're seeing is that in trees like that, the lower head must also follow its complement. It can't precede. So that's what the Final-over-Final constraint says. It's an unfortunately-named constraint. It's misleading, but that's what it-- that's what it does. OK? So we get that kind of mixed headedness, the West Flemish kind, but not the kind on the right. Yeah, so-- oh, I mixed up my terminology here. I'm sorry. I'll fix this slide before I post it. So this is the FOFC violation. T-- I called it I, which I shouldn't have done. That's an older word for TP. T has V in its complement, and T follows VP, but V is not following NP. 
That's what makes that a Final-over-Final Constraint violation. OK? For certain parts of the tree. Now, for certain parts of the tree, there have to be restrictions on how this applies. So German, for example, we've just said that if the higher heads are final, the lower head also has to be final. So the fact that the T is final means that the V also has to be final. So far, so good. But the P is head initial, so the people who do the FOFC have to talk about domains of the tree in which the FOFC applies. It applies to the relationship between T and the verb, but not to the relationship between, say, the verb and the prepositional phrase-- prepositional phrases. It's like they talk as though-- the FOFC kind of starts over at certain points. So you calculate it within the prepositional phrase, or you calculate it within the noun phrase, but then you start again. There are other parts of the tree to which the FOFC applies, but not across those boundaries. Lots of work on trying to figure out why that would be. Lots of questions about the FOFC, like why is it true and which chunks of the tree is it true in? But I'm telling you about it because it looks like another case of a linguistic universal, so another place where, yes, there's more than one kind of language. There are head initial languages like English. There are head final languages like Japanese. There are languages like German, which have some initial heads, and other final heads. But you don't get every logically possible combination of initial and final. That's the result of this. And the FOFC is meant to describe that. And then, of course, we want to know why the FOFC is true, if it turns out to be true. Yeah? OK. Does it make sense? Do we have a question about this? FOFC? I'm sorry, I called you FOFC. Faith? I will stop calling you FOFC. AUDIENCE: Just a question about linguistic universals. So is it always that something doesn't exist?
Or are there any linguistic universals where we can say, like, oh, this is the case in every language? NORVIN RICHARDS: Oh, oh, oh. I mean, sometimes this is just a matter of stating the universals in a particular way. So every language has the property that it obeys the FOFC. There. There are some other things sort of like that. So for example, it is true of every language in which the verb comes at the beginning of the clause that wh-movement happens overtly, at least optionally. So we said there are languages that do overt wh-movement. There are languages that have wh-in-situ. There are languages that have both options. And there are verb initial languages that have both options, and there are verb initial languages that have overt movement. There are no verb initial languages with obligatory wh-in-situ. That never happens. And again, we want to understand why, but that's a fact. These parts where I keep saying we want to understand why, that's because there is work on this. I have a book that tries to derive that fact I just told you about. But I feel-- I would feel bad teaching you theories that I have posited in a book because I could be wrong. I know, it's hard to believe, but I could be. I'm only trying to tell you things that are definitely true in this class. Good question. Are there other questions? OK. All right, so that was it for German. Oh right, sorry. Back to German. OK, so here we are doing German. Now I said-- I think I said this accurately, that in this kind of clause, in German, the verb comes after the object, and tense comes after the verb phrase. And I think I saw people who know some German furrow their brows at me because that's not the word order in a German main clause. So I want to talk now about the word order in a German main clause. Here's the word order in a German main clause. You say in German things like "Mary could with chopsticks sushi eat." So the verb is still after its object, but T is no longer at the end. 
It's now, well, earlier. It's right after "Mary." This is a property of German main clauses that the thing that's in T, if there is something that's in T, has to be preceded by exactly one phrase. So you can say, "Mary could with chopsticks sushi eat." You can say, "With chopsticks could Mary sushi eat." You can say "Sushi could Mary with chopsticks eat." You get to take some phrase and put it first. And then what's in T-- what would be in T, "could," that has to be second. This is called verb second. These are examples where there's an auxiliary in second position. If there is no auxiliary, then the thing that goes in second position is the verb. So if you wanted to say "Mary eats sushi with chopsticks," so there's no auxiliary, you would take the verb "eat." You'd put it in a different form. It would be [GERMAN] and it would go in the place where "could" is going in these sentences. Something else would be in first position. So this is a phenomenon called V2. It's called V2 because you must take exactly one phrase and put it first, and then T or the verb goes second. So you can say all of these things in German, but you cannot say, for example, "With chopsticks sushi could Mary eat," or anything like that. There can't be two phrases in the first position. There must be exactly one phrase in the first position. So verb second-- German clauses, German main clauses-- actually, we'll refine that in just a second. They have to start with exactly one phrase followed by the "verb," where I've put "verb" in quotes. Because as you've seen, it doesn't have to be the verb. It's actually whatever tense is on. So if there's an auxiliary, then that's the thing that goes first. If there's no auxiliary, then tense is realized on the verb. The verb is pronounced with tense morphology, and that's what goes in second position. Why am I telling you all this? Well, partly so that you can be more fully educated people. Now you know some things about the grammar of German. 
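The V2 pattern just described can be stated almost algorithmically: front exactly one phrase, then put the tensed element (the auxiliary, or the finite verb if there is no auxiliary) in second position. Here is a toy Python sketch of that linearization, using English glosses of the German examples from the lecture; the function name and list encoding are purely illustrative, not any standard linguistic formalism.

```python
# Toy sketch of German V2 linearization (English glosses). Assumption
# for illustration: a clause is a set of phrases plus one tensed element
# (the auxiliary, or the finite verb if there is none) and any trailing
# non-finite material; any one phrase may be fronted, and the tensed
# element always lands second.

def v2_orders(phrases, tensed, rest):
    """Return every grammatical V2 order: one fronted phrase, then the
    tensed element, then the remaining phrases in base order, then the
    rest of the clause (e.g. a clause-final non-finite verb)."""
    orders = []
    for i, first in enumerate(phrases):
        remainder = phrases[:i] + phrases[i + 1:]
        orders.append([first, tensed] + remainder + rest)
    return orders

# "Mary could with chopsticks sushi eat" and its V2 variants:
for order in v2_orders(["Mary", "with chopsticks", "sushi"], "could", ["eat"]):
    print(" ".join(order))
```

Note that an order like "With chopsticks sushi could Mary eat" (two phrases before the tensed element) is simply never generated, which is the V2 restriction.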
But actually-- oh, sorry, yes. That's the wrong rhetorical move to make. Sorry, more German. No, actually, let me finish the rhetorical move I was just making. I'm telling you all this about German partly to show you some cool things about German. But what we're going to see shortly is that this is not just German. V2 is a cross linguistically common phenomenon. There are many languages out there that are V2. So learning this fact about-- these facts about German will do you some good when you are confronted with, for example, Kashmiri or Dinka or a number of other languages that are out there which are also V2. OK, but first, I just said German main clauses, and then I waffled a little bit about whether I meant main clauses. It isn't actually just main clauses. What it is is clauses that don't have complementizers, don't have overt complementizers. So main clauses are V2, but also embedded clauses as long as they don't start with "dass." So you can say there are what's called embedded V2, where you have an embedded clause like the one on the German example at the bottom of this slide, where you're doing the V2 phenomenon again. And I can't remember whether I proved that. Yeah, so I didn't. But you could put any phrase in the place of "Mary." All the shenanigans that we did with the main clause in the last slide, you can do with the embedded clause here. So he said, "Mary wanted with chopsticks sushi eat." You could have put "with chopsticks" or "sushi" in the place where "Mary" is, and then "Mary" would have to be after "wanted," right? So there's one phrase that goes before the auxiliary in that embedded clause, in case of embedded V2. What's the difference between this type of embedded clause and the kinds that we looked at in the first German sentence when we were just talking about mixed headedness? Well, it has to do with whether there's a complementizer. 
So if your embedded clause starts with "dass," which is the German word for "that," the German complementizer for declarative clauses, then you don't get V2. And main clauses which don't have a complementizer, just like in English, also have V2. So V2 happens whenever there isn't a complementizer being pronounced. A hypothesis that people have had about that goes like this. The tree for German is exactly the tree that I drew for you before. It's got a complementizer that precedes its complement. And then under that complementizer, everything is head final in the verbal part of the clause. So the auxiliary follows the verb phrase and the verb follows its object. And then, yes, German has prepositions instead of postpositions because the FOFC resets itself across prepositional phrase boundaries. So that's the tree that we drew before for German clauses. Complementizers go before the TP. And then what we're learning is that if nothing is pronounced in the complementizer-- if you don't have a "dass"-- then two things happen. First, some phrase-- you get to pick some phrase at random-- moves into the specifier of CP. So here I've chosen to move the "sushi" into the specifier of CP. And also, some head moves into C. The consequence of these two movements is the pattern that we just saw. First of all, the auxiliary-- although we've seen that when there is a "dass," the auxiliary is head final-- goes where the complementizer would be. It becomes head initial. So it goes almost at the beginning of the sentence, but not quite. There has to be exactly one phrase to its left. And the story is that's the phrase that's at the left edge of CP. Raising many questions, like "Why, Germans? Why are you doing this?" But as I say, they do, and there are many languages out there that do this. There's a lot of interesting work on how you choose which phrase to randomly move into first position.
It has consequences for the interpretation of the sentence which are pretty subtle and difficult to talk about. There are people who talk about them, try to figure them out. They have to do with which phrase you're trying to draw someone's attention to, or so on. It's very difficult stuff to talk about seriously. But since we're not doing semantics yet, since we're doing syntax, we can just say this is a thing that happens. Some phrase goes into first position, and then whatever's in T goes into second position. So you are moving a phrase into the specifier of CP and you are moving a head-- in particular, the head T is moving into C. That's how you get the German V2 order. OK? You're stunned by German. It's understandable. It's pretty stunning. Now this is the part that I thought I was going to tell you before, but now I will. It's not just German. So there are languages, some of them completely unrelated to German, others only very distantly related to German, that have exactly this setup. So Kashmiri is an Indo-Aryan language spoken in Kashmir in which there must be some phrase in first position. And so you say things like "Ram gave Sham a book." Some phrase has to go first, and when there's no auxiliary, the verb goes right after that first phrase. But when there is an auxiliary, like in the second example, the auxiliary goes in that second position. And the verb, in this case, goes at the end. And the thing that goes in first position doesn't have to be the subject. You can see that in the last example, where you've put an adverb in first position. So Kashmiri is just a very, very odd dialect of German, basically, right? So the word order for-- the rules for word order for Kashmiri resemble the rules for word order for German to a surprising degree. I was making a joke when I said that it's a dialect of German. It's not. They're both Indo-European languages, so they're related, but extremely distantly. 
They don't have anything else in common, but they have this in common. Vata, which is a Kru language of Ivory Coast, literally not related to German, also a V2 language. And a bunch of others-- Karitiana, which is a Tupian language spoken in Brazil. Ingush, which is a Nakh-Daghestanian language spoken in the Caucasus, Dinka, which is a Nilotic language spoken in South Sudan, V2 is a thing. It's not a hugely common thing, but it's a thing you find scattered all over the world. There are lots of V2 languages out there. That's one of the factory options for language word orders, being V2. Lots of languages out there. So V2, verb second, is all over the place. And in our system, what it consists of is head movement to head initial C and movement of some phrase, some randomly chosen phrase, to the specifier of CP. OK. Now why am I making such a big deal about this? We know that complementizers can be initial. So German complementizers are head initial. We know that by looking at them when they're overt. So German has this complementizer, "dass," which is their version of "that." It's actually related to the English word "that." We can see that it is head initial. And we can also see from German V2 that German V2 involves moving the auxiliary-- or if there is no auxiliary, the verb-- into C, which causes it to be in second position. So now it precedes the verb phrase, precedes the rest of the clause, and there's some phrase in its specifier. We know that there can be languages in which C is head final. I showed you that earlier. There are languages like Japanese in which C goes at the end. Japanese has a word for that too. It's [JAPANESE]. It goes after the clause that it combines with. So it wouldn't be hard to build the language that would be the mirror image of German, right? So this would be a language in which you'd have your clause, your TP, and it would have in it whatever it would have, and there would be a head final C.
And you would move some phrase, XP, into a specifier of CP that would be over here. This wouldn't be hard, right? This is just German, except that in German, the C is over here and the complementizer that you're moving into is over here. So this is German. There is some phrase in TP which is moving into the specifier of CP, which is over here. C is over here, and this is V2. This is the V2 word order. This is German and Kashmiri and Kru and Dinka and, and, and. There are lots of languages like that. So are there any languages like this? No. None. Lots of languages like this, no languages like that. That's not a thing. Raising lots of questions like, well, wait. Why is that? Because it's not-- right? It's not hard to describe that language on the left. But it doesn't-- sorry, that language on the right. The language on the left is quite common, cross linguistically well attested. Language on the right doesn't happen. So could there be a language like this? No such language has ever been found. And although there are verb second languages, plenty of them-- and so forget about trees for a second. Suppose you thought-- all of these trees I've been showing you, put them out of your mind. We should really just be thinking about verbs and subjects and objects and things like that. We should forget about trees. It's a fact that there are verb second languages. Again, plenty of them, scattered all over the world, not related to each other, not the result of contact. Yeah, it's apparently one of the factory options for having a human language, is to have it V2. So you might imagine that there could also be direct object second languages, or subject second language. Languages where the subject had to be the second thing and you could put anything you wanted to before the subject, right? Or languages where the direct object had to be the second thing, and you could put anything you wanted to before the direct object. 
I wouldn't know how to draw a tree for a language like that, but it's not any harder to describe it in words than it is to describe a verb second language. Do people see what I mean? This is a reason to take trees seriously as a way of describing languages, because they make it easy to describe German-- we just did it-- but hard to describe these other imaginable languages, direct object second languages or subject second languages. And the fact is that those languages don't exist. So the German kind of language, the verb second kind of language, reasonably common in the world. But these other languages that are not too hard to imagine, they don't happen. So we're at another one of these frustrating points in the class where I've shown you a mystery. There are verb second languages. There are no verb penultimate languages, mirror image of German. There are verb second languages, there are no direct object second languages. And when I showed you the FOFC, we were at a similar point. The FOFC, it seems to happen, yeah. So there are languages like English and Japanese and German and West Flemish. There's another logically possible kind of language where the verb phrase is head final and the TP-- sorry, where the verb phrase is head initial and the TP is head final. That language doesn't exist. And in languages like Finnish and Basque-- actually, I didn't show you Basque-- where word order is fairly free under certain circumstances, that particular word order is ruled out. And then I said, sure would be nice to know why. And if this were an intro to syntax class, this is where I would start trying to show you why, or showing you people's theories of why. But instead, this is 24.900, and we've already spent many days talking about syntax, and I really have to start teaching you other things soon. So I will just tell you, if you would like to know the answers to these questions, go take more linguistics classes. 
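The FOFC pattern recapped here lends itself to a simple bottom-up check: walking up the clausal spine, once some head precedes its complement, no higher head in the same domain may follow its complement. Here is a toy Python sketch of that check; the list-of-directions encoding is purely illustrative and is not a formalism anyone in the literature uses.

```python
# Toy FOFC checker. Assumption for illustration: a clausal "spine" is a
# list of head directions from the lowest head up (e.g. V, then T),
# where each entry is "initial" (head precedes its complement) or
# "final" (head follows it). FOFC, within a domain: once some head is
# initial, no higher head may be final.

def obeys_fofc(spine_bottom_up):
    seen_initial = False
    for direction in spine_bottom_up:
        if direction == "initial":
            seen_initial = True
        elif seen_initial:  # a head-final head above a head-initial one
            return False
    return True

print(obeys_fofc(["initial", "initial"]))  # English: V-O, Aux before VP  -> True
print(obeys_fofc(["final", "final"]))      # Japanese: O-V, VP before Aux -> True
print(obeys_fofc(["final", "initial"]))    # West Flemish mix             -> True
print(obeys_fofc(["initial", "final"]))    # the unattested mix           -> False
```

The four cases correspond to the four trees from the lecture: three attested headedness patterns pass, and the one unattested pattern (head-initial VP under head-final TP) is the only one rejected.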
Yeah, these are the kinds of things linguists talk about and try to figure out. So, moral of all this-- there are different kinds of languages in the world, but the languages that we find in the world differ in ways that are constrained. So we don't find every imaginable kind of language. There are gaps-- sort of interesting gaps. Sometimes, gaps that you can define the boundaries of, like with the FOFC, where you say, yeah, languages get to decide for particular heads whether they precede or follow their sisters, but there are certain patterns that you don't find, and we want to have a theory of why. Wh-movement-- there are languages that have wh-movement. There are languages that lack it. But if you have it, it goes to the left. It never goes to the right. There are languages that have V2. There are languages that don't have V2. But there aren't languages that have V next to last, a mirror image of V2. There is variation among languages, but there are kinds of things that you don't ever find, certain peculiar patterns that resolutely fail to show up. This is the kind of thing we're talking about when we talk about universal grammar. So that's a phrase you may hear people talk about. Sometimes you hear it used as a term of abuse. So there are people who think that it's a dumb idea, you know, that it's something that linguists care about for some reason, but nobody else should. But this is what we mean when we talk about it. We mean it looks as though part of being a human being is having the kind of mind that can build language in some ways, but not others. And that's what universal grammar is. It's kind of a bad name for it, because it sounds like we all start out with the grammar of, I don't know, Basque, you know? And then we learn our native languages by manipulating the universal grammar that we all start with. That's not the idea. The idea is, we're constrained in the kinds of linguistic hypotheses that we can have.
And so we come preloaded with some instructions about how to build language, and there are certain kinds of options that we know better than to even consider when we're learning our first language. That's what universal grammar is meant to claim. Questions about any of that? Does any of this make sense? This is meant to inoculate you against-- so in popular science, you will sometimes read people claiming that universal grammar can't possibly be true or it's dead or it's been disproven or whatever. And this is all hooey. You should throw away any papers that seem to claim that. OK? All right. So, yeah, in this particular case, no V-penultimate, no wh-movement to the right. It's as though maybe heads can either precede or follow their sisters with certain restrictions, as we've seen in the FOFC, but specifiers maybe always precede their sisters. And again, we'd want to know why. OK. So we've now-- so here I am pivoting. You can tell because the font size is changing. We've now seen several different kinds of movement operations. So we've talked about the wh-movement, which is what makes questions like "What did you buy?" where "what"-- wh moves to the specifier of CP, which is on the left periphery of the clause. We've now talked about head movement. That was part of talking about German V2. So in V2, which handily I still have a tree for, some phrase moves to the specifier of CP, and T moves into C. And that's why you get word orders like a prepositional phrase, "With chopsticks will she sushi eat," which is how you would say that in German. So you'd have some phrase in first position, and then whatever it is that bears tense-- so the highest auxiliary, if there's an auxiliary, or the verb, if there's no auxiliary-- ends up in C. And I just casually said, "Hey, look, this head is moving to this other head." That's called head movement. I do a lot of head movement as I'm teaching. Some of you may have noticed. And then we've also talked about NP-movement.
That was the case where you took a noun phrase and moved it into the specifier of TP. So three kinds of movement then. We've got "What did you buy?" where "what" is wh-moving. We've got German. So in German, "With chopsticks will she sushi eat," where we have reasons to think that auxiliaries start at the end of the clause and head move. So here's head movement. And then the last one is NP-movement, and that was how we were going to deal with sentences like "The sushi was devoured," where we think that "devour" selects for an object, needs to have an object. But in this case, its object isn't there because it's moved over here. This is NP-movement. It's moved over here. Because of the EPP, TP needs to have a specifier, and the sushi has raised to become the specifier of TP. So wh-movement and head movement and NP-movement, and I can see that I've used my own private abbreviation for movements, which is mvmt. Because if you're a syntactician, you have to write the word "movement" a lot. So that's my abbreviation for movement. Yeah, so three kinds of movement that we've talked about so far. Are there any questions about any of that? Because we're about to talk about those movements and their properties in a little bit of detail. OK. So "Mary will type novels." We've talked about head movement. We haven't talked about it in this case, but here's a place where we could talk about it. There's a way to ask questions, yes/no questions, where you take whatever's in T and you move it into C. So you start with "Mary will type novels." You move T into C, and so you have the word order "Will Mary type novels?" So the auxiliary now precedes the subject. So here's an instance of head movement-- in this case, in order to form yes/no questions in English. Wh-questions, "What will Mary type?"
Not only are you moving T into C-- so "will" is moving from the position after "Mary" to the position before "Mary"-- but you're doing the wh-movement that we've talked about before. The object is moving into the specifier of CP. And then in NP-movement, "The cookies were devoured." We've taken the object of "devour" and moved it into the spec of TP, because TP needs a specifier. Yeah? We've seen that not all languages have all of these kinds of movement. So there are languages with wh-in-situ. This is the Chaha example. Chaha is a language spoken in Ethiopia. It's a Semitic language. It was actually-- I teach the graduate field methods class and Chaha was the first language that we worked on. That's the one-- I think I've told you this story. That's the one in which I tried to start the class by saying "The man cooked the meat," and the guy who we were working with said, "No, I cannot say that." And I said, "Why?" He said, "Men do not cook." It was like, oh, OK. So, Chaha-- he also taught us an expression for "very quickly," which meant "in the time it takes spit to evaporate," which I thought was a cool expression to have. I wish I could remember how to say that in Chaha. I should go look it up. So Chaha is a wh-in-situ language. It leaves the wh-phrase where objects normally go. It's also a head final language. It has the same word order as Japanese. It's a wh-in-situ language. So there are languages that leave wh in-situ. They don't move it. There are languages that don't go in for NP movement, at least as much as English does. So in English, I would have to say, "The cookies have been eaten." But in Italian, you can say something that literally means "Have been eaten the cookies." So the cookies can just stay in object position. The Italians don't have the EPP. They're very relaxed about whether TP has a specifier or not, so the cookies can just stay where they are. They don't have to go anywhere. So Chaha doesn't have to move its wh-phrases.
Italian doesn't have to move its NPs. There are also cross-linguistic differences with respect to where heads go. So here's an English sentence, "Mary often speaks French," where I've put an adverb. I've adjoined-- it's an adjunct. It's not selected by anything, that adverb phrase, "often." I've attached it to the verb phrase there. And then you've got a verb with its object, "speaks French." If any of you speak French-- does anybody speak French? So if any of you know any French-- a couple of you do, it looks like. Maybe you know. This isn't the right word order for French. The right word order for French involves taking the word for "speaks" and putting it before "often." You can't, in French, say "Mary often speaks French." You must literally say, "Mary speaks often French," which in English, we cannot say. So this is another basic difference between languages. French requires the verb to raise to T. Sorry, I called it I again. I'll try to fix that. French requires the verb to raise to T when there's no auxiliary, so you say "Mary speaks often French." English doesn't require the verb to raise to T. So just like English requires the wh-phrase to move to the specifier of CP and Chaha doesn't-- Mandarin doesn't, Japanese doesn't-- French requires the verb to raise to T, to move to T. English doesn't. Yes, so there are differences between languages along these lines, where one language will have one set of movements and another language will have another set of movements. That's a thing we find. It would be nice to know why and try to derive this from something else, but that's where the field is right now. I haven't talked very much about why these movements happen, and I won't, mainly for reasons of time. It's a very interesting topic. There's a lot of work on it in trying to figure out what it is that drives these movements, causes them to happen. 
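The English/French contrast just described-- whether the verb raises to T past a VP-adjoined adverb-- can be pictured as a single binary parameter. Here is a toy Python sketch of how the two surface orders fall out of that one choice; the function name and the flat list representation are invented for illustration, not a real grammar.

```python
# Toy sketch of the verb-raising difference described above. Assumption
# for illustration: when there is no auxiliary, a [+V-to-T] language
# (like French) pronounces the verb in T, above a VP-adjoined adverb; a
# [-V-to-T] language (like English) leaves it inside the VP, below the
# adverb.

def linearize(subject, verb, obj, adverb, v_to_t):
    if v_to_t:
        # Verb raises to T, so it precedes the VP-adjoined adverb.
        return [subject, verb, adverb, obj]
    # Verb stays inside the VP, so the adverb precedes it.
    return [subject, adverb, verb, obj]

print(" ".join(linearize("Mary", "speaks", "French", "often", v_to_t=False)))
# English order: Mary often speaks French
print(" ".join(linearize("Marie", "parle", "français", "souvent", v_to_t=True)))
# French order: Marie parle souvent français
```

The point of the sketch is just that nothing else about the two sentences differs: one parameter setting produces "often speaks," the other "parle souvent."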
So the only one that I've really given you a motivation for is NP-movement, which I've said is driven by the EPP, the need for TP to have a specifier, which is a need that exists in some languages, but not others. So English has that, and Italian, for example, doesn't. But I do want to talk some-- so I haven't talked about why these movements happen. But I want to talk some about some of the conditions on movement. Because what we're going to see is that movement can't always happen, and we're going to want to try to understand why. So you can say things like "I ordered a hamburger and French fries," but it would be weird for me to ask you a question like, "What did you order a hamburger and?" Do people agree? That's a strange question to ask. So it's sometimes called the coordinate structure constraint. And it's one of many examples where people have found that wh-movement is somehow blocked. So all of the wh-movement examples I've shown you so far, it's always been possible to move the wh-phrase. There are times when it's not possible, and this is one example. So "What did you order a hamburger and?" No good. It's roughly: if you have two things that are connected by "and," you can't wh-move one of them. You also can't say "What did you order and a hamburger," where you moved the first one instead of the second one. Also no good. Yeah? Yeah? AUDIENCE: The second one sounded fine, actually. NORVIN RICHARDS: I'm sorry, say it again? The second one sounded fine? AUDIENCE: I don't know. Like, "What did you order a hamburger and?" Like, I have to somehow think, oh, that is not grammatically correct. NORVIN RICHARDS: Yeah? AUDIENCE: It's just on-- like, if someone just asked me that, I would just [INAUDIBLE]. It's somehow-- no, the first one. NORVIN RICHARDS: The first one. Yeah, this one. "What did you order a hamburger and?" You mean if somebody asked you that, you'd just be like, "French fries." AUDIENCE: Huh? NORVIN RICHARDS: You'd just say "French fries."
AUDIENCE: Yeah. Like, now that I'm listening to it, it's grammatically incorrect. Now that I'm playing it back. NORVIN RICHARDS: Yeah. AUDIENCE: But it just somehow doesn't sound weird. NORVIN RICHARDS: Hm. Oh, that's interesting. I mean, several other people are raising their hand, so maybe I'll just open this to general discussion. Joseph? AUDIENCE: I feel like it would be natural to put a colon after "order." I don't know what that would do, like, syntactically, but "What did you order-- a hamburger and?" NORVIN RICHARDS: Yeah, so forget about "and" for a second. I think it's possible to say, OK, "you ordered--" and have that be a question, right? So it's your job to fill in the blank for me. And I think what you just did is a version of that where I say, "What did you order?" That's the wh-question. And now I'm going to ask you to help me finish the sentence for me. I think that's what's going on. Kateryna? AUDIENCE: When you say you ordered dot, dot, dot, is that equivalent to not doing wh-movement and just not pronouncing the "what?" NORVIN RICHARDS: It's an interesting question. I don't know whether we want the syntax to handle that or not, or whether we want that to be like the kind of case I talked about before. So I-- because I think-- there's a restriction on that technique. For example, it has to be the end of the sentence. You have to stop talking before you get to the end. I can't say "you ordered and a hamburger," I don't think, right? So I think what I'm doing is simulating forgetting how my sentence was going to end, right? So it's really as though I'm about to say "You ordered a hamburger" and I'm pretending that I can't remember how to end the sentence, and I'm asking you to help me. I have a feeling that's the kind of way we want to think about that kind of example, but yeah. Raequel? AUDIENCE: When I hear, "What did you order?" at least as well, but if someone said, "What did you order a hamburger with?" I would be, like, that's fine. 
NORVIN RICHARDS: Yeah, yeah. Yeah, so good point. And maybe that's got something to do with your reaction, that you are being maximally charitable to the person you're talking to. If somebody said, "What are you ordering a hamburger-- what did you order a hamburger and?" you do your best to pretend that they said something grammatical. It's clear what they meant. And so you're going to pretend that they said "with" instead of "and." Maybe there's something like that going on. This is reminding me of a-- there's work on-- what's it called? I think it's called the Moses illusion, where people have observed that if you ask people how many animals of each kind did Moses take on the Ark, where if you're familiar with the Bible, you know that Moses did not take any animals on the Ark. The Ark was Noah, a different guy. Noah was the one who took animals on the Ark. And so the real response to that is supposed to be, hold on, Moses didn't take any animals on the Ark. But that if you do a survey in which you walk up to people at random and say how many animals of each type did Moses take on the Ark, what they typically do, very often, is pretend that you said Noah. So they pretend that you asked the question in a way that makes sense, and they tell you-- they tell you two. Which is actually not-- as I said, I've been reading the Bible lately because of this Wampanoag project. Noah-- I hadn't remembered this, but Noah took two of some kinds of animals, but more of other kinds. It had to do with whether they were edible or not, I think. So whether the laws for what you're allowed to eat allowed you to eat them. If it was OK to eat them, he took more, I guess so that there would be some to spare. Yes. So I suspect that that's what's going on with you. But it's an interesting question.
Actually, your point raises an interesting point, which is that these-- we're going to talk about several cases of this kind where a wh-phrase seems to be constrained from moving from a particular place, and the examples are always going to be examples, hopefully, where it's not that you can't figure out what the other person means. It's not that these are unthinkable. They're just unsayable. It's not the right way to say it. Actually, the next example, I think, is particularly clear in that regard. So here are two sentences. We've talked a lot about embedded clauses. And I guess when we were talking about German, I touched on the fact that embedded clauses sometimes start with complementizers. Our English sentences have pretty reliably started with complementizers, but they don't have to, right? So you can say, "I think that Mary should win the election," where there's a "that." But you can also say, "I think Mary should win the election." There's no complementizer there. And then linguists get very curious about whether, OK-- I got asked this question in class at one point. Does that mean that I should draw different trees for these? Or does it mean that the embedded clause should be a CP in both cases, and that C can either be pronounced "that" or [SMACKS LIPS]. Right? That C has a pronunciation where it's just not pronounced. And that's a debate that people have. We don't have to have it. But let me call your attention to a fact which doesn't hold actually for all English speakers, but for some it does. In my English, it's OK to ask questions like, "Who do you think should win the election?" But questions like, "Who do you think that should win the election?" are no good for me. Am I the only person like that here? Is there anybody for whom these are both fine? Are you communicating with me in ASL or-- AUDIENCE: I do not know what the asterisk-- NORVIN RICHARDS: The asterisk means it's bad. AUDIENCE: The first one feels fine.
The second one feels not fine. NORVIN RICHARDS: OK, good. Yeah, so you're like me. I mean, interesting, yeah. It would be OK if that were not true, but yeah. There are people for whom both of these are fine. So occasionally, when I'm teaching this class, I'll do this slide, and I'll look out in the audience and there'll be one person who looks really alarmed, like they're hallucinating. And I'm like, you're from the Midwest, aren't you? And they're like, yes. So there's a part of America where they do this. It's something about living in big, flat areas with lots of agriculture, I guess, that it's OK to say these things. There's a lot of interesting questions about why, or what it is that distinguishes some dialects of English from others. Because it's not as if you cover this in high school, or your mom and dad punish you if you say, "Who do you think that should win the election," right? That's not what happens. But for most native speakers of English, there's this contrast. It's called the that-trace effect. Never mind why. Well, because the idea is it's impossible to have a "that" immediately followed by the place the wh-phrase came from. If you imagine the wh-phrase when it moves, it leaves traces of itself behind, I guess, is the metaphor. Yeah. This is specifically about-- I guess this came out in the way I just described it. It's specifically about extraction of subjects. So it's bad to say, "Who do you think that should win the election?" But it's OK to say things like, "Who do you think that we should vote for?" So the generalization is not you can't wh-move past a "that." It's OK to wh-move past a "that." It's just not OK to wh-move past "that" which is right next to you. And lots of interesting work on what the heck is going on, yeah. Again, let me just urge you, if you would like to know more about this, go take more linguistics classes. This is something linguists work on, trying to figure out what's going on.
Another example of movement not being possible. We actually had an example like this earlier. All of our CPs so far, or most of our CPs so far have been complements of verbs. So we've said things like, "I think that she should win the election," where "that she should win the election" is the complement of "think." Or "people believe that the moon is made of green cheese," where "believe" takes as its sister, its complement, a CP, "that the moon is made of green cheese." It's also possible for CPs to be subjects, as in "That the moon is made of green cheese is widely believed." That's false, but grammatical. People agree? OK. It's not the most natural thing I've said today, but it's grammatical. Now, so there's a tree for "That the moon is made of green cheese is widely believed." We've got a CP, which is sitting in the specifier of a TP. It's the subject of "is widely believed." Now let's do some wh-movement. It's OK to ask questions like, "What do people believe that the moon is made of?" so we took that first sentence and we wh-moved. We turned "green cheese" into "what" and wh-moved it. "What do people believe that the moon is made of?" Fine. But you can see where this is going. "What is that the moon is made of widely believed?" Ow, yeah? So, you know. "That the moon is made of green cheese is widely believed," OK, that's possibly a little awkward. Not something I'm likely to say in casual conversation. But the last example is just word salad, to use the technical term. Yeah. It's very unclear what the heck it's supposed to mean. Do people agree? Again, I'm reporting my memory of judgments that I had when I was young and carefree, but I'm pretty sure this is true. You just can't say these things. Yeah? OK. All right, so these kinds of examples are all just meant to show you there are some kinds of things that you can't wh-move out of. So if you coordinate two things, you can't move one of them. 
The that-trace effect is a condition on extraction of subjects specifically. You can't move those if there's a "that" right to their left. And if you have a clause which is a subject, like "that the moon is made of green cheese is widely believed," you can't wh-extract out of that. There's a metaphor that's often used for these kinds of restrictions. The things that you can't move out of are called islands. So this last one I've identified for you is a subject island. So I guess the idea is supposed to be that if you're a wh-phrase and you're on an island, wh-phrases cannot swim. There is no boat, there is no bridge. If you're on an island, you're doomed. You cannot get out. You're stuck. So there's a thriving literature in syntax in which people identify islands and try to figure out why the things that are islands are islands. We hope that this will teach us some things about the mechanics of extraction. Yeah. I want to show you a particular class of island effects, or cases where movement is impossible, and show you a little bit of the work that people have done to try to make some progress on why these particular things block extraction. I think we have time to at least get started on this, and then we might have to finish it next time. So there are a number of kinds of restrictions on movement that could be unified, and I'll show these to you in a second. They could be unified into a single condition, which we could call shortest move. It says if you have a choice between two different movement operations, you should pick the one that's shorter. And people are kind of excited by that, because it sounds cognitively plausible. If you're trying to decide between movement operations, you should pick a short one over a long one. I'm going to show you a definition of "short" in a second. Please don't go tattoo this on your arm or anything.
There's no-- it's not all that important that we define it carefully for the purposes of what we're going to do in this class. In theoretical work on this topic, it is important to define it carefully, and there's work on trying to figure out exactly how to define it. But here's one definition. You could say, take the path-- take a movement operation, and we'll consider-- let me show you a movement operation. So here's a movement operation where we took "with chopsticks" in this V2 example and we moved it out of the TP and into the C. And we'll talk about the path of a movement, and that'll be the set of nodes that dominate the original position, the position you moved out of, and don't dominate the landing site. So in this particular case of movement, this CP dominates the landing site. But then there are a bunch of other nodes like this one and this one, and a bunch of other ones inside this TP, that dominate this XP here. And those are what we'll refer to as the path. And to claim that a move has to be as short as possible is to claim that you want the path to be, well, as small as possible. And there's some interesting work on trying to figure out what happens. So in the cases I'm going to show you, the two paths are in a subset relation. One of them just contains a subset of the nodes that are in the other one. And there's an interesting question about what happens if they just overlap? Do you actually count nodes? People sort of hope not, and it looks like the answer might be no. We won't look at any of the relevant cases today. So movement A is going to be shorter than movement B if the path of A contains a smaller set of nodes than the path of B. I'll show you some trees that will hopefully make that clear. So let me show you a case where shortest move is useful, something called the head movement constraint. It was invented by Lisa Travis, who's a syntactician who now teaches at McGill University up in Montreal. The head movement constraint says this.
Here's a case of head movement-- I showed it to you before-- where we've taken what's in T and we've moved it into C. So "will Mary type novels," we've taken the auxiliary and we've moved it into C. Suppose instead we were to take the verb and move it into C. Well, we would end up with "type Mary will novels," which is not the way you ask yes-no questions in English, no. And in fact, this generally seems to be true, that if you are going to do head movement-- so you're going to move some head into C-- the head that you move is the higher head. It's "will." It's not the lower head. It's not V. The path-- so if we're talking about this in terms of paths, the path from I to C-- sorry, from T to C. I have to go through and get rid of these I's. This I, again, is an old name for T. The path from T, from "will," up into C consists of T bar and TP. Those are the nodes that dominate T and don't dominate C. But the path from V to C consists of VP and T bar and TP. Do people see that? Should I draw that tree again down here, and I can circle nodes and we can look at them? So when we're talking about paths-- again, this is one way of measuring lengths of movement operations-- the path from V to C is the set of nodes that dominate V, that don't dominate C. So those nodes are the VP, which immediately dominates V, and the T bar, which dominates the VP, and the TP, which dominates the T bar, but not the CP, which also dominates the C. So we're looking just at the nodes which dominate the place the movement started and don't dominate the place that it lands. So that's the path from V to C. And that path is a superset of the path for movement from T to C, because VP is in that path, but it's not in the path from T to C. Does everybody see that? Is that clear? OK. We're mostly going to know short when we see it, right? So another way to say all of this would have been to say, hey, look, the arrow that connects T to C is shorter than the arrow that connects V to C. There, right?
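The path comparison just described can be modeled directly. Below is a minimal Python sketch (the Node class and the node labels are my own illustration, not from the lecture) that computes the path of a movement as the set of nodes dominating the source position but not the landing site, for the "Will Mary type novels?" tree:

```python
# Sketch of "path of a movement": the set of nodes that dominate the
# moved element's original position but do not dominate the landing site.
# Tree representation and labels are illustrative.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)
        self.parent = None
        for c in self.children:
            c.parent = self

def ancestors(node):
    """All nodes that dominate `node` (its chain of parents)."""
    out = []
    while node.parent is not None:
        node = node.parent
        out.append(node)
    return out

def path_of_movement(source, landing):
    """Nodes dominating `source` but not dominating `landing`."""
    return set(ancestors(source)) - set(ancestors(landing))

# [CP C [TP Mary [T' will [VP type novels]]]]
v = Node("V:type")
vp = Node("VP", [v, Node("NP:novels")])
t = Node("T:will")
tbar = Node("T'", [t, vp])
tp = Node("TP", [Node("NP:Mary"), tbar])
c = Node("C")
cp = Node("CP", [c, tp])

print(sorted(n.label for n in path_of_movement(t, c)))  # T', TP
print(sorted(n.label for n in path_of_movement(v, c)))  # T', TP, VP
# Shortest move: T-to-C beats V-to-C because its path is a proper subset.
print(path_of_movement(t, c) < path_of_movement(v, c))
```

The proper-subset check mirrors the lecture's point that in these cases one path just contains a subset of the other's nodes, so no actual node-counting is needed.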
Or, hey, look, T C commands V. So every node that dominates T dominates V. That would have done it too. There are various ways to do it, but this talk in terms of paths, this is one way people do it sometimes. So the head movement constraint is one of the subcases. It was invented decades ago, and it's one of the cases that was folded into this general notion of shortest move. When you have a choice between things you could move-- in this case, you're choosing between T and V-- you have to choose T. You can't choose V. That's what the head movement constraint says. Another example. We talked a little bit about multiple-wh questions, questions like "Who bought what?" And speakers vary. Actually, I'd be interested to hear how you guys feel-- how you all feel. I prefer "Who bought what?" to "What did who buy?" Or "What did you give to whom?" sounds better to me than "Who did you give what to?" Do people agree with me about that? Does that sound true? I saw some of you nodding encouragingly as I was doing this. Maybe you just wanted to be friendly. I appreciate that. Yeah? AUDIENCE: I prefer the first options, but I don't think that the second options are ungrammatical. NORVIN RICHARDS: Yeah. So this is different from the head movement constraint in at least that regard. The head movement constraint-- if you violate the head movement constraint, you've done something very bad. It's no longer clear what you meant, yeah. Whereas if I say, "What did who buy?" people will look at me funny. But people look at me funny anyway. That's kind of what life is like, yeah? Still, I also agree with Kateryna that the first examples are better than the second examples. This is sometimes called superiority, the phenomenon that if you have two wh-phrases and you're trying to pick which one to move, there's a preference. As Kateryna, I think, accurately says, it's not a hugely strong preference. But it's a preference for moving the higher one, the first one.
And this is kind of like the head movement constraint, and people have subsumed it under shortest move. You've got a choice between two things to move. You ought to move the higher one, the one for which the movement will be shorter. Yeah? AUDIENCE: Could this be related in some way to what the most important part of the sentence is? Like, if you hear someone say something like "We're buying something," and they're buying this really crazy thing, you could say, "What did you buy?" NORVIN RICHARDS: Ah, yes. Yeah. Let's see now. Huh. I'm sorry, I'm going to do an old professor's trick and talk about something different from what you talked about, because what you said reminded me of something else. And then I'm going to talk about that thing hopefully long enough so that you'll forget your original point. That's the goal. So your job is to stop me from succeeding at this. But what I thought you were going to say went like this. There is a kind of wh-question-- we've talked about it. It's called an echo question, where I just repeat what you said, except that I substitute in a wh-phrase which is in-situ. So if you say "Mary just bought a motorcycle," I can say, "Mary bought what?" And similarly, if you say, "Mary just bought a motorcycle," I can say, "Who bought a motorcycle?" And because that's at the beginning of the sentence, it's hard to know whether that's an echo question or not. But at least it's possible that it is. And I can do that either because it's hard to believe-- there are various people I could believe bought a motorcycle, but not Mary, we all know Mary would never buy a motorcycle-- or I could do it because I didn't hear the first word of your sentence, like the phone connection is bad or we're talking on the T or whatever, and there was a loud noise right when you said "Mary," and so I didn't hear that part. And I can say, "Who bought a motorcycle?" Not because I'm astonished, but because I missed the first part of your sentence.
And "What did who buy?" that string, I think it's possible to have an echo question with wh-movement in it. So if you've just asked me, "What did Mary buy?" and I didn't hear you say "Mary" because the line was bad. So you said, "What did [IMITATING STATIC] buy?" I can say, "What did who buy?" Which resembles what you were talking about, but it's not quite the same. You wanted-- so that's a place where my two wh-phrases have different goals. When I say, "What did who buy?" I'm really asking you the question "who," and my "what" is quoting you. What you said was "What did [IMITATING STATIC] buy?" right? That's not what you said, but that's what I heard. And so I'm doing an echo question about your wh-question. Yeah. My real question is about "who." Your idea was that if I ask, "What did who buy?" "What did who buy?" No, your question was, "What did who buy?" What was your question? AUDIENCE: Like, if you're more interested in the what than who. NORVIN RICHARDS: If I'm more interested in the what than the who. So this is related-- yeah. This is related to other things. Let me see, what's a good example of this? It's a thing about wh-answers to wh-questions. If I ask you a question like, "Who bought what?" I want a complete list of people. I want my answer organized as a complete list of people, together with the things that they bought. Whereas if I make myself ask you, "What did who buy?" what I want is a complete list of things organized by things together with the people who bought them. And there's some debate about whether-- so under normal circumstances, wh-questions are supposed to be exhaustive. Answers are supposed to be exhaustive, that is. If I ask you, "What did you eat yesterday?" you're supposed to give me a more or less complete list. If you leave out all the desserts and I'm your doctor, then you're doing something wrong. You're supposed to tell me everything. There are circumstances under which it clearly isn't supposed to be exhaustive.
So if I ask you a question like, "Where can I buy a foreign newspaper around here?" you don't have to give me a complete list of the places I can buy a foreign newspaper around here. You just have to tell me a place where I can go. "Harvard Square," you can say. Maybe that's where I can go. And with multiple wh-questions, there's some debate about whether the exhaustivity requirement, to the extent that it ever holds with wh-questions, maybe only holds of the one that you move and not of the one that you've left in situ. That's a position people have held anyway, that there's a difference between the two. And so your question about importance might be getting at that, right? It's like, I want this one completely exhausted, and then-- yes, I want every member of it connected to something in the other group, but you don't have to be exhaustive with the pairings. That might be why-- I think this was Kateryna's comment earlier that the violations of this are not as strong as the violations of the head movement constraint. And it might have to do with things like this, that there are extraordinary circumstances under which these effects might be overwhelmed by your desire to have-- to prioritize one or the other. Whereas with head movement, there's nothing like that going on. It's just got to move ahead. One of the cases that people talk about when they're talking about exhaustivity is cases where you know that there's only a single pair. So if I'm eating lunch with a colleague, if I've invited a colleague to have lunch with me and we go and have lunch, and then I try to pay the bill and my colleague is arguing with me about whether-- about who gets to pay, I might say, "Well, who invited who?" Right? I'm not going to say, "Who did who invite?" And maybe it's-- I think. That's a case where the judgment is pretty clear. And I think that might be a case where there's no question of exhaustivity. There are only two people, right? 
It's either I invited you or you invited me. Those are the only possible answers. And so maybe that's why the judgment, I think, in that kind of case gets stronger. Yeah. Sorry, did I ever manage to answer your question at any point of that? Yes, good. Yeah. This is all-- you're opening a can of worms, which I will now close. There's lots of interesting work on exactly this problem, yeah. And we are just about out of time. Yes, we're out of time, because this one will take too long. So we'll do more shortest move next time. We'll talk more about this. Cool. Thanks.
Lecture 13: Syntax, Part 3

[SQUEAKING] [RUSTLING] [CLICKING] NORVIN RICHARDS: All right. Let's progress, then. So last time we got started drawing trees for sentences. I spent a lot of time with verb phrases drawing trees for more and more elaborate verb phrases and ignoring repeated requests that I go ahead and diagram the rest of the sentence already, and I finally did. So we had sentences like-- I think this was not one of them, but this is like the ones we were talking about-- "She will tickle the child." I said we're going to do the construction of sentences the same way we did the construction of words back when we were doing morphology. We're going to use this operation "merge" which takes pairs of things and puts them together to make new things. So we'll merge "the" and "child" and get a new thing, which we'll name a noun phrase after the fact that it crucially contains a noun. And we'll merge that noun phrase with the verb "tickle" and that's how we get the verb phrase. And then we have these two other words in here, "she" and "will." And "she" is a noun, a special kind of noun. It's a pronoun. And "will," I made up a name for "will." I called it a "T." T for tense. Actually, that gives me too much credit. Other people gave it that name, I'm just transmitting to you that name. Yeah. And then we said, if we want to make a tree for the whole sentence, what we'll do is first we'll merge T with the verb phrase. So we'll merge those two things and we'll give the resulting object the label T. And then we'll merge that new object that we've created with the noun. And since, maybe you remember, "phrase" is just a name for the largest thing with a given label, that daughter of TP that we've got there, "she"-- yeah, it's a noun, but it's now also a noun phrase because it's the highest thing with its label.
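The merge-and-label procedure described here can be written out as a tiny sketch. This Python fragment is illustrative only (the dictionary representation of nodes is my own choice, not the lecture's notation); it builds "She will tickle the child" in the order just described: T merges with the verb phrase first, and then the result merges with the subject.

```python
# Illustrative sketch of "merge": combine two syntactic objects into a
# new one, labeled after the projecting element. The dict encoding and
# leaf labels are hypothetical.

def merge(label, left, right):
    """Merge two objects into a new node with the given label."""
    return {"label": label, "children": [left, right]}

def leaf(label):
    """A terminal node (a single word)."""
    return {"label": label, "children": []}

np_obj = merge("NP", leaf("D:the"), leaf("N:child"))  # "the child"
vp = merge("VP", leaf("V:tickle"), np_obj)            # "tickle the child"
tbar = merge("T'", leaf("T:will"), vp)                # T merges with VP first
tp = merge("TP", leaf("NP:she"), tbar)                # then the subject merges in
print(tp["label"])  # TP
```

Note that the subject "she" ends up a sister of T bar, which is exactly the constituent structure the alternative order of merges (merging "she" with "will" first) would not produce.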
That noun phrase and its sister, that T bar, become the daughters of this new node that we've created, TP. I said that's the way we're going to build sentences. Immediately several of you objected. Kateryna here, for example, ruthlessly demanded that I explain why we were doing it in that order and not, for example, like this. We've got "tickle the child" just like before, but this time we'll merge "she" with "will" first. It'd be a projecting T. And then we'll merge the result of that with "tickle the child," giving you a TP again-- [COUGHS] Excuse me. Giving you a TP again-- that didn't help at all, did it? But with a different constituent structure. Several of you wanted to know, why are we not doing that? Yeah. And I attempted to convince you not to do that with the tools that we have available, the constituent structure tests that we've developed in like a day and a half of syntax. And I was not able to do that because I had not shown you sufficiently sophisticated constituency tests. So I began on the board showing you another constituency test, and I promised you that today I would use that test to demonstrate that we need the tree on the left and not the tree on the right, and that was a dramatic cliffhanger last time. And then I attempted to distract you from this by talking about other stuff for a little while. That's more or less where we were last time. Yeah? OK. So now let me attempt to show you that we should have the tree on the left and not the tree on the right. And I should tell you that although I'm going to give you a piece of evidence for that today, the evidence is going to come in in bits over our discussion of syntax, which is going to last a little while. So I'll show you one piece of evidence now and if you don't like that piece of evidence, well, just wait. There will be more. But what I'm going to do today, thanks to Kateryna, is rush something that I was planning to tell you about much later.
So if you don't like what I'm about to do, blame her. This was all her idea. OK? Yeah? OK. So here are some data that we talked about last time. Different sentences, I think, but the same general idea. Observation. These are both grammatical English sentences: "She would recognize Mary" and "Anyone who knows her would recognize Mary." They're both English sentences. But there's an interesting difference between them. They both contain pronouns and they contain the name Mary, but they differ in that in the first sentence, "she" cannot be Mary. It has to be some other person in that first sentence. So that first sentence has to involve two different people-- one of them female, and Mary is probably also female, though who knows. Maybe not. Yeah? On the other hand, in the second sentence, "Anyone who knows her would recognize Mary," it's possible that "her" refers to Mary. Those were the data that we collectively decided on last time when we were talking about examples like this. And I showed you the second kind of example partly-- I think somebody hopefully offered me a theory of why the first one was bad. It was the theory I was hoping they would offer where they said, yeah, maybe what's wrong with "She would recognize Mary"-- what's wrong with having "She" refer to Mary is that Mary is later in the sentence. Pronouns refer to other people, but they have to be people who we've already been talking about or something like that. Totally sensible, reasonable theory. You might expect it to work that way, and it doesn't. The second sentence shows that it's possible for pronouns to refer to things that are later in the sentence. Yeah? Cool. So how are we going to distinguish these two theories from each other? That was the cliffhanger last time. I said we can distinguish these theories or these examples. So that's just the data I just ran through. "She" is not Mary in the first sentence, but "she" can be Mary in the second sentence. It doesn't have to be, but it can.
All right. Now, I'm going to show you how we'll distinguish those kinds of examples using the tree that I was hoping you would just unthinkingly accept when I built it for you. And then what I'll do is show you that the theory that I'm going to show you for this pair of sentences wouldn't work if we had the other kind of structure. And then if you want, we can talk some about whether there is another way to talk about the other kind of structure. Now, my abilities as a graphic designer-- you may have picked up on this seeing my slides-- are limited. So I was not able to come up with a way to put all of the trees that you might like to be able to look at all at once on a single slide, so I'm going to put some of them up here on the board instead. We'll see if we can make reference to them. So she would-- here's the tree that people were offering as an alternative. Yeah. This is the kind of tree that people were offering as an alternative. So the tree that's on the slide is the tree that I want you to believe in. The tree that's on the board is the tree that people said, "Hey, why are we not doing it that way?" Right? I always feel nervous when I write things on the board that I want you not to believe. So just so we're clear, this is wrong. Not the right tree. That's the right tree, yeah? But I'm now going to try to convince you of that. So the reason it makes me nervous is that I know it's morning, loosely construed. It's late enough in the semester that you're probably sleep deprived. It's easy to think, well, if the professor wrote it on the board it must be true, yeah? This is not true. At least, I don't think it's true. I'll try to convince you. OK, so that tree up there. If we use that tree up there, we can account for the contrast between the two sentences on the slide-- [COUGH] My gracious. We can account for the contrast between the two sentences on the slide with the following principle. It goes like this. 
If you have an NP like "she" that merges with another node-- let's call it alpha-- the NP can't refer to any of the names that are inside alpha, any of the names that are dominated by alpha. I'll keep it specific to names for now. We're going to talk much more about principles like this later. We're going to develop a theory of what kinds of things pronouns can refer to and what kinds of things they can't. But this works for the pair of examples that we have here. So what's wrong with this example-- why can't "she" refer to Mary in this example? The theory would be it's because you merge "she" with that T bar there, and that T bar dominates "Mary." And therefore, this principle says "she" can't refer to anything that's inside that T bar and "Mary" is inside that T bar. That's what this principle is meant to do. Let me show you that that distinguishes between the two examples. So if instead we had "anyone who knows her"-- if we had "anyone who knows her" as the subject, "Anyone who knows her would recognize Mary"-- well, I'm going to try to get away without actually telling you how we're going to diagram all of "anyone who knows her." But "anyone" is the noun in "anyone who knows her," and it's a noun that's modified by what's called a relative clause. So "who knows her" is a kind of clause that modifies nouns. And I have given it the label question mark, question mark because someday we'll talk about relative clauses and talk about what label they ought to have and all that good stuff. But for now, it's a thing. Yeah. And what's inside it is that pronoun, "her," which got merged. Somewhere inside that relative clause is a verb phrase, "knows," and the noun phrase, "her." So if we ask, what was this noun first merged with? Well, it definitely wasn't merged with T bar. It was merged with something inside the relative clause. It was merged with this verb. Yeah. Yeah? 
So this principle that says, if a noun phrase merges with another node, alpha, the noun phrase can't refer to any names that alpha dominates. That distinguishes between "She would recognize Mary," where "she" is merged with something that contains "Mary," and "Anyone who knows her would recognize Mary," where "her" is not merged with something that contains "Mary." Do people see that this principle draws the distinction that we wanted to draw? OK. Giray? AUDIENCE: [INAUDIBLE] somewhere in the subtree or does it have [INAUDIBLE]? NORVIN RICHARDS: No, that's more or less what it means. So trees-- you think of the basic relation in trees as "immediately dominate," so a node immediately dominates the nodes that were merged to create it. And dominate is the transitive closure of immediately dominate. So you dominate the things you immediately dominate and the things those things immediately dominate, and so on until you run out of immediate domination relations. That's what "dominate" means. To put it more graphically, you dominate the things that are related to you by lines pointing down. That's what this means. Raquel? AUDIENCE: So did we agree that the "her" in this thing can refer to Mary potentially or are we saying specifically not? NORVIN RICHARDS: I asserted that it could. AUDIENCE: So it's that the NP contains "who knows her" and it's merging with the T bar. The T bar dominates "Mary." Then does that not contradict the new rule we just learned, or is it like the who knows part that combined with? NORVIN RICHARDS: Yeah. So here, let me draw "anyone who knows her" a little more carefully. So here's "anyone." Here's the relative clauses that I'm going to continue possibly foolishly not to label, so there's some stuff in here. And then, "who knows her," there's maybe a TP down in there containing a T bar and a T. And who knows, maybe the subject of it is "who." Right? 
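Giray's question about what "dominate" means has a direct computational rendering: dominance is the transitive closure of immediate dominance. A minimal Python sketch (the dictionary encoding of the tree is my own illustration):

```python
# "Dominate" as the transitive closure of "immediately dominate":
# a node dominates its daughters, their daughters, and so on down --
# i.e. everything reachable by lines pointing down.

def dominates(tree, x, y):
    """True if x dominates y; `tree` maps each node to the nodes it
    immediately dominates (its daughters)."""
    return any(child == y or dominates(tree, child, y)
               for child in tree.get(x, ()))

# [TP [NP she] [T' [T would] [VP recognize [NP Mary]]]]
tree = {
    "TP": ["NP:she", "T'"],
    "T'": ["T:would", "VP"],
    "VP": ["V:recognize", "NP:Mary"],
}
print(dominates(tree, "TP", "NP:Mary"))      # True: lines point down to it
print(dominates(tree, "NP:she", "NP:Mary"))  # False: no downward path
```

The recursion bottoms out at terminals, which have no entry in the dict and so dominate nothing.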
So somewhere inside-- so there's "anyone" and then there's this relative clause which contains a sentence, "who knows her." That's what relative clauses are. They're a special kind of sentence that acts as a modifier for nouns. So you're right. And then this thing is becoming part of a larger TP. "Would"-- so now I'm just drawing what's on the slide with a little more detail-- "recognize"-- bless you-- "Mary." So now there are-- apart from "Mary"-- well, OK. There are three noun phrases in this sentence. There's "Mary," there's this thing, and there's this pronoun, "her." Yeah? And claim, a noun phrase can't refer to something that is contained in the alpha with which the noun phrase merges. So that's a claim that says when we ask, who does "her" refer to here? Can it refer to Mary? The answer is, yeah, because this was merged with "knows." "Knows" doesn't contain "Mary" and so we're all set. AUDIENCE: So your ancestors merging doesn't impact that? NORVIN RICHARDS: Yes. No, you're not responsible for the crimes of your ancestors. That's right. So this noun phrase shouldn't be able to refer to Mary. Yeah? And that might be true. Yeah. Yeah. Yeah? OK, so this has been an attempt at an argument form, argument by successful account. So I've shown you how, with this principle and these trees, the trees that I had on the slide, this kind of tree where a TP contains three things. There's a T, the sister of the T is the verb phrase, and then the daughter of the TP is the subject of the whole thing. That's the kind of tree that I was drawing you. Or to say that in another way, when you're assembling a sentence out of those three things, what you're doing is you're first merging T with the verb phrase and then you're merging the result of that with a noun phrase. That was the order in which I wanted to do things. 
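The principle at work here — an NP merged with some alpha cannot refer to a name that alpha dominates — can be checked mechanically. Below is a hedged Python sketch with simplified, illustrative trees for the two sentences (the relative-clause internals are collapsed to a single node, and the encodings are mine, not the lecture's):

```python
# Sketch of the coreference principle: an NP that merges with a node
# alpha cannot refer to any name that alpha dominates. Trees simplified.

def dominates(tree, x, y):
    """x dominates y via the immediate-dominance dict `tree`."""
    return any(c == y or dominates(tree, c, y) for c in tree.get(x, ()))

def sister(tree, x):
    """The node x was merged with (its sister in the tree)."""
    for kids in tree.values():
        if x in kids:
            return next(k for k in kids if k != x)
    return None

def can_corefer(tree, pronoun, name):
    """Coreference is allowed unless the pronoun's sister dominates the name."""
    return not dominates(tree, sister(tree, pronoun), name)

# "She would recognize Mary"
tree1 = {"TP": ["NP:she", "T'"],
         "T'": ["T:would", "VP"],
         "VP": ["V:recognize", "NP:Mary"]}

# "Anyone who knows her would recognize Mary" (relative clause collapsed)
tree2 = {"TP": ["NP:subj", "T'"],
         "NP:subj": ["N:anyone", "RC"],
         "RC": ["VP:rel"],
         "VP:rel": ["V:knows", "NP:her"],
         "T'": ["T:would", "VP"],
         "VP": ["V:recognize", "NP:Mary"]}

print(can_corefer(tree1, "NP:she", "NP:Mary"))  # False: sister T' contains Mary
print(can_corefer(tree2, "NP:her", "NP:Mary"))  # True: sister is just "knows"
```

This reproduces the contrast: "she" is merged with a T bar containing "Mary," so coreference is blocked, while "her" is merged only with "knows," so coreference is free.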
I said, what I've tried to show you is that if we do things in that order we can have an account using this principle that I've just invoked for this contrast between these two sentences, the fact that "she" can't refer to Mary in the first sentence, but that "her" can refer to Mary in the second sentence. So this is, as I said, argument form, argument by successful account. If you buy these trees then, hey, we get this account of this contrast between these sentences. Yeah? Now suppose we were instead to do this kind of tree. Well, good. But notice at least the principle that I posited here wouldn't work for this kind of tree. So the principle that I've posited here says if a noun phrase is merged with some alpha the noun phrase can't refer to anything inside the alpha. Well, this noun phrase, "she" has just been merged with the T "would." So that principle at least would not draw this distinction. Possible response could be, yeah, but now we need another principle. To which the response is, yeah, well, what principle? Show it to me. So what we're going to do is we'll have this tree and we're comparing this tree with-- I'll draw it again-- "Mary," "recognize," "would," and now we're going to have "anyone," blah, blah, blah, "her." So we want this "her" to be able to refer to Mary but we want this "she" not to be able to refer to Mary. So there. That's part of an argument, anyway, that you should buy the kinds of trees, do the mergers in the order that I wanted you to do them in. We have a name for this relation that holds between "she" and "Mary" in this kind of tree. The relation that holds between a node like the she-- the noun phrase "she" in this tree and another node like the noun phrase "Mary" in this tree, which is contained-- I'm sorry. This is too complicated. Let me introduce some variables. 
We have a name for the kind of relation that holds between a node like the noun phrase "she"-- let's call that node x-- and another node that is contained in something that x was merged with. So the relation that holds between "she" and "Mary." Yeah. Or for that matter, between "she" and the verb phrase or between "she" and "would." Yeah? So the relationship that holds between a node x and everything that's inside the node with which that x was merged. Yeah. That relation is called C command. We're now not going to talk about C command for a little while because, as I said, you're experiencing a moment of anachronism. I'm telling you things that I was planning to tell you in a few weeks but because you demanded it, I'm offering you an argument for this particular way of assembling trees. Yeah. OK? All right. All right, good. Yes. Alternative way of drawing trees wouldn't let us use this explanation. You would need a different one, and then if you like that alternative way of drawing trees, it's your job to come up with a different one. Kateryna, yes, has already come up with one. AUDIENCE: I just want to clarify. Does C command refer to that principle that we were talking about? NORVIN RICHARDS: No, I'm sorry. Let me say it better. There's a definition for C command. AUDIENCE: C command is the relationship? NORVIN RICHARDS: C command is the relationship. So you say "x C commands y." There are various ways to define C command, but one of them-- here, I'll show you one and then we can talk about another one. You can say "x C commands y if every node that dominates x dominates y," and people usually add "and x does not dominate y." So this is one way to define it formally. What does that do? It says if you look at the noun phrase "she," what the noun phrase C commands is everything that is dominated by everything that dominates "she," the noun phrase "she." So what dominates the noun phrase "she"? Well, it's just the node TP. 
And so "she" C commands everything that's dominated by the node TP, which is basically everything except for TP. So T bar and T and the verb phrase. The T "would" C commands the verb phrase and also the verb and the noun phrase "Mary." Yeah. To put it yet another way, you C command your sister and everything your sister dominates. That's what C command is. Lots and lots of syntax makes reference to C command, and so we will keep coming back to it. It's a thing that will come up again. OK? So again, this is all dramatic foreshadowing or anachronism or whatever you want to say-- whatever you want to call it. These are things I was planning to teach you a little bit in the future, and so now you have a jump on life. Later on some things will suddenly become familiar. Yeah. Yep. All right, good. I have been teasing you for making me do this but I really appreciate it, actually. It's a real pleasure teaching people who disagree with me all the time. When I'm on airplanes and I can't avoid telling the person next to me what I do for a living and I tell them I teach linguistics at MIT, they're like, oh-- sometimes they say, "Oh, I didn't know they had linguistics at MIT." I'm like, "Yes, yes!" And then they say, "So what's it like teaching MIT undergrads linguistics?" And I say, "It's great because they all have these math and science backgrounds so they don't believe a word I say. It's terrific. So I go in there and I tell them things and they're like, well, no, show me some proof. I go, well, OK. In another university I think I would be able to get away with just the authority of having a beard this length. People would just believe me, but no. It's great." OK. This is a slide we saw before. This is just to remind you of a bunch of terminology. Some of it I've been using as we've been talking. So just to remind you, when we're talking about trees, we standardly use feminine kinship terms to talk about relations between parts of trees.
So we talk about sisters, which are two things that were merged to form a node. So for example, the D, "the," and the N, "child," those are sisters. The V, "tickle," and the noun phrase, "the child," those are sisters in this tree. We talk about mothers. So the noun phrase "the child" is the mother of D and N, "the" and "child." We also talk about daughters. So "the" and "child" are the daughters of the noun phrase. And I think I said last time that's it, so we don't talk about grandmothers or aunts or anything else. And then, and this came up when Giray asked a question earlier, the basic relation that's reflected in these trees is the relation of immediate domination. That's the relation that's created by the operation "merge." So when you merge two things, you create a new thing that immediately dominates the two things that you merged. That's just our name for that relation in the tree. And then there is another notion, dominate, which is the transitive closure of immediately dominate. So you dominate the things that you immediately dominate and the things that those things immediately dominate and so on, all the way down the tree. And then we said there's this word "constituent," which we've been using for some time, and all of you seem to have a handle on it anyway. But just to give it a formal definition, we'll say that a string of words, alpha, is a constituent if there's a single node that dominates just the words in alpha. So "the child" is a constituent because there's a node that dominates just those words, the noun phrase. "Tickle the child" is a constituent because there's a verb phrase dominating just those words. "Will tickle the" is not a constituent because there's no node dominating just those words. Is that clear? This is all just terminology, and I'm telling you about it because you're going to hear me use it. In fact, you already have. OK. All right?
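Since these relations (immediate domination, dominate, constituent, and the C command relation from a moment ago) all have precise definitions, they can be computed mechanically. Here is a minimal sketch in Python over a hypothetical encoding of a tree for "will tickle the child"; the node names and the dictionary format are just for illustration, not standard linguistic notation.

```python
# Immediate domination: each node maps to the daughters it was merged from.
# Nodes absent from the dictionary are words (leaves).
TREE = {
    "TP": ["T_will", "VP"],
    "VP": ["V_tickle", "NP"],
    "NP": ["D_the", "N_child"],
}

def dominates(x, y):
    """x dominates y: the transitive closure of immediate domination."""
    return any(d == y or dominates(d, y) for d in TREE.get(x, []))

def leaves(node):
    """The words a node dominates, left to right (a word is its own leaf)."""
    daughters = TREE.get(node)
    if daughters is None:
        return [node]
    return [w for d in daughters for w in leaves(d)]

def is_constituent(words):
    """True if some single node dominates just these words."""
    return any(leaves(n) == words for n in TREE)

def c_commands(x, y):
    """x C-commands y iff every node that dominates x also dominates y,
    and x does not itself dominate y."""
    if x == y or dominates(x, y):
        return False
    return all(dominates(n, y) for n in TREE if dominates(n, x))
```

On this toy tree, `is_constituent(["D_the", "N_child"])` is `True` because the NP dominates just those words, while `is_constituent(["T_will", "V_tickle", "D_the"])` is `False`; and `c_commands("T_will", "NP")` is `True`, since T C-commands its sister and everything its sister dominates.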
So the way we've been talking, then, we take pairs of things, we merge them to create new things. And we haven't said very much. In fact, we haven't said anything at all about why you pick the particular things that you pick. So I want to talk about that a little bit. Here are some sentences mostly involving ants, I guess, and anteaters. And they're all grammatical sentences of English, I think. "The ants thrived" and "The anteater arrived"-- this is kind of a little dramatic story about some ants and their difficult lives!-- "The ants thrived." "The anteater arrived." "The anteater devoured the ants." And "Mary slapped the anteater." A little sad story about ants, or a happy story about an anteater. It depends on whose side you're on, I guess. Yeah? Observation, old observation. Verbs, in lots of languages, maybe all languages, can be fruitfully thought of as coming in at least two types. There are verbs like "thrive" and "arrive" which need to not have objects. So you can say "The ants thrived" or "The anteater arrived." You cannot say "The ants thrived the ant farm" or "The anteater arrived the anthill." You can't put an object after those kinds of verbs. And then there are other verbs, like "devour" and "slap," that are the opposite. You can say "The anteater devoured the ants." You can't very well say "The anteater devoured." That sentence is not finished. Yeah. Similarly, "Mary slapped the anteater." You can't say "Mary slapped." Did I hear noises of disagreement? Facial expressions of disagreement? AUDIENCE: "Mary slaps." NORVIN RICHARDS: Sorry? Apart from the technical term, yeah. Apart from that. Yeah? OK. So there are verbs that seem to want to have objects and there are verbs that seem to want to not have objects. Classic observation. The name for that classic observation is that there are verbs that are what's called transitive. Those are the verbs that want to have objects, verbs like "devour" and "slap." There are other verbs that are intransitive.
Those are verbs like "thrive" and "arrive." I picked these verbs fairly carefully. There are many, many-- possibly most verbs-- that can be either one. So for example, you can say "The anteater ate the ants" and you can also say "The anteater ate." It's fine. Yeah? So some verbs are transitive, others are intransitive. Maybe there are also other verbs that are indecisive. Yeah? AUDIENCE: In that context, how is "ate" different from "devoured"? NORVIN RICHARDS: Good question. Yeah. But I think the facts are as I just described them, right? That "The anteater devoured" is not a complete sentence, but "The anteater ate" is. "Devoured" and "ate" mean slightly different things. You have to be louder and messier and more violent if you're devouring, I guess, and "eat" is a more plain vanilla verb. Why that means that one of these verbs has to be transitive and the other doesn't is a really good question. Yeah. Yeah. Other questions? To which I may, again, only be able to say, yeah, that's a good question. Yeah? AUDIENCE: I was just going to follow up and then say, are we going to answer that question? NORVIN RICHARDS: No. No, no, no. No. No. No, we are not. Other questions? I enjoy listening to your questions even if I can't answer them. OK, so some verbs are transitive, others are intransitive. So that's a classic observation. People have been saying that since the Romans and the Greeks. A less classic observation is that there are other kinds of verbs that not only need an object but also need a prepositional phrase. Actually, there are verbs that just need a prepositional phrase. We'll talk about that maybe later, but let's talk for now about a verb like "put." "Put" absolutely has to have an object, so it's transitive in that sense. But it also has to have a prepositional phrase. So you can say "The anteater put the ants onto a plate." This is a comparatively civilized anteater. But you can't leave out either the object or the prepositional phrase.
You can't say "The anteater put the ants." That's not a sentence. And you also cannot say "The anteater put onto a plate." Also not a sentence. So "put" needs to combine with both an object and a prepositional phrase. So apparently, in-- I hope this is what I say on the next slide. Yeah. Remember when we were talking about what we would need to state in our lexicon about each word or about each morpheme, if we're putting morphemes in our lexicon, which we were saying probably we should? We were saying, yeah, when you list an entry in your lexicon you're going to need to say whether it's a free morpheme or a bound morpheme and whether it's a prefix or a suffix or an infix or whatever all else, and how it's pronounced. And we spent some time doing phonology and convincing ourselves that saying how something is pronounced might be more complicated than it looks. Here's something else that we're going to have to state. When we list a verb we're going to need to state something about-- this is called selection. So we need to say verbs, for example, select for the kinds of things that they want to combine with. So our list, our lexical entry for a verb like "devour" has to say, oh yeah, this is a verb that needs to have an object. I think we need to say that. This gets back to your question of a second ago, right? Can we derive it from anything else about the meaning of "devour" that it has to have an object? Can we make that follow from something? Rather than just having to say next to "devour," oh yeah, this one needs an object. Can that be a general property of certain classes of verbs? That's an ongoing project. There are linguists trying to figure out the answers to questions like that. But yeah, for 24.900 we'll just state for every verb, this verb needs an object.
This is kind of like-- and it should look a little familiar-- when we were doing "unlockable" and other polymorphemic words like that we were saying, yeah, there are affixes that say, "I want to combine with a verb," let's say, and "I'll give you an adjective as a result." Similarly, verbs have to specify what they merge with. So the verb "put" is pronounced "put"-- I should really have done that in IPA. It's pronounced whatever. "Put." And it means something. It means to cause something to be in a place, whatever, however we're going to state its meaning, and it selects for a noun phrase and a prepositional phrase. That's the way we'll say that. So the verb "put" needs to merge with those things. OK? Actually, let's be a little more specific about how selection works exactly. So I just said-- yeah, is this much clearer so far? So let's be a little more specific about what selection is and what exactly you're selecting for. I just said there are verbs like "put" that select for an object and also a prepositional phrase. There are also verbs that only select for a prepositional phrase. "Depend" is one of those. So "depend" can combine-- kind of needs to combine, it depends, but at least can combine with a prepositional phrase, but not just any prepositional phrase. It needs to combine with a prepositional phrase in which the preposition is "on." You can depend on things. You cannot depend from things or at things or near things or by things. You can't depend near something. That's not how you use the word "depend." If you learn other languages-- especially if you learn other Indo-European languages because Indo-European languages seem to be especially fond of prepositions, I don't know, Indo-European languages, the language family that English is part of, we have more prepositions than languages really should-- And if you learn a new language, there's this weird mapping from one language to the next where you just need to learn which prepositions. It's not simple.
You don't get to learn "This preposition is the translation for that preposition." You learn things like, "This is the preposition that means 'at' for cities and large islands but for small islands you should use this other preposition instead." Not to scare you or anything, but if you want to learn other Indo-European languages you should brace yourself for things like that. If you don't want to do things like that, learn a language that's not Indo-European, because outside Indo-European it's mostly like, "Here is our locative preposition. Please use responsibly." But English is an Indo-European language, and so we have stuff like this, verbs like "depend" that say, "I want a prepositional phrase and I want the preposition to be 'on.' Can't be anything else." So apparently it's possible, at least in some kinds of cases, for a verb to select not just a prepositional phrase, but a prepositional phrase with a particular head, where by "head" I mean the preposition-- the thing that the prepositional phrase is named after, the smallest thing with the label P. So that's the kind of selection you get. There are other imaginable kinds of selection that you don't ever get. So prepositional phrases can be modified by adverbs. So you can say "She put them under the tree." You can also say "She put them right under the tree," "She put them directly under the tree," where those adverbs are modifying the prepositional phrase "under the tree," saying something about the spatial relation between those things and the tree. Yeah? But we're never going to find a verb that selects for a particular modifier. So there are verbs like "depend" that select for a prepositional phrase where the preposition has to be a particular thing, like "depend" needs "on." But we're never going to find a verb that selects for a prepositional phrase that needs to be modified by a certain kind of modifier. We'll never find a-- I made up a verb, "glorf." Yeah. And "glorf" says, "I need a prepositional phrase.
I don't care what the preposition is. You can say 'She glorfed them right under the tree,' 'She glorfed them right through the tree,' 'She glorfed them right beside the tree,' I don't care what the preposition is, but we need 'right' at the beginning of the prepositional phrase." There are no verbs like that in any language. Selection doesn't work that way. How does selection work? If you're selecting for a prepositional phrase, then whatever restrictions you have on the kind of prepositional phrase you want, they are restrictions of the form "I would like the preposition to be (this)." Sometimes they are restrictions of the form "I would like the preposition to be (this class of prepositions)." So I might have a slide about this later too. "Put" can combine with a bunch of different prepositional phrases. The preposition can be "under" or "on," "beside," whatever, but you can't say things like "She put them during the party" or "She put them despite the rain." So there are a bunch of prepositions that don't go well with "put." Basically, "put" needs a prepositional phrase that is a location. So that's a class of prepositions but it's not every imaginable preposition. But there are no verbs-- there's no "glorf," there are no verbs that say, "I want a prepositional phrase. I don't care what the preposition is, but I want it to be modified by 'right,'" or "I want it to be modified by 'directly,'" or even just "I want it to be modified." There aren't verbs like that. So it looks as though heads, like verbs, get to select for phrases, and they-- this is an old observation of Noam Chomsky's in a '65 book of his. He said it looks like there are all these cases where a verb is not only selecting for a certain kind of phrase, but it's selecting for a certain kind of phrase with a particular head or with a head of a particular class, like "put" wanting a locative preposition. Yeah? Sound great? OK. Cool.
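To make the idea concrete, here is a toy sketch in Python of what such lexical entries might look like; the entry format is invented for illustration, not standard linguistic notation. A plain category name means the verb just wants a phrase of that category; a (category, head) pair means the verb is also picky about the head, like "depend" needing "on."

```python
# Toy lexicon recording what each verb selects for. The further
# restriction of "put" to locative PPs is omitted for simplicity.
LEXICON = {
    "thrive": [],                   # intransitive: selects nothing
    "devour": ["NP"],               # transitive: needs an object
    "put":    ["NP", "PP"],         # needs an object and a PP
    "depend": [("PP", "on")],       # needs a PP whose head is "on"
}

def satisfies(verb, complements):
    """complements: (category, head) pairs the verb was merged with.
    True if they match the verb's selectional requirements."""
    required = LEXICON[verb]
    if len(required) != len(complements):
        return False
    for req, (cat, head) in zip(required, complements):
        if isinstance(req, tuple):
            if (cat, head) != req:      # picky about the head itself
                return False
        elif cat != req:                # picky only about the category
            return False
    return True
```

So `satisfies("depend", [("PP", "on")])` is `True` while `satisfies("depend", [("PP", "near")])` is `False`, and `satisfies("thrive", [("NP", "the child")])` is `False`, matching the judgments above.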
So once we believe this, once we recognize that this is true, that this is how selection works, that verbs get to select for things they want to merge with-- not just verbs, we'll see. All kinds of things get to select for things that they want to merge with. And if they are picky about the nature of the thing that they merge with, what they are picky about is the head of the thing that they merge with, the thing that the phrase is named after. So the preposition, in the case of a prepositional phrase. So once we recognize, from the "depend on" phenomenon, that that's a thing, then we can start using it as a way of looking around for other heads when we see that kind of relation of pickiness. If you have this verb then you had better have this kind of word here. Then we get to suspect that there's a selection relation there because now we know that that's how selection relations work. So here's another example of the same kind. "I think that I have won the lottery." Fine. That's actually kind of pleasant to think about. "I wonder whether I have won the lottery." Also fine. But "I think whether I have won the lottery." No. I cannot say that. And also "I wonder that I have won the lottery." Cannot say in modern English. There are older versions of English in which you could say things like this. It meant "I am surprised that I have won the lottery." But in modern English the first two sentences are OK, the second two sentences are bad. Do people agree? Don't let me run roughshod over your English judgments, but I think this is true. So here's another place where there's this relationship of pickiness. So we know that if you have a "depends," you must have a prepositional phrase and the head of the prepositional phrase must be "on." Here we seem to be seeing that with "think" there can be a clause after it, and then there's a particular word that starts that clause which must be "that" and cannot be "whether."
And we're seeing that "wonder" seems to be selecting for the clause that comes after it, and it's picky about the properties of the first word in that clause. It can be "whether" but it cannot be "that." So here we get to make the move that I foreshadowed. Is that going to go-- we can say maybe this is like "depend" and "on." Maybe "think" is selecting for that clause and maybe "that" is like "on": "on" is the head of the prepositional phrase. The prepositional phrase is named after "on." "On" is a preposition. Similarly, we're going to say this clause has, as its head, words like "that" and "whether." We have a name for words like "that" and "whether." We call them "complementizers," which-- I feel as though I've done a lot of apologizing already for terminology in this class so maybe I'll just stop. That's what we call them. We call them complementizers. Deal. You may have heard them called other things in English classes. "Subordinating conjunctions," they're sometimes called. Try not to get too hung up on that. In this class they are called complementizers. And they have the handy abbreviation C. So what we'll say is that a verb like "think" selects for CP. So the name we're going to give to this clause that comes after "think," "that I have won the lottery," we're going to call that a CP. Why? Well, because we've seen that this relationship of pickiness holds between "think" and "that" and between "wonder" and "whether," and that entitles us to believe "that" and "whether" are the heads of the phrases that "think" and "wonder" are selecting for. They're like the "on" in "depend on." Yeah. So that's the reason to draw the trees this way. So we'll say, when you have a clause like "that I have won the lottery" or "whether I have won the lottery," that thing is a CP. "That" or "whether" is its head, and those things are being selected by verbs like "think" or "wonder." Yeah. OK? All right. Those are all complementizers. OK. So are we done with syntax?
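The "think"/"wonder" pattern fits the same mechanism: a clause-taking verb selects a CP and can be picky about its head, the complementizer. A toy sketch (the entries, including "know" accepting either complementizer, are illustrative assumptions):

```python
# Each clause-taking verb lists the complementizers it tolerates as the
# head of its CP complement.
SELECTS_C = {
    "think":  {"that"},              # "I think that I have won the lottery"
    "wonder": {"whether"},           # "I wonder whether I have won the lottery"
    "know":   {"that", "whether"},   # some verbs accept either
}

def accepts_cp(verb, complementizer):
    """True if the verb can select a CP headed by this complementizer."""
    return complementizer in SELECTS_C.get(verb, set())
```

Here `accepts_cp("think", "that")` is `True` while `accepts_cp("think", "whether")` is `False`, mirroring the good "I think that I have won the lottery" versus the bad "I think whether I have won the lottery."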
We now have this belief in selection which is allowing us to explain why certain pairs of things are merging in the way that they are. So in this example, "I will tickle the child," when we ask why did you merge "tickle" together with the noun phrase "the child," we now have an answer. We get to say, yeah, we did that because "tickle" is a transitive verb. So if we look up "tickle" in the lexicon the lexicon will tell us, yeah, this is a verb that at least has a use in which it wants to combine with a noun phrase object. So that's why you merged the verb with the noun phrase "the child." And similarly, we might say, yeah, why did we merge the verb phrase together with the T "will"? And we can say something similar. The T "will" is selecting for a verb phrase. It needs a verb phrase as its sister, and so there. That's why we're merging these two things with each other. So we have beginnings of explanations or answers to questions like, why did you merge those things in that order? Why did you merge those two things together? The answer is going to sometimes be, well, because there's a selection relation between those things. OK? But here's the problem. You can do anything with a feather. So I spent some time telling you there are verbs that need objects and there are verbs that need to not have objects. There are transitive verbs, there are intransitive verbs, and OK, there are verbs that can be either one. And I tried to move us quickly past that class of verbs, but they exist. So yeah, "the child," we know why "the child" is in there and we think that the relationship between "tickle" and "the child" is a relation of pickiness. There are some verbs that can merge together with an object like that and there are other verbs that can't. But "with a feather" is not like that. You can tickle a child with a feather. You can walk down the street with a feather. You can stand with a feather. You can dance with a feather. You can do anything with a feather.
Prepositional phrase "with a feather"-- you can write your dissertation with a feather if you work hard at it. Prepositional phrase like "with a feather," there's no pickiness involved. It can just combine with anything. So this does not fall under our generalizations about how selection works. We don't seem to have the kind of relation that we have between the verb and the object. Yeah, there are verbs that want objects and verbs that don't want objects, but we're not going to find a verb that needs "with the feather" in this sense, using a feather. Yeah? AUDIENCE: Could you almost look at it from a perspective of agency, the fact that you could say "with a feather" but not "with a dog." The dog is able to-- you can't tickle a child with a dog. NORVIN RICHARDS: Yeah. Well, it's complicated, right? You can tickle a child with a dog-- well, let's see. The fact that you cannot tickle a child with a dog, if it's a fact, I think is probably a fact about dogs. Right? I mean, so we're in "colorless green ideas" territory here. I was about to do the thing that everybody always does when I show them colorless green ideas where they say, "Well, but if 'colorless' and 'green' meant different things then the sentence would be OK." And I think tickling a child with a dog, I find myself saying, well, but what if it's a particularly hairy dog? And you grab it and rub it on the child in just the right way, you could imagine maybe being able to-- but yeah. "I will tickle the child with the hammer." Probably you could never tickle the child with a hammer, but this is 24.900 and we don't have to care about that. So some things are better tickling instruments than others, but thankfully I am not here to educate you about tickling instruments. That'll be somebody else's job, if anybody. Yeah. Not that I'm promising that there is an MIT class where you can learn about tickling instruments. I mean, I suppose there could be, but it's definitely not this one. Yeah. Yeah? Good. All right. 
OK. So "with the feather" is another kind of thing. Yeah. This is just saying what I just said. Doesn't seem right to say that "child" or "tickle" selects "with a feather" because you can do anything with a feather and, uh-- there. Yeah. So "I will tickle the child with the feather." "I will devour the child with the feather." "I will write a novel with the feather." "I will thrive with the feather." You can do anything with the feather. Some of these things are more or less plausible than others, but they're all grammatical. Yeah? Does that sound right? I don't know why I used "devour." So we said there are verbs like "tickle" and "devour" and "write" that need to have objects, or at least can have objects. And there are other verbs like "thrive" that pickily can't have objects. So not every verb can be followed by "the child." I can "tickle the child." I can "devour the child." I can "write the child." But I cannot "thrive the child" no matter how hard I try. Yeah? AUDIENCE: In the previous slide-- NORVIN RICHARDS: Yeah? AUDIENCE: The fact that with something a little bit better [INAUDIBLE] verb. How does that relate to what you said earlier about how verbs select the preposition? NORVIN RICHARDS: So yeah, good. Nice point. AUDIENCE: [INAUDIBLE] doesn't work. NORVIN RICHARDS: Yeah. Yeah, yeah, yeah. That's a good point. So maybe this is a way to say it. If you have a verb and you give that verb everything it needs-- everything it selects, you can always then add "with a feather." So "I will depend on the child with the feather" is fine. You gave "depend" its prepositional phrase that it wanted, "on the child," and then you get to add "with the feather." "With the feather" is dessert. You never need "with the feather," but you can always have it. That's right. And then yes, you're absolutely right. You can't say "I will depend with the feather" because that's a case where "depend" needs a prepositional phrase and it needs the preposition to be "on."
But if you give "depend" what it needs, then you may always add "with the feather." Yeah. It's an optional extra, always. Yeah. Always, modulo-- "modulo"! That's a word I learned in grad school. It means "Always setting aside issues like, wait, what do you mean you will devour the child with the feather? Like, how are you-- what? How are you going to do that?" So some of these things it's not so clear what exactly-- or "I will thrive with the feather." Maybe that means something. I will live a long and happy life as long as I have the feather. So we're back to "colorless green ideas" territory, more or less. We're not in the business of trying to figure out which of these sentences make sense. We're in the business of trying to figure out which of them are grammatical, and they're all grammatical. That's all I'm asserting. Yeah? AUDIENCE: [INAUDIBLE] "...with this feather." Does "with this feather" modify "the child" or "tickle"? NORVIN RICHARDS: Yes. To which the answer is yes. You're right. So it's ambiguous. It can have either of those structures and either of those meanings. Yeah. So good point. So "I will tickle the child with the feather" could mean-- I think we talked about this in class-- I will use the feather to tickle the child, or I will tickle, possibly with my fingers, the child who has a feather. That's absolutely right. But "with the feather" doesn't-- so "with the feather," yeah, it's even more adaptable than I made it sound. It can combine with noun phrases and with verb phrases, and it can always combine with noun phrases. Concentrate on that reading where it's the child who has the feather, "I can tickle a child with a feather." I can tickle an adult with a feather, I can tickle a dog with a feather, I can tickle an orange with a feather-- would be hard, but it's grammatical. Yeah. Yeah. Yeah? So "with a feather"-- anything can have a feather, and you can always use a feather to do anything. That's the moral of all this. OK? People are leaving. 
Was it all the talk about devouring children? AUDIENCE: People are leaving. NORVIN RICHARDS: Sorry? AUDIENCE: I was trying to come up with something that wouldn't depend on this feather. NORVIN RICHARDS: "With this feather"? No. So. "Who are you with this feather?" "I am far, far more impressive with this feather." Yeah. Yeah. Yeah? OK. All right. So you can do anything with the feather. And conversely, so "with a feather" can combine with anything-- maybe not conversely. "With a feather" can combine with anything. Objects can't combine with just anything. You have to pick a verb that is transitive or at least can be transitive. So you can "tickle a child." You can "devour a child." You can "write a child." You cannot "thrive a child." So if you pick an intransitive verb then you can't combine it with a direct object. So there's a selection relation between the verb and its object. That's what we were just talking about. We have to look up verbs like "tickle" and "devour" and "write" in the lexicon and find out whether they can combine with objects or not, but we'll never have to look anything up to know whether you can add "with a feather." You can just always add "with a feather." Yeah? So yes, when we're building trees we're going to have selection relations that sometimes tell us, yeah, if we have, say, this kind of verb, we'd better have this kind of phrase combining with it. We have "devour." It had better combine with an object. Something has to get devoured. But then there are going to be other kinds of things like "with a feather" that are just always options. You can always put them in. Another terminology break. We call things that are selected arguments. So "the child" is an argument of the verb "tickle," in this example, and phrases that don't seem to be selected by anything, we call those adjuncts. That's just the name for those things. Here we have to be careful. This is a point where people get confused sometimes. So let me see if I can say this.
Knowing me, I will probably say this several different ways and we'll just see if I manage to say it in a way that makes it make sense because people get confused about this. Arguments. Here's a way to think about it. Arguments, like direct objects, arguments are picky about which heads they can combine with. So if you're asking, should I put in a noun phrase object for this verb? Well, you've got to know what the verb is, whether it's a transitive verb or an intransitive verb. Adjuncts are not. So adjuncts like "with a feather" can combine with anything. There's no selection, really. You don't have to look anything up to find out whether you can merge "with a feather." The confusion comes in like this. The problem is there are optional arguments. So I flagged this earlier. There are verbs that are transitive and there are verbs that are intransitive, and in the earlier slides where I was talking about transitivity I tried to concentrate on verbs that were comfortable in their identity as either transitive verbs or intransitive verbs, but there are many, many verbs that can be either one. So you can "dance" or you can "dance a hornpipe." Here's the thing. So you can "dance" or you can "dance a hornpipe," but when you "dance a hornpipe," "a hornpipe" is an argument of "dance" because you have to know whether the verb can be a transitive verb to know whether there can be a noun phrase there. The confusion comes in. People look at sentences like "I will devour the child with the feather." Let's make it "tickle." "I will tickle the child with the feather." Syntacticians are the kind of people for whom "I will devour the child with a feather" and "I will tickle the child with the feather" are basically the same sentence. We're just not interested in the differences between those. They're basically the same. People look at that and they go-- well, maybe I want it to be "devour." No, we can stick with "tickle." So what we say is this is an argument and this is an adjunct. 
To say that this is an argument is to say that whether it can be there or not is determined by what verb you've got, whether the verb is transitive or not. To say that this is an adjunct is to say that I don't care what the verb is when I decide whether to put that in or not. What people get confused by is this. Adjuncts are pretty typically optional. You can tickle a child with a feather. You don't have to, you can just tickle a child. There's never a requirement that an adjunct be there. And so this is one of those tests for whether you're looking at an argument or an adjunct that only works in one direction. If it's an adjunct it had better be optional. But if it's optional, it could be an adjunct or it could be a hornpipe. It could be an object, in these kinds of examples, that's combining with the kind of verb that can be either transitive or intransitive. So if you're looking at a phrase and you're saying to yourself "This phrase is optional," you're not then entitled to guess that it's an adjunct, necessarily. You only get to find out that it's an adjunct by asking yourself, will it matter what verb I have, let's say? Or what noun I have, depending on what you're combining it with. OK? Is that clear? Have I said it in enough different ways, Faith? AUDIENCE: Is this at all complicated if you treat the feather as, like, an animate entity? So if you were to actually [INAUDIBLE] a person and you would say, I danced with the feather. Would it mean you're dancing-- like, is that [INAUDIBLE]? NORVIN RICHARDS: Oh. Oh. Oh. Oh. Oh. Oh. Oh, dear. Well, let's reluctantly put feathers aside for a second and talk about something that we know is animate, like children. So you can dance with the child. You can play games with the child. You can eat lunch with the child. "With the child" is probably still an adjunct in that kind of reading. So "with the feather"-- you're raising a good point that with in English-- I was warning you about Indo-European prepositions. 
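The lexicon-lookup asymmetry described here — deciding whether an argument can appear requires checking the head's lexical entry, while adjuncts like "with a feather" attach freely — can be sketched in a few lines of code. This is a toy illustration of my own; the dictionary, its entries, and the function names are assumptions for exposition, not anything from the lecture slides:

```python
# A toy model of the argument/adjunct distinction from the lecture.
# The lexicon entries and helper names below are illustrative assumptions.

LEXICON = {
    # verb: set of transitivity frames it allows
    "tickle": {"transitive"},
    "devour": {"transitive"},
    "thrive": {"intransitive"},
    "dance":  {"transitive", "intransitive"},  # optional argument: "dance (a hornpipe)"
}

def can_take_object(verb):
    """Arguments are selected: we must look the verb up in the lexicon."""
    return "transitive" in LEXICON[verb]

def can_take_adjunct(verb):
    """Adjuncts like 'with a feather' combine with anything: no lookup needed."""
    return True

print(can_take_object("devour"))   # True:  "devour the child"
print(can_take_object("thrive"))   # False: *"thrive the child"
print(can_take_object("dance"))    # True:  "dance a hornpipe" (optional argument)
print(can_take_adjunct("thrive"))  # True:  "thrive with the feather"
```

Note that `can_take_adjunct` ignores its input entirely — that is the point: adjuncthood never depends on which head you picked, while objecthood always does.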
"With" can mean two subtly different things, right? So you're pointing out that if I tickle a child with a feather, I'm using the feather as an instrument for what I'm going to do. If I have lunch with a feather-- if I have lunch with a child, then that "with" is introducing somebody who is doing something together with me, right? We're both doing the activity in the verb frames. So those are maybe two different "with"s, but they're both ad-- those two prepositional phrases, they're probably both adjuncts. And "I will dance with the feather," I guess, could have either reading in principle. Yeah. Yeah. Yeah. Yeah? Cool. Nice example. OK. OK, so that's the distinction between arguments and adjuncts. Why are we bothering to distinguish arguments and adjuncts? Well, there are various places where they behave differently in interesting ways. So take a sentence like "I decided on the boat." That can mean a couple of different things. Somebody tell me something that it can mean. AUDIENCE: I made the decision on the boat. NORVIN RICHARDS: Sorry? AUDIENCE: I made the decision while I was on. NORVIN RICHARDS: "I made the decision while I was on the boat." That's one thing it can mean. What's the other thing? AUDIENCE: "I was a boat dealer and I was picking between two different boats and I decided on this one." NORVIN RICHARDS: Yeah. Yeah, yeah, yeah. So I was deciding between a boat and something else. You know, am I going to buy-- or maybe I was deciding between two different boats and I decided on the boat. I decided on the big black boat instead of the small white boat. Yeah. Whatever. Yeah. Or I decided on the boat rather than the motorcycle. I was deciding what to spend my lottery money on. Yeah. Yeah. It can mean either of those things. Maybe the next slide will do this and I won't have to mess with the blackboard. Yes. So those are the two readings, actually, in the same order that we got them. 
Faith's reading, which is the first one, I made my decision while I was on the boat. And then Joseph's reading, I chose the boat. It can mean either of those things. Is "on the boat" an argument or an adjunct? Given what we've said about arguments and adjuncts. AUDIENCE: It depends on the meaning that you intend. NORVIN RICHARDS: Yes. Yes. The answer to that question is yes. Is it an argument or adjunct? Yes. Yes, it is. Yeah. Suppose we take the first reading for "on the boat." So I made my decision while I was on the boat. Is that an argument or an adjunct, that version of on the boat? AUDIENCE: Adjunct. NORVIN RICHARDS: It's an adjunct. You can do anything on a boat. Sleep on a boat. Write your dissertation on a boat. Lots of things you can do on a boat. On a boat can combine with all kinds of things. But the second reading where I decided on the boat ends up meaning "I chose the boat," that's kind of an idiosyncratic fact about "decide" and "on," right? That "decide" and "on" can squash together to mean "choose." Right? You can't do this even with other expressions that more or less mean "decide." You can't say "I made up my mind on the boat" and mean "I chose the boat," I don't think. Right? And you certainly can't combine "on the boat" with any random verb and expect to get a new meaning. That's not the way life works. So there's an adjunct meaning of "on the boat" and there's an argument meaning of "on the boat." OK. Cool. Now, yeah, is it an argument or adjunct? Yes. So the first one is an adjunct, the second one is an argument. Good. Consider a sentence like, "I decided on the boat on the plane." Please, while you're considering the sentence, do not think about cases where boats are on planes or planes are on boats. There are planes that are also boats-- I've been teaching this class to MIT undergrads for a while. I know the kinds of moves that you are thinking about even now. 
So just consider cases where-- so the imaginable things that it could mean given those two readings for "on the boat." I think there are two. It could mean, "While I was on the plane I chose the boat," or it could mean "While I was on the boat I chose the plane." Yeah? In principle it ought to be able to mean either of those things. Can it mean both of those things? AUDIENCE: [INAUDIBLE], but I just-- I decided on the boat on the plane. Like, deciding on the plane on the boat sounds much weirder. NORVIN RICHARDS: Yeah. Here, I'll tell you what. Let me write these readings down just so we can point at them as we're talking because otherwise I can tell we're going to get confused. I'm already confused and we've only just started. So one reading is "I chose the boat while I was on the plane." The other is "I chose the plane while I was on the boat." Is that legible at all? Can you read that, any of you? Yes? Sort of? Kind of? OK, good. So those are the two readings. Call them reading one and reading two, or we could call them the "choose the boat" reading and the "choose the plane" reading. And now we're trying to figure out which of those things can this mean. Kate? AUDIENCE: OK. So instinctively I chose reading one because the boat just-- I don't know. That feels like it makes sense. But then I thought, if you work hard enough to make sense, especially if you were to replace the second prepositional phrase with something more simple, like "at lunch"-- NORVIN RICHARDS: Oh. AUDIENCE: In [? formal, ?] or yesterday, I guess, which isn't-- I don't know. [INAUDIBLE] But "I decided on the boat at lunch" makes sense. You still have essentially the same content. NORVIN RICHARDS: Yes. Yes. So that's a nice point. Let me react to it by attempting to squash it. So don't change either of the prepositional phrases into anything else. But we are going to want to come back to that because that's a nice point that you're making.
I think if I say "I decided on the boat at lunch," that we're back to the original ambiguity. Is that your feeling? That that can mean either I chose the boat at lunch or I decided while I was on the boat at lunch. Is that true, do you think? AUDIENCE: I just got lost. NORVIN RICHARDS: Me too. Me too. So yeah, stick to these prepositions or these prepositions. That sentence, we're trying to figure out what that sentence can mean. What can that sentence mean? Yeah? AUDIENCE: I think the first one would be more usual in the sense that if you tried to-- I mean, I know you told us not to replace anything, but-- NORVIN RICHARDS: It's all right. AUDIENCE: If you do try to replace "on the boat" with another adjunct, I don't know, like "the building," it would be a little weird. It would pass off as something-- I would pass it off as something that you could say, but I wouldn't quite say it and it sounds super weird to say. Like, "I decided in the building on the plane"? Is this clearly deciding on the plane? Decide on the plane, but it sounds weird because "decided on" should be kind of better. NORVIN RICHARDS: OK, so now we've had two votes for changing some of the prepositions, so I'll stop fighting it. We could also be thinking about, "I decided in the building on the boat." Is this grammatical? Sort of? Yeah? Yeah? And does this mean I chose the boat while it was in the building? It can. Yeah. I guess it's hard for me to be both in a building and on a boat unless the building is on the boat or the boat is on the building. Yeah. Many, many hands. Yes? AUDIENCE: Yeah. My thinking was that when the prepositional phrase is in argument, it's more attached to the verb by association. So it kind of makes less sense to split in the middle. So that sentence that you put below there doesn't quite go as well, and so that's why choosing the boat on the plane makes more sense. NORVIN RICHARDS: OK. Cool. 
So actually, what you just said and what Faith just said and what you just said a second ago, these dovetail nicely, I think. Let me now try to summarize something that I think all three of you said. Maybe we can say it like this. When I asked, can this mean "I chose the boat while it was in the building"? Everybody was like, nah. I think it can, but for me I have to pronounce it in a particular way. So I can say "I decided, in the building, on the boat." I think that's the easiest way for me to say it and have it mean that. So I did something fancy with my voice there that involved trying to hide this prepositional phrase. Yeah. It's sometimes called a parenthetical where you put in these things in strange places. There's lots of interesting work on parentheticals. So one thing that all of you are teaching us, maybe, is that it's dangerous for me to just show you these words on this slide and ask you what they mean. What I should really do is pronounce the sentence at you. So when you're asking yourself how many things can this mean, don't ask yourself what "I decided, on the boat, on the plane" means. Ask yourself, what does "I decided on the boat on the plane" mean? Should I do that again? How many of you think that it can mean number one? "I decided on the boat on the plane." How many of you think that it can mean number two? Is there anybody who thinks that it can mean number two and cannot mean number one? Is there anybody who thinks that it can mean number one and cannot mean number two? OK. So through the power of democracy we've come to the conclusion that it can mean number one and can't mean number two, except for a couple of you who feel that it can mean either one. Yeah? I think the ones who feel that it can mean either one are ones who maybe have ways of doing parentheticals that are less dramatic than I just did, this process of hiding a prepositional phrase. 
Because I agree that you can say "I decided"-- see if you agree-- "I decided on the boat on the plane" can mean "I chose the plane while I was on the boat." I have to say that in a particular way, play those games with pitch as I speak. Lots of interesting questions about what the heck I am doing when I do that. Yeah, you had a question a while ago. Sorry. Yep, you. AUDIENCE: What do you think about the sentence "I, in the building, decided on the boat." NORVIN RICHARDS: Whoa. "I, in the building, decided on the boat." You asked this question before. You're the guy who likes to modify pronouns with prepositional phrases. What's your name? AUDIENCE: I'm the guy who likes to modify pronouns-- NORVIN RICHARDS: OK, that's your name. OK, the guy who likes-- do you have a nickname? "Guy." We'll call you "Guy." Cool. So I, in the building, decided on the boat. Hey, you think I could do that? Yeah? Then we have the problem that if I'm in the building I can't also be on a boat unless really unlikely things are happening. Right? Unless there's a building on a boat or a boat on a building or something like that. And so you're kind of biased in favor of the reading where I chose the boat because we've also said that I'm in the building, thanks to the prepositional phrase that you're named after, the one that modifies the pronoun. That's a nice example. Faith? AUDIENCE: Don't we have something similar that happens if you say, "I decided to run on the boat?" Because either you're running on the boat or you're deciding to run for president-- I don't know. Campaign while you're on the boat. NORVIN RICHARDS: Yeah. Yeah. "I decided to run on the boat." That's a nice example. Yeah, so I decided to run for president or I decided to be in the marathon while I was on the boat, or I decided, "Here's a fun thing I'll do. I'll run on the boat." Yeah. Right. Those are good examples where-- actually, all of these are examples where-- yeah, let me back up. 
So "I decided on the boat on the plane." Through the power of democracy we've come to the conclusion that that means number one. What that means is there are two places for "on the boat"-- for prepositional phrases-- to be. There's the place where it gets to be an argument and there's the place where it gets to be an adjunct. Right? And the place where it gets to be an argument is closer to the verb. Several of you said this, right? So if it's right after the verb it gets to be an argument, and if it's not right after the verb then it needs to be an adjunct. And in simple sentences, like "I decided on the boat," you don't know where it is. So you don't know whether it's in the argument place or the adjunct place, and so the example is ambiguous. It's kind of like the ambiguity of "I will tickle the child with the feather," where you don't know whether "with the feather" modifies "the child" or the verb phrase "tickled the child." Or, and now this is why finally I'm becoming relevant to what you just said, Faith, that's another example where "on the boat," it's ambiguous where you put it. It could be a couple different places in the structure and it could modify different things. Yeah. So we're getting these ambiguities. And these ambiguities, if you pile these prepositional phrases up, disappear for most of us. Some of you are more creative with prepositional phrases. Kateryna, for example, has great prepositional phrases-- go ahead.
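One way to picture the two attachment sites is as an order of structure-building: the selected argument merges with the verb first, and adjuncts layer on outside whatever has been built. Here is a minimal sketch, using nested tuples as stand-in trees; the representation and function name are my own illustrative choices, not a real parser:

```python
# Toy sketch of the two attachment sites for a PP: merge the selected
# argument first (closest to the head), then attach adjuncts outside.
# Nested tuples stand in for syntactic trees.

def build_vp(verb, argument=None, adjuncts=()):
    vp = (verb, argument) if argument is not None else (verb,)
    for adj in adjuncts:
        vp = (vp, adj)  # each adjunct wraps the structure built so far
    return vp

# Reading one of "I decided on the boat on the plane":
# "on the boat" is the argument (innermost), "on the plane" the adjunct.
reading_one = build_vp("decided", argument="on the boat",
                       adjuncts=["on the plane"])
print(reading_one)

# A simple sentence like "I decided on the boat" is ambiguous because the
# single PP could occupy either position, yielding two distinct structures:
as_argument = build_vp("decided", argument="on the boat")
as_adjunct = build_vp("decided", adjuncts=["on the boat"])
print(as_argument == as_adjunct)  # False: same string, different trees
```

The point of the sketch is just the ordering: because the argument slot is filled before any adjunct attaches, an argument PP always ends up structurally closer to the verb than an adjunct PP.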
And then because this is 24.900 we will probably never get out of that. But intonation is a fascinating topic that you then go try to add to your model. Yeah. Yes? AUDIENCE: So to correctly write sentence two and the sentence below, would you put commas around-- NORVIN RICHARDS: Oh, yeah. AUDIENCE: --"on the boat"? NORVIN RICHARDS: Oh, I see. So yes, if we-- you mean the string "I decided on the boat on the plane," if we wanted it to have reading two? Yeah, commas are one way that you indicate the special intonation that makes this possible. So commas-- no language, actually, has a very good way to indicate intonation. It's not something that we're good at spelling, but punctuation marks are one of the things we do to do that. Yeah. Nice point. Raquel? AUDIENCE: I was thinking that the rule that you were saying about the number one being the preferred computation. It makes a lot of sense in that the-- if there was going to be a situation where a thought two sounded correct, I think it would have to be something where the adjunct sounds like it needs to come really close to the verb decide on a whim. If you said, "I decided on a whim on the boat," I feel like I would be more likely to say that made sense than "I decided on the boat on the plane" because I need to hear and decide on a whim. NORVIN RICHARDS: Yeah. That's a very nice point and a really nice example. I mean maybe-- so I think the next slide is going to echo a version of that. What we're seeing is-- so here I've got an idealized version of the judgments that we more or less got. There are some people for whom you can get either reading. Interestingly, nobody has the opposite judgment, that "I decided on the boat on the plane" preferentially means "I chose the plane while I was on the boat." 
And what we're seeing then is that if you have-- so one difference between arguments and adjuncts, the reason that I introduced you to them, is that arguments are picky about which heads they can combine with and adjuncts are not. You can do anything with a feather but something like a direct object, you have to look at the verb to see whether you have it or not. And what we're also seeing is that if you have a head that has both an argument and an adjunct, and that's the "decide on the boat on the plane" thing, then the argument is closer to the head. So far, yeah? And this connects with what Raquel was just saying. One way to think about it, maybe, is to say, if you have these two prepositional phrases and you have this verb and you're trying to decide, in what order shall I merge these things with each other? Apparently the answer is, well, first, see what the verb would like you to do. So if the verb needs something, then satisfy the needs of the verb first and then fool around with adjuncts. Yes. You can do adjuncts later. And Raquel's making the point that there are various things in life that verbs could need and we have to develop a theory of all of them, and today we haven't. Yep. Any questions about any of this? I think, if I remember right, this might be a good place to pause and see what I was going to do next. Yes, no. So we're about to do another test for arguments and adjuncts. And we will do it, but we'll do it next time where "next time" is not next week, but the week after, right? Because you guys have to go break spring. So have a great spring break, everybody. Go do something relaxing, and we will see you again in a couple weeks.
MIT_24900_Introduction_to_Linguistics_Spring_2022 | Lecture_5_Phonetics_Part_1.txt

[SQUEAKING] [RUSTLING] [CLICKING] [SIDE CONVERSATIONS] NORVIN RICHARDS: OK, so let's start back up. Today, let's see. We are transitioning from morphology into phonetics. So I hope you enjoyed morphology. It's not as if you won't ever do any morphology again. But that's it for lessons on morphology-- lectures, I guess. Today is phonetics, which means that today we begin making funny sounds at each other. So everybody limber up your vocal tracts. Let's see. I'm trying to remember if there's anything that I ought to announce. You remember, maybe, that problem set 1, which confusingly is your second problem set, is due on Thursday. Normally, it would be due on Tuesday. But because I am technologically challenged, it's due on Thursday. Speaking of being technologically challenged, I just figured out how to get the projector to project over there instead of in the middle so that I won't have to write everything twice running back and forth across the room. [APPLAUSE] Thank you. Thank you. Thank you. I'm going to mention this when I go up for-- my chair is trying to decide whether to give me a raise. I'll be like, big academic achievement of the year was figuring out the technique. So if anybody misses the old days, like you were amused watching me run back and forth, or this turns out to be too small or something like that, let me know. And we'll go back to the old days. OK. So when we speak, if we're speaking an oral language, if we're not signing, what we are typically doing is producing a flow of air, which typically comes out of your lungs, but not necessarily. We'll talk about that. And it gets obstructed in various ways in the vocal tract. And so one standard way of talking about different kinds of speech sounds is to talk about where in the vocal tract the flow of air can get obstructed and how. And that's what we're going to do. So first, we'll talk about where.
So one way of categorizing the various things that your vocal tract does to the airflow is by what's called place of articulation. That is, where in your vocal tract is the flow of air getting obstructed? So for example, there are what are called bilabial sounds. These are sounds which are made with both lips. This picture over here on the right is what's called the sagittal section. That is, it's a picture of someone's head cut in half so that you can see the stuff that's inside. And those arrows are meant to get you to imagine that this person is making a sound by putting their two lips together. So that's what you do for the sounds that are at the beginnings of words like "paint," and "bath," and "mouth," and well, "wipe," where your lips don't touch, but they both move. Yeah? Everybody feel free to confirm to yourselves in the privacy of your own mask that that is what you're doing when you make these sounds. Now next to the sounds-- so I have these words-- "paint," "bath," "map," and "wipe." And then next to them, I have these symbols in brackets. And they may not look like it, but these are extremely technical symbols. These are symbols of the International Phonetic Alphabet. So linguists have a system for writing sounds down so that we'll all know what kind of sound we're talking about when we're talking about sounds. And-- STUDENT: [SNEEZES] NORVIN RICHARDS: Bless you. A lot of the symbols of the International Phonetic Alphabet resemble letters of the English alphabet. So we have started here with symbols that should all look familiar. So the symbol for the sound at the beginning of "paint" is the letter p. Yeah. And so that's a symbol of the International Phonetic Alphabet. As we go along, we will be seeing weirder and weirder symbols from the International Phonetic Alphabet. So please enjoy it while it's easy. Yeah. So far, so good. Yep. There are also what are called labiodental sounds. Labiodental sounds involve your top teeth and your lower lip.
If you think about what you're doing at the beginning of a sound like "face" or "vase," what you're doing, at least if you're me, is bringing your lower lip pretty close to your upper teeth and obstructing the flow of air there so that there's some turbulence. Yeah? There are, I believe, no languages in which labiodentals are made with your bottom teeth and your upper lip. It's a little hard to do-- "face," "face." We'll talk about other kinds of places of articulation that English doesn't use, but I think that one just doesn't exist. And again, two symbols of the IPA, which again, are not going to be too hard for you to learn. Yeah, so the symbol for the sound of the beginning of "face" is an f. And the symbol for the sound of the beginning of "vase" is a v. Here we get our first two IPA symbols that are not letters of the English alphabet. These are interdental sounds. Interdental sounds are linguistically not hugely common. We have them in English. There are various dialects of Arabic that have them. Some of the Berber languages have them. But they're not all that common. These are the sounds of the beginnings of words like "thistle" and "this," where you're sticking your tongue between your front teeth and making air flow out. Yeah. "Th" [pronounced as in "thistle"], "th" [pronounced as in "this"]. English doesn't spell these differently, at least not reliably, right? So we spell both of these with a "th." That's one of the reasons the International Phonetic Alphabet is there. It's so that we can unambiguously talk about what we're talking about. So the sound of the beginning of "thistle" and the sound of the beginning of "this"-- they're not the same sound. We'll talk about the difference between them shortly. But at this point, maybe it's just clear they're not the same sound. So there are two different IPA symbols for them. The Greek letter theta is used for the sound at the beginning of "thistle." And that second letter is an old English letter. 
It's still used in Icelandic. It's sometimes called "edh," and it stands for the sound at the beginning of "this." Yep. OK. Then there are what are called alveolar sounds. If you put your tongue-- sorry, let me just talk first. If you put your tongue at the top of your mouth and drag it-- so put it first against your front teeth and then drag it backwards along the top of your mouth. If you're like me, you've got your teeth. And then there's this flat-ish space. And then it goes up, yeah? So from your teeth, you go back to this little plateau. There is this little plateau there right behind your teeth, this gum ridge. And then your mouth begins to go up higher, yeah? That ridge is called the alveolar ridge. And if you put your tongue there, then you are all set to make sounds like the sound at the beginning of the word "teeth," or "duck," or "nail." And for sounds like "sail" and "zoom," again, you're not actually touching that position. We'll talk more about the difference between "s" and "t" in just a second. But they're both alveolar sounds. Your tongue is pointing in the general direction of your alveolar ridge. Everybody have that sensation? You should all be exploring the insides of your own heads right now, yeah? OK. There's a type of sound that has been called alveopalatal. It's also called postalveolar. I'll call it postalveolar. If you think first about an "s" and then you think about a "sh," the "sh" at the beginning of "ship," if you go back and forth between them-- s, sh, s, sh-- you make a very soothing sound. Yeah? And maybe what you can feel is that your tongue is rocking back and forth. At least, that's what my tongue is doing. For "s," it's pointing at the front at my alveolar ridge. And then for "sh," the middle of my tongue curls up a little bit, goes a little further back. Is that the feeling people are having as they're going back and forth between these? Here's another new symbol. 
That's the symbol for the sound of the beginning of a "ship," and another new symbol. That's the sound in the middle of "azure," the "zh" sound. Both of those are postalveolar sounds. You'll sometimes see them called alveopalatal sounds. I'll try to remember to always call them postalveolar. Yeah. A little further back, there are what are called palatal sounds. These are even further behind the alveolar ridge, back where the roof of your mouth gets as high as it's going to get. And the one palatal sound that we have in English is the "y" sound at the beginning of "year," yeah? And here's the first IPA symbol that is deliberately designed to confuse you. The IPA symbol for that sound, the "y" at the beginning of "year," is a "j," yeah. This is because the IPA was not invented exclusively by speakers of English, right? So it was invented also by speakers of languages like German, in which they use this letter for this sound. "Ja," for example, the German word for "yes," spelled J-A, yeah? Great. OK. The letter y is used for something else. We'll get to it. OK. And then continuing our tour of the mouth, so we're working our way backwards through the mouth, there are what are called velar sounds. In the velar sound, the body of your tongue is up against what's called the velum, which is the soft tissue at the back of your mouth that we're going to be hearing more about in a second. It's responsible for partitioning your mouth from your nose-- your oral cavity from your nasal cavity. So that's the place that your tongue is touching when you say the sounds at the beginnings of words like "kernel," and "caught," and "gone," and when you make the nasal sound at the end of a word like "sing." So if you think about where your tongue is, it's up in the back there. Does that sound right? Yeah? OK, cool.
And then going even further down, further down in your throat, you've got the vocal cords, the glottis, this space that's down there in your throat around your larynx, and your vocal cords. Or they're sometimes called your vocal folds. This process that you've got back here in your throat for closing off the air can be closed to make what's called a glottal stop. English doesn't make a huge amount of use of the glottal stop. But it's what shows up at the beginnings of words like "uh-uh," that catch that you're getting in your throat. "Uh-uh," meaning "no," right? Or "uh-oh," meaning "oh, dear." Yeah? That catch that you're getting in your throat-- that's a glottal stop, yeah? There are languages that are into glottal stops that have lots of them. English is not. And the way you make that is by basically slamming your vocal folds together to close off the flow of air. You can also hold them close together and let the air whistle past. That's how you make an "h," right? As in, what's my word up there-- "help," yeah. So you're just slightly abrading the flow of air. OK. Yeah, there are questions about any of that? So that was just a quick tour through the vocal tract. Yeah? STUDENT: Where's, like, "chuh"? NORVIN RICHARDS: Oh, we haven't gotten to "chuh." Yeah, but good question. We can think of what that is. A "ch" sound at the beginning of a word like "church," also at the end of a word like "church," is a dynamic sound, right? Ch-- I think your tongue is in motion as you are making that sound. Yeah. So it makes one-- it completely stops the flow of air. And then it gradually peels back and allows the air to flow out. You can think of that first thing it's doing as-- I'm on completely the wrong slide-- as being like an alveolar stop, right? So it's like a "t." And then as it peels back, "ch," you end up with something like a "shuh," like a postalveolar. So it's a pretty complicated sound. And we'll talk about it. Yeah, that's a really good question. Other questions? 
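The tour of places of articulation so far can be summarized as a lookup table. Below is a hypothetical Python rendering of it, keyed by IPA symbol; the set of entries and example words follows the lecture's survey of English and is deliberately not a complete IPA chart:

```python
# English consonants surveyed in the lecture, keyed by IPA symbol, with
# place of articulation and an example word. Illustrative only; the
# coverage tracks the lecture, not the full IPA chart.

PLACE_OF_ARTICULATION = {
    "p": ("bilabial", "paint"),
    "b": ("bilabial", "bath"),
    "m": ("bilabial", "map"),
    "f": ("labiodental", "face"),
    "v": ("labiodental", "vase"),
    "θ": ("interdental", "thistle"),
    "ð": ("interdental", "this"),
    "t": ("alveolar", "teeth"),
    "d": ("alveolar", "duck"),
    "n": ("alveolar", "nail"),
    "s": ("alveolar", "sail"),
    "z": ("alveolar", "zoom"),
    "ʃ": ("postalveolar", "ship"),
    "ʒ": ("postalveolar", "azure"),
    "j": ("palatal", "year"),  # note: IPA uses "j" here, not "y"
    "k": ("velar", "kernel"),
    "g": ("velar", "gone"),
    "ŋ": ("velar", "sing"),
    "ʔ": ("glottal", "uh-oh"),
    "h": ("glottal", "help"),
}

def place(ipa_symbol):
    """Return the place of articulation for an IPA symbol in the table."""
    return PLACE_OF_ARTICULATION[ipa_symbol][0]

print(place("j"))  # palatal -- the sound at the start of "year"
print(place("ʃ"))  # postalveolar
```

Notice that place alone underdetermines the sound: "t," "d," "n," "s," and "z" are all alveolar, which is exactly why the lecture goes on to voicing and manner.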
Somebody had a question? Yeah. STUDENT: What's the difference between a glottal stop and an ending consonant? NORVIN RICHARDS: I'm sorry. Say it again? STUDENT: What's the difference between a glottal stop and, say, an ending consonant? NORVIN RICHARDS: An ending consonant? STUDENT: Yeah. NORVIN RICHARDS: What kind of ending consonant did you-- STUDENT: [INAUDIBLE] NORVIN RICHARDS: Oh, oh, oh. Yeah, so that's a really good point. So glottal stop-- take a word like "put," right? If you say "put," your tongue-- at least, my tongue-- touches the alveolar ridge. It goes where I said it would, yeah? But you're right. You also make a glottal stop, at least the way I just said it. Put, yeah? You could contrast that. If you didn't make the closure at the alveolar ridge, it would sound like "puh." "Puh," right? Which is not an English word for me. There are dialects of English in which that's something you would say, right? "Puh." There are places in English where things that we write as other kinds of sounds actually are, in fact, glottal stops, at least in my English. So the difference for me between "can" and "can't"-- my wife, who is Japanese, is driven crazy by the difference between "can" and "can't" because they're virtually the same, right? It's very hard for her to figure out, often, whether I'm saying "can" or "can't." Because the difference between them is really mostly just-- "can't" is really just "can" plus a glottal stop. I'm not saying "can't," usually, unless I'm being very emphatic. Yeah, good question. Yes? STUDENT: Is it cool for the same sound to be heard audibly but made with a different part of your mouth? NORVIN RICHARDS: Yes, yes, there are things like that. Do you have something in mind? STUDENT: No. NORVIN RICHARDS: Oh, OK. So we haven't gotten yet to "r." We'll get to "r," eventually. But actually, people discovered at a certain point-- so people investigate this kind of thing in all kinds of ways.
One is the kind of thing we're all doing, where we just sit and say, what is my mouth doing? Hmm. There are other kinds of work where people classically paint the roof of your mouth with stuff that will come off. And then you have people produce a sound. And then you stick a camera in their mouth and take pictures and see which parts of the paint came off. You stick tubes down people's noses to measure airflow. You do all kinds of horrible invasive things. These days, people do a lot of MRIs. I'm going to put on the website a couple of websites that have charts of all of the sounds that we're going to talk about plus many more together with MRIs of the insides of people's mouths making these sounds so that you can see the anatomy that's involved. You won't just have to think about it. One of the things people have discovered as they're doing this kind of work is that people just have different ways of producing "r," that there are just different kinds of things you can do with your anatomy to make an "r" sound. And that's probably related to the fact that "r" is one of the kinds of sounds that people classically have trouble with. If you've been around small children, for example, it's standard for them to not quite get "r" right and to say something that sounds more like a "w" at a certain stage. So I'm sorry. The short answer to your question is yes. And "r" might be an example. Yeah. Good questions. Any other things we want to talk about? OK. OK. So we went through that. OK. So now I already alluded to the fact that I had these slides that had various places of articulation on them. But of course, each slide had multiple sounds on it. And so a place of articulation is obviously not the whole story. Here's another part of the story. It's what's called voicing. So if you think about an "s" and a "z," those are both alveolar sounds. Your tongue is reaching toward the alveolar ridge. But they're not the same. What's the difference between them?
If you think about-- if you go back and forth between them, s, z, s, z, s, z, you can feel a buzzing. [CHUCKLING] And if you put your hand right here-- sorry, if you put your hand right here, right here on your throat, then when you make the "z" sound, you can find the source of that buzzing. It's right there in your larynx, yeah? What's happening when you make a "z" is that you're holding your vocal folds across the flow of air in such a way that they will flap in the breeze as the air goes by. They'll vibrate, yeah? It's like whistling with a blade of grass, or playing a reed instrument, right? It's getting something to vibrate really fast. And that's what you're hearing. That's that buzzing sound that you're hearing and feeling if you put your hand right here when you're doing a "z." We say that "z" is voiced and that "s" is voiceless. So it's a distinction in voicing. Yes? STUDENT: Just the fact that [INAUDIBLE]? NORVIN RICHARDS: Yeah, so the difference between "cats" and "dogs"-- yeah. So exactly. I was going to get to that later, but yes, that's exactly it. So what's the difference between a cat and a dog? Well, as far as a phonologist is concerned, the difference is that "cat" and "dog" end in sounds that differ in voicing. Is "t" voiced or voiceless, the "t" at the end of "cat"? Voiceless, yeah. And the "g" at the end of "dog" is voiced. And you're choosing "s" or "z." You're putting the sound that agrees in voicing with the consonant that's at the end. That's exactly right. Yeah? STUDENT: So then is it possible to whisper "z"s and "g"s? Are they-- NORVIN RICHARDS: Yeah. So this is a really good question which I was hoping not to get asked quite so soon. But no, that's good. OK. So she just asked, what are you doing when you're whispering? So if you think about what you're doing when you're whispering, first of all, (WHISPERING) your vocal cords are not vibrating at all. (SPEAKING NORMALLY) Your vocal cords are not vibrating at all at any point, right?
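[The voicing-agreement rule for the "cats"/"dogs" plurals that came up a moment ago can be sketched in a few lines of code. This is an illustration added here, not part of the lecture materials: the set of voiceless sounds is a simplified fragment, and the sketch ignores complications like the extra vowel in plurals such as "horses."]

```python
# A rough sketch of English plural voicing agreement: the plural suffix
# is voiceless /s/ after a voiceless final sound, and voiced /z/ after a
# voiced one.  The inventory below is a simplified, illustrative
# fragment, not the full English sound system.
VOICELESS = {"p", "t", "k", "f", "th", "s", "sh", "ch"}

def plural_suffix(final_sound):
    """Pick the plural suffix that agrees in voicing with the final sound."""
    return "s" if final_sound in VOICELESS else "z"

print(plural_suffix("t"))  # "cat" ends in voiceless "t", so the plural gets /s/
print(plural_suffix("g"))  # "dog" ends in voiced "g", so the plural gets /z/
```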
So that should mean that you're not making the distinction between "s" and "z," or between "f" and "v," or-- what's my other example here-- "th" [pronounced as in "thistle"] and "th" [pronounced as in "this"], right? So the difference between "bath" and "bathe" is that the "th" [pronounced as in "this"] is voiced and the "th" [pronounced as in "thistle"] is voiceless, yeah? But that doesn't seem to be true, right? If you whisper "safe" and "save"-- (WHISPERING) "safe," "save"-- (SPEAKING NORMALLY) you have the feeling that you can hear the difference between them, yeah? I think if you were to whisper one of these-- so do a controlled experiment. Go back to your dorm. Whisper to your dorm mate, (WHISPERING) "safe." (SPEAKING NORMALLY) And then find out what they think you said. [CHUCKLING] Maybe warn them in advance of what you're going to do. [LAUGHS] Oh, boy. The complaints. Yes. So there's got to be something else going on, right? Let's do this experiment again, though. So it's easiest for you, actually, with "f" and "v." So let's do this thing again. If we go back and forth between "f" and "v"-- f, v, f, v, f, v, which of them is voiced? The "v," right? The "v" is voiced. OK. Now do it again, but whisper-- f, v, f, v, f, v, f. Does anybody feel a difference between "f" and "v"? Not here, right? Yeah? STUDENT: I guess the "v" is more-- there's more air [INAUDIBLE] NORVIN RICHARDS: So yes? STUDENT: Also, my mouth is opening slower, I think. NORVIN RICHARDS: Ah-- f, v, f, v. Yeah, there might be a difference in the aperture of your mouth. I think you're right. And I think you're right, too, that there's a difference in how fast the air is flowing. For me, I actually have the opposite of your feeling. For me, f, v, f, v-- for me, there's more air when there's an "f" and less air when there's a "v." Yeah? STUDENT: I don't have it with the "v," but when I say "v," my lips are a little narrower. NORVIN RICHARDS: Ooh. F, v, f, v. Bah. Yeah. 
My jaw is moving as I do that. And I think that is affecting what my lips are doing. They're coming together more. So I think what's happening-- maybe this is an attempt to answer your question-- is this. I just said sounds can be either voiced or voiceless. If they're voiced, what it means is that your vocal cords are vibrating. And I made it sound like the way you do that is, well, you stick your vocal cords into the flow of air, right? And you make them vibrate. But I think maybe what we're learning is that you do some other things, too, to optimize the flow of air so that you will get a good vibration going-- maybe if the flow of air is too fast it won't work, and maybe we're learning things about the aperture of your mouth as well. There's a way of making sure that the pressure, the rush of air that you're getting on your vocal cords, will make them vibrate in just the right way. And you're manipulating all of that stuff without thinking about it. And you can still hear it when you whisper. So when you whisper, you're not engaging your vocal cords, but you're doing all the other stuff. And that's what you're using to hear the difference. There's experimental work on this. This is the kind of thing people try to figure out. Yeah, really good question. OK. OK, so your vocal cords can either be vibrating or they can not be vibrating. So you have voiced sounds and you have voiceless sounds. So "s" and "z" and "t" and "d" are all alveolar. But "s" and "t" are voiceless. And "z" and "d" are voiced. Does that all sound right? Is anyone upset by any of that? Disturbed? Alarmed? Hungry? Yeah, anything? OK, good. So it's back to the Polish plurals. So we saw before, we convinced ourselves, or I convinced myself-- and I tried to take the rest of you with me as collateral damage-- that Polish has words that end in "k" and words that end in "g" underlyingly. But it also has a rule that changes "g" to "k" at the ends of words. That was Polish, right?
OK, but it's not just "g." So we can see some other pairs of words. I don't have any more minimal pairs for you. But you can see there's the same general tendency: if we look at singulars in Polish, they can end in sounds like "k," or "b," or "t," or "s," right? That's what we're seeing in these pairs. And when you pluralize them, some nouns that end in "p" still end in "p" when you add the "e," the suffix, like corpse. But some change the "p" to a "b," yeah? And the same deal for these other ones, right? So what we're learning is it isn't just that "g" becomes "k" at the end of a word in Polish. There's this more general thing. What's the more general thing? What's going on here? What's the difference between "g" and "k"? So they're both velar. Yeah? STUDENT: One's going to be voiced. NORVIN RICHARDS: Yeah. Which one is voiced? STUDENT: The "g." NORVIN RICHARDS: The "g," yeah. So what's happening is that the "g," which was voiced, is becoming voiceless at the ends of words in Polish, yeah. What's happening with "b" and "p"? Joseph? STUDENT: Voiced "b" becomes voiceless. NORVIN RICHARDS: Yeah, the voiced "b" becomes the voiceless version, which is "p," yeah? Those are both bilabial sounds. They involve both your lips. Yeah. Yes? STUDENT: How is "b" voiced? NORVIN RICHARDS: Sorry? STUDENT: How is "b" voiced? NORVIN RICHARDS: How is "d" voiced? Oh, "b." STUDENT: Yes. NORVIN RICHARDS: "b." Buh. Buh. You're raising a good point. There's a reason that I started with sounds like "s" and "z," and "f" and "v." Because you can go s, z, s, z for as long as you have breath, right? Whereas buh-- there's a limit to how long you can "b," yeah? We just said the way voicing works is that you've got air flowing across your vocal folds and making them vibrate, right? And for a "z," you can see how that would work. So the air just flows. For a "b," well, the air only has so far to go, yeah? That's one reason you can't keep a "b" going for very long.
All that air-- it has to flow past your vocal folds to get them to vibrate a little bit. And then it gets to your mouth. And then it has to stop. So there are other departments that are better at this than I am. But the air pressure in your mouth is going to build up past a certain point. You won't be able to keep doing that. But you do it for as long as you can. That's the sense in which it's voiced. Yeah, good question. Yeah? OK. So yeah, what's happening in Polish is not just "g" becomes "k." It's that voiced sounds are becoming voiceless. So "z" becomes "s." "d" becomes "t." "b" becomes "p." And "g" becomes "k," yeah? This is sometimes called final devoicing. And it's a cross-linguistically quite common phenomenon. OK. So yeah. All right. So we talked about place of articulation. And we've talked about voicing. Now we need to talk about another dimension for categorizing sounds, which is called manner of articulation. So think about "s" and "t." They're both alveolar. And they're both voiceless. But they're different from each other. And the way we distinguish them is via what we call manner of articulation. So they're both voiceless alveolar sounds, but "t" is a stop. And "s" is what's called a fricative. So stops are also called plosives. I promise never ever to call them plosives. I will always call them stops because that's what I grew up calling them. But you will sometimes see things written in which they're called plosives. In homeworks, if you ever need to write about them, feel free to call them either one. Doesn't matter. I like calling them stops because they're named after the fact that, well, they stop the flow of air. That's what they do. That's what a stop is. And so the air is coming out of your lungs. And it gets stopped. Fricatives like "s" are sounds in which you don't stop the flow of air, but you narrow some aperture enough to create turbulent air flow, which you hear as a hissing sound.
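[The Polish final-devoicing rule from a moment ago can be sketched as a small function. This is an illustration added here, not part of the lecture materials: words are represented schematically as lists of sound symbols, and the voiced/voiceless pairs are only the four from the lecture.]

```python
# A minimal sketch of Polish final devoicing: a voiced sound becomes its
# voiceless counterpart at the end of a word.  The pairs are the ones
# named in the lecture (b/p, d/t, g/k, z/s); real Polish has more.
DEVOICE = {"b": "p", "d": "t", "g": "k", "z": "s"}

def final_devoice(sounds):
    """Apply final devoicing to a word given as a list of sounds."""
    if sounds and sounds[-1] in DEVOICE:
        return sounds[:-1] + [DEVOICE[sounds[-1]]]
    return sounds

# An underlying form ending in "g" surfaces with "k" word-finally, but
# the "g" stays put when a suffix follows, since it's no longer final:
print(final_devoice(["r", "o", "g"]))       # ['r', 'o', 'k']
print(final_devoice(["r", "o", "g", "i"]))  # ['r', 'o', 'g', 'i']
```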
So sounds like "s" and "sh" and "f" and "th"-- these are all fricatives, yeah? OK? So for "t" and "d," the airflow is stopped. For "s" and "z," the airflow is restricted, but is not stopped. So you hold your tongue close to the alveolar ridge, but you allow air to keep flowing through. This is the conversation we just had. That's why an "s" you can keep going for as long as you have breath in your lungs, whereas a "t" you can't keep executing. OK. OK. So now we have these three ways of categorizing these kinds of speech sounds-- place, and manner, and voicing. Place of articulation, manner of articulation, and voicing. So there are a bunch of places of articulation over there on the left. And the sounds that we have mainly talked about have been either stops or fricatives. And then each of these places on the table-- there's a pair of sounds. And I hope I managed to do this right. Yes, it looks like I did. In all of these pairs, you've got both a voiced and a voiceless sound. Which one is first? STUDENT: The voiced sound. NORVIN RICHARDS: The voiced sound, yeah. So in all of those, there's a pair. So you've got "zh" and "sh," for example, the postalveolar fricatives. And the "zh" is the voiced one. And the "sh" is the voiceless one. Yeah? OK, good. All right, so now new class of sounds. This is a new manner of articulation. So we've got "d," which is a voiced alveolar stop, and "z," which is a voiced alveolar fricative. And now we need to think about "n." Well, "n" is voiced-- nnnnn. And it's a stop in the sense that you are stopping the air from flowing through your mouth. If you think about what's happening inside your mouth during an "n," your tongue is jammed against your alveolar ridge-- nnnnn-- just as it would be for a "t." So I love the sound of phonetics in the morning. It's great. [CHUCKLING] So when you're doing an "n," you've stopped the flow of air in your mouth. And so "n" is technically a stop, right?
It's called a stop because there's no air coming out of your mouth. But there's air coming out somewhere, right? It's coming out of your nose, yeah? "nnnn." And you can tell because if you hold your nose, [demonstrates] you will not be able to make an "n," yeah? So the reason that you can keep an "n" going, again, for as long as you have breath, is, well, the air has someplace to go. It's going out your nose. What's happening when you make an "n" is that you are lowering the velum, which is this doohickey in the back of your mouth that partitions your nose from your mouth. By lowering it, you're allowing air to flow through your nose. So for "t" and "d," the airflow is stopped at the alveolar ridge. For "n," the airflow is also stopped at the alveolar ridge. It can't go through the mouth. It's stopped right there. But it goes through your nose. This is a nasal stop, a voiced alveolar nasal-- people often just call them nasals, because nasal fricatives are messy-- actually, probably not possible. OK? So let me go back. "n" is an alveolar nasal. What would a bilabial nasal sound like? "mmmm," yeah? That's an "m." You close your lips and allow the air to flow through your nose. And a velar nasal? "nnnng." Remember that velar means the place of articulation for a "k" or a "g." So you make a "k" or a "g" sound-- kuh-- and then just let air flow through your nose-- "nnnn." That's the sound at the end of a word like "song" or "king"-- a velar nasal. Yeah? English doesn't allow words to begin with velar nasals. But there are languages that do. So Tagalog, for example-- the word for "now" is [TAGALOG]. So it starts with a velar nasal. One of the entertaining things about learning Tagalog is learning how to make sounds that start with velar nasals. If you're an English speaker, you're not used to it. Yeah? OK. So there's the table again. So we've got stops and fricatives and nasals, nasal stops. Things can be voiced or voiceless.
And we've got all those places of articulation over there on the left. Are there questions about any of this? Is anybody looking at this and saying, whoa, this table has grown out of my ability to keep up? Yeah? STUDENT: Question about the [INAUDIBLE]. NORVIN RICHARDS: Yeah. STUDENT: -- English doesn't allow? NORVIN RICHARDS: English doesn't allow words to begin with velar nasals. We don't have any words that start with a sound that's at the end of words like "song." We don't have words like [NON-ENGLISH]. English doesn't have words like that. There are plenty of languages that do. Tagalog is one. Cantonese is one. There's a bunch of others. But English doesn't have that. Yes? STUDENT: Well, I noticed there's not "j" in brackets for [INAUDIBLE] palatal? NORVIN RICHARDS: Oh, yeah. STUDENT: Actually, I don't actually know where this "j" would go. NORVIN RICHARDS: Yeah. So we haven't yet gotten to the kind of sound that "j" stands for-- the sound at the beginning of "year." You're absolutely right. There's a gap there. And I've put the palatal line there partly to get you to ask exactly that kind of question. So one of the points of this table-- it's like when the periodic table was first invented, I guess. So we have the system for categorizing sounds. And now we get to look at it and say, well, wait. Why isn't there anything there or there? And what would it be like if it were there? We'll do a little bit of that in a second. But yeah, you're right. So we haven't yet talked about the kind of sound that the IPA symbol "j"-- the sound we usually write with "y" in English, the sound at the beginning of "year"-- we haven't put that on the table yet. You're right. It's not a stop, right? It's also not a fricative because you're not making any turbulence in the airflow. And it's not nasal. So it's another kind of thing. We'll get to it. Good question. Other questions about this? OK, all right. So yeah, OK. So this way of classifying sounds leads us to wonder about gaps. Yes.
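[The three-way classification we've been building can be written down as a lookup table, which makes the "gaps" question concrete. This is an illustration added here, not from the lecture materials: the inventory below is a small fragment of the English consonants on the chart, not the whole thing.]

```python
# A sketch of the place / manner / voicing classification as a feature
# table, using a small illustrative fragment of English consonants.
# Empty cells in the table are the "gaps" discussed next.
ENGLISH = {
    ("bilabial", "stop", "voiceless"): "p",
    ("bilabial", "stop", "voiced"): "b",
    ("bilabial", "nasal", "voiced"): "m",
    ("alveolar", "stop", "voiceless"): "t",
    ("alveolar", "stop", "voiced"): "d",
    ("alveolar", "fricative", "voiceless"): "s",
    ("alveolar", "fricative", "voiced"): "z",
    ("alveolar", "nasal", "voiced"): "n",
    ("velar", "stop", "voiceless"): "k",
    ("velar", "stop", "voiced"): "g",
    ("velar", "nasal", "voiced"): "ng",
}

def gaps(places, manners, voicings):
    """List (place, manner, voicing) combinations with no English sound."""
    return [(p, m, v)
            for p in places for m in manners for v in voicings
            if (p, m, v) not in ENGLISH]

# The bilabial fricatives come out as gaps in English, exactly the cells
# the lecture turns to next:
print(gaps(["bilabial"], ["fricative"], ["voiceless", "voiced"]))
```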
Thanks, Norman. Yeah, you set that up nicely. So let's think about some of these gaps. So English, for example, has bilabial stops, "p" and "b." And it has a bilabial nasal, an "m." But it doesn't have a bilabial fricative. What would a bilabial fricative sound like? Fh, fh, which sounds like blowing out a candle, right? Open your lips just enough to let some air come out and then blow. STUDENT: [TRILLS] NORVIN RICHARDS: Ah, that's a bilabial trill. We will get to that. [LAUGHS] Either that or you were just having fun. I'm not sure. There are languages that have that. OK. So bilabial fricative-- English doesn't have a bilabial fricative. But there are languages that do. Japanese does, for example. So when people write Japanese in the Roman alphabet-- like when they write the name of this mountain-- they'll write it with an "f." But in Japanese, that's not an "f." It's a bilabial fricative. It's "Fhuji," "Fhuji." If you learn Japanese, you must learn to pronounce the "f" bilabially rather than labiodentally. So in English, we have a labiodental "f" with our lower lip against our teeth. In Japanese "f," your teeth are not involved. It is only your lips, yeah? Yeah? So this bilabial fricative, fh-- is this a voiced or a voiceless bilabial fricative? STUDENT: Voiceless. NORVIN RICHARDS: Voiceless. Fh, fh. What would it sound like if it were voiced? Vh, vh. And that exists, too. There are dialects of Spanish, for example, that have that between vowels, if you have the letter B between vowels in words like "abuela." That "b" has that sound-- a voiced bilabial fricative. Moving across the chart, I've got a nasal stop there. Is that nasal voiced or voiceless? Voiced. "m." What would it sound like if it were voiceless? I should step back. [EXHALES SHARPLY] Yeah, English doesn't have that.
[EXHALES SHARPLY] But there are languages out there that do, languages like Hmong, for example, which is a minority language spoken in-- actually, it's quite a large minority language spoken in Vietnam. Hmong has that kind of sound. Tibetan has that kind of sound, too. OK. Let's skip palatal and do velar. We've got velar stops in English-- "k" and "g," kuh and guh. And we've got velar nasals, nnnn. What would a velar fricative sound like? Kh-- yes? She's alerting me that she's not just hissing at me. She's making a velar fricative. Yes. Kh, kh, right? Is that a voiced or a voiceless velar fricative? STUDENT: Voiceless. NORVIN RICHARDS: Voiceless, yeah? Kh. Yeah? English doesn't have that. But there are languages that do-- languages like German, say. That's one of the sounds that they write with the letters C and H at the end of composers' names, like Bach, right? Johann Sebastian Bach. His name ends with a velar fricative. Russian has this sound, yeah? It shows up in the names of authors like Chekhov, that kh sound. What would a velar fricative sound like if it were voiced? Lhg. Lhg. And English doesn't have that, either. But there are languages that do. Again, there are dialects of Spanish where if you have a "g" between vowels, it'll get this kind of sound, in words like "agua," yeah? OK. So we've talked about various kinds of nasal stops. So an "m" is a bilabial nasal, where you close the flow of air at your lips and allow the air to flow through your nasal cavity by lowering your velum. So you get mmmmm. Or you can stop the flow of air in other places. You can have an alveolar nasal, right? "n." You can have a velar nasal-- "ng." What would a glottal nasal sound like? Trick question. You would need surgery. Yeah. So the way nasal stops work is you're stopping the flow of air somewhere in your vocal tract. But you're allowing the air to flow through your nasal cavity, right? That's what a nasal stop is. Here, let's get back to one of these sagittal sections.
So that "n" there-- what you're doing is you're stopping the flow of air there at the alveolar ridge, right? But you're lowering the velum to let the air flow through. Or an "m"-- you'd be closing the flow of air at your lips and lowering your velum to let the air flow through. Or velar nasal, "ng," like at the end of "king," you're making a closure at the velum. But you're also lowering the velum and letting the air flow through. A glottal nasal-- you'd have to stop the flow of air down there at the vocal folds and let the air go through your nose. But if you're stopping it down there, it can't be going through your nose. So you would need, again, as I say, probably there are unethical surgeons who would modify you so that you could make glottal nasals. You'd need extra ways to get air to flow through your nasal cavity. I'm not recommending this, by the way. OK. All right. Cool. So those are some of the gaps. Oh, and I flagged a gap that I didn't talk about. Sorry, a couple of gaps. So English has interdental fricatives-- thuh and thuh. And as I said, those are cross-linguistically rare, which is why if you're trying to do an accent from various kinds of places, for example, in Europe, one of the things you do is replace your th sounds with other kinds of sounds. So if you're doing a French accent, you replace your "th" sounds with "z"s, right? Or if you're doing a German accent, you replace your "th" sounds with "t"s, yeah? English has interdental fricatives. And it has alveolar stops, "t" and "d." There are languages out there that have what are called dental stops. So the alveolar stop, again, involves your tongue touching the alveolar ridge and stopping the flow of air there. For a dental stop, your tongue is touching your teeth. So you're not saying [NON-ENGLISH], you're saying [NON-ENGLISH]. English does not have those. But there are languages out there that do.
If you're studying another language, this is the kind of thing to think about because sometimes, your teacher will not be thinking about this. But you should be asking yourself, is this sound, the sound that sounds like a "t"-- is it an alveolar "t" or is it a dental "t"? Part of your job, if you're learning Tagalog, for example, is to learn to make dental "t"s instead of alveolar "t"s because that's what they've got. Yeah? STUDENT: If you're thinking about linguistic "t," but let's say someone doesn't-- NORVIN RICHARDS: Have teeth? Yes. Yeah? STUDENT: How would that work? NORVIN RICHARDS: Well-- STUDENT: Would it be a [INAUDIBLE]? NORVIN RICHARDS: [LAUGHS] I believe that is what people do, yeah. So you make a closure. We can talk about interdental fricatives, right? So somebody who doesn't have teeth and wants to do a thuh-- what do they do? I think their job is to create a turbulent airflow with their tongue between where their teeth would be if they only had teeth, right? And I think there's something similar going on with dental stops. But this is exactly the kind of thing that experimental phoneticists try to figure out-- they do many things, some of them creepy, to try to figure out what's going on in your vocal tract exactly as you do this stuff. There's all this work on what people do to compensate for various kinds of obstructions in the vocal tract. So you have people bite on a block. And you put something solid in their mouth. And then you're like, OK. So now what will you do with your tongue to make the sounds as best you can? All kinds of weird stuff people do. Cool stuff. Yep. OK. Where was I with all that? OK, yeah. So yeah, that's another set of IPA symbols. These are your first IPA diacritics, I guess, those little square doohickeys under the "t" and the "d" there. Those indicate that that particular "t" and "d" are dental. There are even languages out there that have both dental "t"s and alveolar "t"s.
So the Dravidian languages of India are famous for having those. And a lot of the Aboriginal languages of Australia are really rich in places of articulation. So is Dinka, come to think of it-- this Nilotic language that I mentioned briefly that has tone as a way of marking case. These are all languages that have lots and lots of places of articulation, including both dental and alveolar, but that-- the languages of Australia, at least-- often don't make voicing distinctions. So they have stops, but they don't distinguish voiced from voiceless. So they have places of articulation and lots of them, and nasals in all those places as well, but no distinction between voiced and voiceless. Yeah. OK, any other questions about this chart before we zoom past it? Like I say, on the website, there will be a link to charts that will look, hopefully, like this, more official charts by the IPA, which will have sound files so that you can listen to trained phonologists making the sounds. And also, at least one website, which I hope is still up, has MRIs so that you can watch the inside of a person's vocal tract as they make the sound. OK. Now we have been talking about parts of the vocal tract that English uses. Sometimes, it doesn't use them for the same things other languages do. So we have teeth. We use our teeth for interdentals, but we don't make dental stops. But there are places in the vocal tract that English just does not use. And yet other languages do. Here are a couple. There are what are called retroflex sounds. These are sounds in which the tip of your tongue is on your palate. So instead of a tuh, you're making a cuh-- [NON-ENGLISH]. So your tongue is curled back a little bit further than it would be for a "t." And it's making a closure, if you're making a stop, right there. So you can make stops there-- [NON-ENGLISH] or [NON-ENGLISH]. You can make fricatives there, like [NON-ENGLISH] or [NON-ENGLISH].
And you can make nasals there, like [NON-ENGLISH], yeah? Retroflex sounds are very popular in India, and Australia, and Indonesia. They're all over the place. Yeah? STUDENT: Um, can there be-- there's a retroflex lateral, too, right? NORVIN RICHARDS: [NON-ENGLISH] Yeah, mm-hmm. Yeah. Well, we'll get to laterals, but yes. Yes, there is. Yeah. Yeah. OK, those are retroflexes. Uvulars are kind of like "k" except more so. So for "k," the back of your tongue is hitting the back of your mouth. It's touching your velum. That's a "k" sound-- [NON-ENGLISH]. For a uvular, your tongue goes further back. And it gets at or near your uvula, which is the little doohickey that hangs down there at the back. That's your uvula. So this is a sound like [NON-ENGLISH], or [NON-ENGLISH], or [NON-ENGLISH], or [NON-ENGLISH], or [NON-ENGLISH]. These are all uvular sounds. We do not have these in English, at least when we're feeling well. But there are languages out there that do. You're working on a problem set that involves the language Inupiaq. And that letter "q" is a symbol for a uvular stop. Inupiaq has a uvular stop. It has one at the end of the name. The "inu" at the beginning means "people." And "piaq" is a suffix meaning something like real, or normal, or regular, right? So the Inupiaq are the normal people. The rest of us are abnormal, yeah. So yeah. Uvulars-- reasonably popular. Let's see. They're all over the Indigenous languages of the Americas. They have them in Arabic. Yeah, they're not exactly rare. But English does not have them. The uvular fricatives-- the voiceless fricative, the "kh," is found, but it's not especially common in Europe. The "gh"-- the voiced uvular fricative-- is one of the ways to pronounce an "r" in languages like French and German, right? So if you're pronouncing the French word for "red," one of the ways to do it is with this fricative, to say "rouge"-- "rgh." That "rgh" sound is a voiced uvular fricative. There are other ways.
You can also do a uvular trill-- [TRILLS]---- where you get your uvula to flap in the breeze, but not everybody does that. Yep. Questions about that? Accents? OK. And then there are also pharyngeals. Pharyngeals involve constriction near the pharyngeal wall. Arabic has these. The Berber languages have these. These are like, "ah, hah, ah, hah," or "ah, ah." You're getting the back of your tongue to get against the back of your vocal tract. Does anybody here speak Arabic? OK, so if you know anyone who speaks Arabic, get them to speak to you for a while and you'll get to hear them saying this. As I say, there will be sound files where you'll get to hear people making these. So English doesn't have these. But there are languages that do. There are pharyngeal fricatives. Those are both pharyngeal fricatives. OK, so slightly larger chart with more symbols on it, including some of the ones we've talked about-- pharyngeals, uvulars, retroflexes. And for some reason, the dental stops are still red. I don't know why. Have to fix that. Yep. This chart. OK. OK. Now people keep asking me about sounds that I've been carefully avoiding, so let's talk about them. There are what are called approximants. Approximants are not stops, and they're not fricatives, and they're not nasals. They involve your articulators vaguely gesturing towards each other in some part of your vocal tract, not enough to make-- definitely not making contact, and not enough to cause any turbulence in the airflow. So if you think about a "w," let's say, in the middle of a word like "away," that's not a stop. And it's not a fricative. And it's not nasal. It's bilabial. You can feel your lips engaging as you make the "w" in the middle of "away." It's a bilabial sound. But it's not a bilabial stop or a fricative. It is a bilabial approximant, yeah? All right. 
So similarly for the "y" sound at the beginning of "year" or "yard," and the "l" and the "r" at the beginning and end of, I don't know, "layer" or "rail," yeah? Those are all approximants, yeah. They are sometimes divided into glides and liquids. And I'm hoping that nobody will ask me how you know whether something is a glide or a liquid. Are you about to ask me how you know whether something is a glide or a liquid? [LAUGHTER] Go ahead. Is that what you're-- STUDENT: Well, yeah. NORVIN RICHARDS: Yes, OK. All right, fine. [LAUGHS] So a glide-- sometimes, it's as though a glide is an approximant that if you were to hold it longer, it would be a vowel. So a "w"-- if you just freeze yourself in "w" space, you're making an "oo," right? A "w" is like an "oo" sped up. We're going to talk about vowels later. But if you think about what an "oo" is, it's a held version of a "w." And similarly, a "y" sound, yuh, is a sped-up version of an "e" as opposed to an "r" or an "l," which are just something else. Because I have not given you a good way of distinguishing glides from liquids, you can trust me that I will never ask you to distinguish them in a way that will make a difference for grades or anything like that. You will never see me saying, no, you're wrong. "That's not a glide. It's a liquid."-- I may say that. But there won't be minus 5 next to it. Yeah? STUDENT: If the "w" and the "y" sounds are both just shortened vowel sounds, then why is it necessary to have these as separate symbols? NORVIN RICHARDS: Yeah. STUDENT: Just like the "ch" having two-- NORVIN RICHARDS: Having two letters. That's a really good question. So why don't we just use the letter "u" for the bilabial glide, for the "w"? And let me see if I can come up with a good answer to that. Eventually, we're going to get to-- so far, all we're doing is talking about sounds individually. And eventually, we're going to start talking about sequences of sounds. 
And we're going to want to be able to talk about restrictions on the ways in which sounds get to interact with each other. And so it's going to be useful, at that point, to be able to distinguish, for example, consonants from vowels. And once we do that-- and you should feel free to call me on this if it doesn't come up later. I'll try to make sure it does. It'll turn out to be useful to be able to think of the consonant version of this sound, the "w," and also the vowel version of this sound, the "u." But you're right that there are plenty of sound changes where you see them converting back and forth between each other. That's a thing that happens, yeah. So yeah. But that's the prima facie reason for distinguishing them because there are rules for the distribution of sounds for which it's useful to have that distinction. It's like anything else, I guess. We make that distinction as long as it turns out to be useful for explaining stuff. Yeah. Good question. Yeah? Good, all right. So those are glides and liquids, OK. And then I recall we were asking about this before. Sounds like chuh and juh-- I think I said you could think of chuh as a stop, a "t" followed by a fricative, shuh, or juh-- the juh at the beginning and end of a word like judge as being a stop, a "d" sound followed by a fricative, a zhuh. There's a little bit of a debate about whether what I just said is the right way to think about this or not. I think the standard way to think about it is not exactly that. What people do is say, yeah, there's this package deal, an affricate. So there's a name for this kind of thing. An affricate is this sequence, something like a stop followed by a fricative. And the arguments about whether or not to make this move, about whether to say, no, that's a single sound with a complex motion as opposed to saying, no, that's two sounds right next to each other has to do with the kinds of considerations I was just talking about. 
When you're trying to figure out-- what's the best theory of, for example, what sequences of sounds are allowed in a syllable in a given language-- sometimes it's useful to be able to say this is a language that doesn't ever, ever allow, say, a syllable to end with a stop followed by a fricative. Oh, but it's OK for it to end with chuh. So we'll just treat that as a single package, right? We won't treat it as a stop followed by a fricative. That's the move people make, yeah. So terminology. OK? All right. So I've made the table whiter. There are glides like wuh and yuh. There are some other glides there which I can try to read to you. What would a labiodental glide sound like? What's a labiodental fricative, a voiced labiodental fricative? Vuh, right? Like a "v" sound. So can you try to make a labiodental glide maybe between vowels? It's going to be something like [NON-ENGLISH]. English doesn't have those. I believe Hindi does. So there are languages out there that have labiodental glides. Similarly, there are velar glides, [NON-ENGLISH], where your tongue is just vaguely gesturing in the direction of your velum, yeah? I like the IPA symbol for the velar glide. It looks like something out of Tolkien. Yeah. OK. All right. So we've done consonants. We have not done all the consonants. So what I'm going to do is show you some vowels. And then we'll circle back and look at some particularly exotic consonants probably next time, probably not today. So I want to start talking about vowels. So let's go through the vowels systematically. Compare the vowel in the middle of bead and the vowel in the middle of bad. And here let me just warn you that the IPA becomes particularly forbidding when we get to vowels in English. And it's not the fault of the IPA. It's the fact that English has a very large number of vowels and a not-very-good system for writing them.
This is one of the things that makes English spelling so difficult that we can actually have competitions where you watch people spell. This is a thing that in many languages would be impossible. If you tried to do that in Finnish, the spelling bee would just never end because every word is pronounced exactly the way it's spelled. We don't do that in English. So here are two vowels. And those are their IPA symbols. The vowel in "bead" and the vowel in "bad"-- and now everybody join me in transitioning from one of those vowels to the other. Go ee-ah, ee-ah, ee-ah, OK. [LAUGHTER] You guys sound good. What are you doing? You're doing what I asked you to, but what's happening in your mouth? Yes? STUDENT: The second vowel is open. NORVIN RICHARDS: Yeah, the second vowel is more open. You're opening your mouth a little bit more. What are you doing with your tongue? STUDENT: Releasing from the top of your mouth. NORVIN RICHARDS: Yeah, it's lowering. It's going from the top to the bottom. Yeah, I think you guys are both right. So for "ee," your tongue is tense. And it's up there at the top of your mouth. And then for "ah," your tongue drops, right? And in fact, it drops so far that it drags your jaw down with it, right? Maybe there's a more reasonable way to say that. You lower your jaw so that your tongue can go even further down. Yeah? So one way of classifying vowels is in terms of height. So we talk about the high vowel, like "heat," and the low vowel, like "hat." And there are vowels in between, like the vowel in "hate." That vowel is called mid. So we have high vowels and we have mid vowels. And then we have low vowels. Yeah. OK. Now let's do another comparison. Think about the vowel in "he" and the vowel in "who." So everybody go ee, ooh, ee, ooh, ee, ooh. What's going on in your vocal tract as you do that? What's the difference between "ee" and "ooh"? STUDENT: Your lips? NORVIN RICHARDS: Your lips. Your lips are definitely involved. Yes.
So for "ooh," they're like this, right? Your lips are rounded. For the "ooh," that's absolutely right. Yeah. And then for "heat," they're not. Yeah? In fact, that's why when you take photographs of people, you have them say something with an "ee" vowel in it, like "cheese," just to get them to not round their lips. But you're doing something with your tongue, too, as you go from "ee" to "ooh," ee, ooh. What are you doing with your tongue? Yes? STUDENT: Moving it forward [INAUDIBLE]?? NORVIN RICHARDS: Wow. So let's start with "ee." Where is your tongue? It's in your mouth, but where is it pointing? It's high, right? "Ee." And then for "ooh," where does it go? It moves, right? So you aren't just rounding your lips and leaving your tongue where it was. Yes? STUDENT: I noticed for "ee," my tongue is also between my molars. NORVIN RICHARDS: Yeah. Ah, yeah. I see what you mean. Yeah, yeah. I don't know about you guys. But for me, for "ee"-- yeah? Yeah, for me, for "ee," it's at the front. And it's high. And then for "ooh," my tongue curls backward. And it avoids my molars. You're absolutely right. And it curls backwards so that it's hiding back in the back of my mouth. I think that's the thing you were saying just now. Yeah? Is that the experience people are having? Everybody do some more ee, ooh, ee, ooh. Feel your tongue moving back and forth. Ignore your lips, right? And think about your tongue. Yep. Yep. OK, so we have high, mid, and low vowels. But we also have front and back vowels. So the back vowels are vowels like the vowel in "who" and the vowel in "hoed," and the vowel in "hot," "ah." So "ooh" and "oh" and "ah"-- those are all back vowels. Your tongue is pulling back toward the back of your mouth. Yeah. OK. And then this is the other point that you guys made. Some of these vowels are rounded. So in English, the back non-low vowels, the back high and the back mid vowels, are rounded in who and hoe. Your lips have to round to make those vowels. 
But for "hot," your lips don't round. Yeah. And a bunch more symbols. Some of them aren't so bad. The letter "u" for "ooh"-- that's pretty good. And the letter "o" for "oh," yeah. And then we have that ash symbol there, which is from Old English, for "ah" in "had." And then "i" and "e" and "a"-- we're stuck with those. Yeah? We have standard-issue European values for some of those symbols, OK? Now in this chart, I've only got six vowels. You may have been taught-- how many vowels were you taught English had if you went to school in English? Yeah? STUDENT: Five. NORVIN RICHARDS: Five. It's supposed to have five, right? It does not have five. It has 14. And why are we taught that it only has five? Why do we only have five letters for vowels? Who gave us this alphabet? The Romans, right? Yeah. And in Latin, there in fact are five vowels, which can be either long or short. So this is a perfect alphabet for Latin, yeah? But then we got hold of it. And so we've ended up with 12 or 14 vowels. Different dialects of English are different. And we have five letters to spell them with. And this is why we have spelling bees, yeah-- one of the reasons. OK, so I've written six of our five vowels here on this chart. And then we have more. So think about "ooh" and "uh," in "who'd" and "hood," or "ee" and "ih" in "heed" and "hid," or "ay" and "eh" in "raid" and "red," or "oh" and "aw" like in "coat" and "caught." There are various ways of talking about this distinction. But one way is like this. We say that there's a distinction between what are called tense vowels and what are called lax vowels. So "ooh" is a high back rounded tense vowel. And if you go back and forth between "ooh" and "uh," ooh, uh, ooh, uh, ooh, uh, the idea is basically that what you're doing is relaxing. So your tongue is not quite as rigidly high and back when it makes an "uh" as it would be for an "ooh." And your lips are probably not quite as tensely rounded, either. Your whole body relaxes, yeah?
Same deal with all these other pairs. It's a fact about English lax vowels that there's a restriction on their distribution. English monosyllables don't end in lax vowels that are either front or high. So we have words like "flee" and "flu" and "flay." Those are all words in English. "Flay" means to remove the skin from. But we don't have those three words at the end there. I've given them stars to remind us that those are not words that we could have in English. Anybody want to attempt to pronounce them? Yeah. STUDENT: Wait, which ones-- I was going to ask a question. NORVIN RICHARDS: Oh, sorry. You were going to ask a question. Go ahead. STUDENT: What about schwa? NORVIN RICHARDS: We haven't gotten to schwa. STUDENT: OK. NORVIN RICHARDS: Yeah, yeah. There are more vowels. Yeah. Those last three-- if they were English words, they would be "flih," "fluh," and "fleh," right? Well, we don't have words like that. English doesn't have words that end in "ih," or "uh," or "eh," with the possible exception of "meh." So if we don't count that as a word, then English doesn't have monosyllables that end in these vowels, yeah. OK. And then you wanted to know about schwa. Those are sometimes called central vowels. So the first vowel in "machine" is a schwa. It's neither front nor back, and it's not high or low. It's mid and central. And then the vowel in "dove" [the bird] is pretty similar to schwa and is sometimes represented with that wedge shape there. In fact, it's called a wedge. Not all speakers of English distinguish schwa from wedge. I do. So for me, the vowels in "above" are two different vowels-- "above." But there are speakers of English for whom they're the same. Similarly-- let me go back to an earlier slide-- not every dialect of English has all of these vowels or has them all in the same places. So I speak a dialect of English in which "caught" and "cot" are different, have different vowels in them. Does anybody pronounce those words the same? Yeah?
STUDENT: I pronounce them the same. But I have a question. You said that they can't-- English monosyllables can't end in lax vowels. What about "law"? I know that ends in-- NORVIN RICHARDS: Yeah. STUDENT: [INAUDIBLE] NORVIN RICHARDS: Yeah, yeah, yeah. So what did I say? They can't end in lax vowels that are either front or high. STUDENT: Oh. NORVIN RICHARDS: So yeah, you're absolutely right. There are ones ending in "aw," like "law." Yeah, absolutely. Or "flaw." There are people who pronounce "caught" and "cot" the same. For me, those are two different vowels. Yeah, can you pronounce them for us? STUDENT: "Caught" and "cot." [pronounced identically] NORVIN RICHARDS: OK, cool. Did you say them in the same order or a different order? STUDENT: It didn't matter. NORVIN RICHARDS: It doesn't matter. OK, good. [LAUGHTER] I pronounce those three words-- "merry," "marry," and "Mary"-- I pronounce them the same. There are dialects of English that have different pronunciations for some of them. Does anybody pronounce these words differently? Yeah? STUDENT: "Merry," "marry," and "Mary." NORVIN RICHARDS: Mary, yeah. So there are people for whom the vowel in the last one is more of an ash. It's like a "Mary." I associate that with New Jersey, where-- you're from New Jersey. Yeah, yes. I'll pay you later. OK, yeah. So OK, we've now rocketed through a bunch of consonants and vowels. We're just about out of time. Let me just give this to you as an exercise. And then we'll stop. Anybody want to try to pronounce the first of those? STUDENT: Sells. STUDENT: Oh, "She sells seashells." NORVIN RICHARDS: "She sells seashells." Yeah. I'll tell you what. We'll do this exercise next time. So next time, we'll do this as the first slide. And I'll leave it on the slides. You can work on it at home and do some fooling with it and try to get to where you can read it. As we go along, I'm going to be asking you to read things in IPA. So I'll start putting IPA on the slides more and more. 
So start trying to familiarize yourself with it and get to where you're familiar with at least the symbols for sounds that we use in English. Do you want me to read the rest of them? I'll do some more IPA. OK, so what's the second one? STUDENT: "Sue says he's a bad egg." NORVIN RICHARDS: "Sue says he's a bad egg." Yeah, you guys are fast learners. And what's the third one? STUDENT: [INAUDIBLE] NORVIN RICHARDS: Excellent. And what's the last one? STUDENT: "Top chopstick shops stock top chopsticks." NORVIN RICHARDS: Yeah. "Top chopstick shops stock top chopsticks." Got that from a book of tongue twisters. OK, good. So let's leave it there. And we'll come back to this next time.
Lecture 24: Endangered Languages. [SQUEAKING] [RUSTLING] [CLICKING] NORVIN RICHARDS: OK, now do you have any questions now that there's been a little more delay? Yeah? OK. [SPEAKING WAMPANOAG] So one of the things I was telling you just now was that the language that I was speaking had spent a long period of time, many years, in fact, more than a century, not being spoken at all. So the Wampanoag are engaged in trying to get their language spoken again. But they're fighting an uphill battle because they don't have native speakers to work with. They're having to work their way through documents to try to figure out how their language is spoken. In being an Indigenous language of the US that has spent some time not being spoken at all, Wampanoag is not at all unusual. So all of the Indigenous languages of America are in some level of danger. UNESCO has a series of classifications for languages. There are different levels of endangerment for languages, ranging from languages that are vulnerable, that is, languages in which there is some other dominant language around, but there are still children learning to speak the language-- that's the least level of concern-- all the way down to critically endangered languages. Those are the languages-- (I don't care that this Mac cannot connect to iCloud. Tell me later!) Those are the languages in which the youngest speakers are elderly. And they don't speak it all that often. And there are these ranges in between, having to do with, basically, how secure the chain of transmission of the language to the next generation is. All of the Indigenous languages of this country are somewhere on this list. So they range from the languages that are merely vulnerable, all the way down to the critically endangered languages, which is most of them. So most of the Indigenous languages of this country are in some level of trouble.
Here's a map-- this is census data from 2006 to 2010-- showing the counties in which there are speakers of Indigenous languages in different places. You can see there are a few places where there are large numbers of speakers. By far, the largest native language in this country is Navajo, which, depending on how you count, has somewhere around 120,000 speakers. The other success stories all have numbers of speakers ranging from 10,000 to 20,000. So you can see some of the examples on the map there. You can also see that some of these languages are spoken in discontinuous places. So Cherokee, for example, has two communities where it's spoken, one in Oklahoma and another farther east. If you know anything about American history, you know why. So this has to do with the Trail of Tears, in which the Cherokee in the 1830s were compelled to walk roughly 1,000 miles from their traditional homelands-- which were in Tennessee and northern Alabama and northern Georgia, that area-- all the way to Oklahoma. Many of them died along the way. So not all of them went, I guess, which is why there are Cherokee speakers in two different areas.
So a history of people being punished-- being taken to residential schools and that kind of thing-- in attempts to stop them from speaking their Indigenous languages has led people to want to lie in the other direction and say that they don't speak a language when, in fact, they do. All of this makes it hard to count. But however you count, Navajo is definitely the largest. But that doesn't make Navajo safe. So first of all, 120,000 is at the low end of the estimates. If you're generous about how many Navajo speakers there are, it's somewhere around 170,000. That means that there are roughly as many speakers of Navajo in this country as there are of Malayalam or Albanian. So it's by far the largest Indigenous language. But it's not large. It's not English large or Spanish large. It's one of the larger languages, but not huge. And within the Navajo reservation, the Navajos have been keeping track of how many Navajos enter kindergarten able to speak Navajo. And those percentages have been dropping ever since they started keeping records. They're now somewhere below half, which is bad news for the future. Even if the language is currently large, it doesn't mean it will continue to be large. I've been talking about the United States, but the situation in the United States is not particularly unusual. Actually, if you look at the languages of the world, most of them are endangered-- not the ones that you've heard of, not English or Spanish or Mandarin. But if you count languages, there are something like 6,000 or 7,000 languages in the world, and the general expectation is that we will lose at least half of them-- more pessimistic people think more like 90% of them-- once this century is over with. So here's a place to pause and ask-- we are in the middle of a shift. There are lots of languages in the world right now, and there are going to be far fewer soon.
So the world is changing from a world with a comparatively large number of comparatively small languages to a world with a comparatively small number of comparatively large languages. I hope I got the small and larges right as I said that. I think I did. Bless you. That's what's happening. How many of you think that this is a good thing? How many of you think that this is a bad thing? How many of you think that the question is too simple? So those of you who think that the question is too simple or those of you who think that it's a bad thing, this is the part where I would like to hear from you. Are there-- and I always do this at the beginning of class. Where the heck-- someone is hiding the chalk from me. Not me, specifically, presumably, but from anybody who wants to teach here. Wow, they're doing a better job of it today than they usually do. So, usually, if I'm persistent, I can find some chalk, but not today, no. Wow. So no chalk for me today. I will just have to do today's class by interpretive dance. Wow. OK, no, I haven't hidden it under my stuff or anything like that. That's strange. OK, so let's just talk, then. Those of you who think that this question is too simple, who thinks that there might be good things about-- oh my TAs are gesturing. Yeah, if you wouldn't mind, that would be great if you can find some. But for now, so Enrico is on the case. He'll bring me some chalk, if he can find it. What the heck? Yeah, so those of you who think that there might be good things about language loss or those of you who think that there might be bad things about language loss, let's hear from you. What do you think? Why do you think what you think? Anybody wish to offer an opinion? Yes? AUDIENCE: A lot of [INAUDIBLE] stories just get lost in translation. NORVIN RICHARDS: So a lot of stuff-- I'm sorry, can you just say that again? AUDIENCE: Stories can get lost in translation. NORVIN RICHARDS: Stories can get lost in translation. Yeah, I think that's true. 
I have a favorite example of that, actually. Have I told you my story about the movie Sleepless in Seattle? This is one of the dangers of teaching similar classes is that you tell the same jokes over and over again. You lose track of what you've said. So when I was-- you're much more sophisticated than when I was your age. When I was your age, my thought about that was always, no, but stuff can be translated. I mean, I can't read Russian literature in Russian, but I can read it in English. And so if Russian were to vanish from the world, then we would still have the literature. I think there genuinely is a phenomenon of stuff getting lost in translation, and I have a favorite example, which is from the movie Sleepless in Seattle. Has anybody here seen the movie Sleepless in Seattle? You have, wow, someone who loves the classics. So it's a romantic comedy from the previous century, and I'm about to spoil the ending, so if you're thinking about watching it, just know that you're about to-- I'm about to spoil the ending, but I don't feel too bad about it because it is a romantic comedy after all. And so it's not like it's ever unclear what the ending is going to be. So the end of the movie-- the movie is about Tom Hanks and Meg Ryan's characters. They are destined for each other, but they live on opposite coasts, and so lots of wacky hijinks have to ensue before they can actually meet. At the end of the movie, they're meeting for the first time, really. They're having their first real conversation, and they're at the top of the Empire State Building, and Tom Hanks' son from his previous marriage is there too. And after they have this brief conversation in which they've just met for the first time, Tom, at some point, says, "Well, we better go." And you see Meg Ryan interpret that as meaning that he and his son are about to leave and go somewhere, and it was nice meeting her, goodbye. And then he says, "Shall we?" 
And holds out his hand and she takes his hand and they walk off together. They go into the elevator. You can see Tom Hanks' son grinning broadly and the music comes up, and that's the end of the movie. There now I've spoiled the end of the movie for you, sorry. So that moment, where he says "We better go," and she misinterprets him as meaning "My son and I are leaving now and you can stay here," and then it turns out that what he means is "All three of us had better go," that's a moment that relies crucially on the fact that English pronouns are vague, that the first person plural pronoun in English just means I, the speaker, and some other people. And it's vague about whether you, the person I'm talking to, is included in the group or not. And there are languages out there in which that is not true. So in Lardil, for example, the language from Australia that I worked on, there's a distinction between what's called inclusive and exclusive we. So if Tom Hanks means to invite Meg Ryan along later, he's going to use the pronoun "lag-ri-mol." And if he means to leave without her, it's going to be him and his son, he'll use a completely different pronoun, "yeddi." Lardil actually has eight different words for "we." It's possible to be extremely precise about who's in the group or not. There's another word for "we" he could use to suggest to her that she and he should now leave and his son can stay here at the top of the Empire State Building, which-- that would have been a different kind of movie. So if you imagine a different kind of world in which Lardil had taken over the world, everyone spoke Lardil and English was this endangered language that was down to a few speakers and we were looking through its literature, which included this movie, Sleepless in Seattle, and we were going to see whether we could translate it into Lardil or not, well, this scene can't be translated. So you want an example of something that gets lost in translation, here's an example. 
This scene relies crucially on an ambiguity. So the fact that the English word "we" is ambiguous in a way that can't be replicated in Lardil because Lardil has two different pronouns for those things. Does this make sense? And so if they're both fluent speakers of Lardil, and if she's paying attention, and-- there's just no way to replicate this problem. So this is my pet example of something that is lost in translation. What gets lost in translation is all the stuff that skillful writers and storytellers do with relationships between words, basically. So cases of ambiguity or similarly phonological relationships between words-- so rhyme and alliteration and everything else that people do to create verbal art of various kinds-- none of that stuff survives very well. You can use footnotes to try to explain it to people, and that will work about as well as footnotes generally do. So yes, if you're skeptical, as I was when I was your age, about the concept of anything being lost in translation, then yes, things can get lost in translation. I agree. It's a very long response to that point. Anybody else wish to make points about this? Not about being lost in translation, but about language loss? Pro? Con? Yes? AUDIENCE: I don't know whether this is a pro or a con, It's sort of the same thing. Languages are often used as reflections of what a community sees and thinks about the world and the culture that they come from and how they interact with the space and it kind of ties into the translation thing, where it's really difficult to capture that relationship that a certain language or culture has in the world when you're translating it into a different language. And if the original language disappears, then you sort of lose an entire perspective on life and events. NORVIN RICHARDS: Thank you so much. AUDIENCE: Which feels like a really significant loss to our general knowledge of the world. 
NORVIN RICHARDS: So I'm going to call this "linguistically encoded cultural attitudes" or something like that. I don't know how to phrase what you just said, which you said so nicely that I don't want to try to put it in a bullet point on the board, and yet I feel that I have to. So we're talking now about some of the things that are lost-- bad things about language death. So we need a nice, neutral term for the phenomenon that we're talking about, so I'll call it language death, which is a nice, neutral term, it seems to me. Are there any good things about mass language death that anybody wishes to talk about? Yeah, Raquel? AUDIENCE: Well, this isn't a good thing about language death. It's just like, there are some situations where if you were, I think, speaking [INAUDIBLE] language that not many people know, then you get a lot of [INAUDIBLE] for mobility and integration with a more, I don't know, potentially profitable or helpful thing for you. If you wanted to get a job or go to a city where there's a lot more options for better living situation and learning that language, and it's like abandoning your own for the time being [INAUDIBLE] that makes your life better. NORVIN RICHARDS: No, that's a very good point. And I should have come up with something to call it other than language death. That's absolutely right. Oh, pretty. So you're talking about one of the primary causes of language endangerment, and you're absolutely right. And in a sense, the fact that there is so much language death going on is a good sign of something that's happening, which is economic mobility. That is, you're absolutely right that it is the case that people end up losing their traditional languages in part because their traditional languages are associated with an economically depressed area, and there's another language that's associated with economic opportunity. And so people end up switching to the language that's associated with economic opportunity. So am I parodying your point? 
I think that's the point you were making, which I think is a good one. That is something that happens. Yeah? AUDIENCE: I think the issue that's [INAUDIBLE] whether or not it's good or bad that a language dies, but the method of which this is accomplished because I think it's not unfair to say that those kind of function of natural selection. If there's no reason to speak this language anymore, there would be no advantage to know this language other than, oh, it's cool that I know this language. I can speak it with basically no one. It's going to fall out, but whether or not this is my choice because the culture just decided to adapt and assimilate with others or whether this is by force because they were put in a situation where their language would have no choice to proliferate [INAUDIBLE]. NORVIN RICHARDS: Right. I'm going to put this in the good column, because what you're doing is talking about ways in which perhaps we shouldn't be so upset, and you just raised a good point to which I want to amplify a little bit. There are cases in which languages are endangered because there was another group that deliberately attempted to destroy them. So that's-- I alluded to that earlier. So we've had this series of bloodcurdling discoveries about residential schools-- lots of graves of children at these schools where Indigenous children were sent in order to make them civilized and this is not ancient history. So there are still people alive today who are survivors of these schools and tell these awful stories about being punished for speaking their native languages. And those events have consequences for generations to come. 
So there are elderly people today who I have heard talking about what it was like being a small child in these places where they were afraid for their lives if they spoke their native languages-- the languages that they went into the school speaking, in some cases as monolinguals-- and they spent the rest of their lives being frightened to speak their native language and unwilling to speak it. They didn't pass it on to their children, so those residential schools were very effective in their stated goal of eliminating the culture of the people that they were processing. That's a disturbingly common story. So it happened in Canada. It happened in the US. It certainly happened in Australia. When I was working with the Lardil, I was working with people who remembered when there was a dormitory where children were sent and taken away from their families and kept there. One consequence of that was that the people that I learned Lardil from were all old men, because when they had the dormitory system, they were more lax with the boys than with the girls. Perhaps some of you have heard people say this-- "girls are delicate and need to be protected," in this case from their families. And so the girls were kept under lock and key and never allowed to go home. The boys were allowed to go home sometimes. And so they learned some Lardil. The women of that generation were guarded more closely than that. So that's one extreme, but then there's this other extreme, which is-- I guess you and Raquel have both now alluded to this-- cases where you have a language and it's not currently under the kind of monstrous threat that I've just been talking about. But if you grow up in a community where this language is spoken, you can see that there's another language nearby that will give you more economic opportunity. And so you go for that one. That's the other kind of case to talk about. There's some kind of relationship between these cases, of course.
So it's often the case that the languages that were deliberately discriminated against-- that other groups deliberately attempted to destroy-- are then the languages whose communities tend to be economically depressed. There's some kind of connection between these kinds of cases. And I guess we could ask-- so obviously, the choice to speak a different language than your native one because there's economic opportunity somewhere else seems more like a free choice than the choice to speak another language because you'll be beaten to death if you speak your native language. That's definitely more of a free choice. But it's not a completely free choice. You're being pushed in a particular direction by economic forces, it seems to me. Yeah, Faith? AUDIENCE: This is a case of my own family, not necessarily just different languages, although that did happen with my grandparents' generation because they were hit for speaking Spanish in schools. But I don't know if you'd call it a dialect or just like the way that people speak. I know my mom and just my family in general, they speak a certain way. But when they're in a more professional setting that's not with family, they drop a lot of the characteristics of their speech that are natural. And I thought that was always really weird because it was like they're speaking a different language when they're in the home environment. NORVIN RICHARDS: Great, and this connects to some of the stuff we were talking about earlier, about it being difficult to distinguish a language from a dialect, and maybe not worth it, that people have their own ways of speaking. And you're absolutely right, they feel the need to suppress them sometimes under professional circumstances. This is indeed the way that languages and dialects-- whatever those are-- become extinct.
I've heard-- there's a great activist for the Passamaquoddy language, a guy named Wayne Newell, who died a few years ago now, spent his whole life working to try to keep Passamaquoddy education going in the schools to the extent that Passamaquoddy is still spoken. There's a lot to thank him for. I once heard him give a speech in which he said that he could remember, as a child, a time when-- and so when he was a child, it would have been in the '40s, I guess, or the '50s when-- so Passamaquoddy is spoken in Northeastern Maine. So he can remember, as a child, a time when racism directed at the Passamaquoddy was so bad that there was never any hope of Passamaquoddy people getting a job in the English-speaking world. So Passamaquoddy children were used as farm laborers. So they picked potatoes. So there was agricultural work for some kinds of people. But getting an office job in which you needed to be able to speak English-- that was just not in the cards for anybody from that generation. And he said, now there is still racism directed at the Passamaquoddy, but it's nowhere near as bad. And it is possible for Passamaquoddy to become nurses and teachers and work in grocery stores and so on, which is obviously a good thing. But the perverse fact is that when the racism was worse, there was a kind of protection for their language. Nobody was impelled to think, oh, if I speak English better, I'll get a good job. You weren't going to get a good job. It wasn't in the cards for you if you were Passamaquoddy. So that force was never in play. And he was saying, and now it is, thanks to something which is clearly a good thing, which is that Passamaquoddy people have the choice of a certain kind of economic freedom that their elders might not have had. You all are making good points. Anybody else wish to make a good point? Yeah? AUDIENCE: Does that 50% to 90% account for the possibility of bilingual divergence and dialects becoming [INAUDIBLE] spaces? 
NORVIN RICHARDS: So one of the reasons that I say approximately 6,000 languages, and then I say 50% to 90%-- there are lots of reasons. One is that, as I said before, counting languages is kind of a fool's game. It's unclear what counts as a language. And another just has to do with how optimistic or pessimistic you are. I think the people who are making these numbers are not thinking about the generation of new languages-- so further dialectal divergence. And to be fair to those people, I think that's happening at a much slower rate than the rate at which languages are becoming extinct. I guess another thing to say about that is that dialectal divergence and the creation of new languages-- it is indeed happening. There are even examples where we have it on videotape. For reasons of the calendar, I ended up having to get rid of some topic, and I decided to get rid of signed languages-- I was going to do a day on signed languages, and I had to take that out. One of the things I would have told you about is the creation of Nicaraguan Sign Language, which is indeed a new language. It was created in Nicaragua, hence the name. When the Sandinistas took over-- so there was a revolution in Nicaragua, and for the first time, there were schools for the deaf. Prior to that, deaf people stayed at home and did what's sometimes called home signs. So they would develop sign languages of their own, interacting with the hearing people who were around them. When schools for the deaf were first created, you suddenly had all these deaf people in one place. And there's video of this happening. They created a language. So Nicaraguan Sign Language was a new language that was created in the 1980s, and people saw it happen. So there are cases of new languages coming into existence, but it's happening a lot less quickly than language death. Yeah, that's right. Are there points people wish to make about this? OK, cool.
One other reason, maybe, to care-- there is, in Indigenous communities, very serious-- so for a lot of the economic reasons that we've been talking about, but also for some of the cultural reasons that we've been talking about, there's a long-standing problem with high suicide rates among the youth in a lot of Indigenous communities. There was one study-- this was from 2007. I don't know if it has been followed up-- in British Columbia up in Canada, which established, I think to everybody's satisfaction, that there was a correlation between having a language that was comparatively strong-- so British Columbia has many Indigenous languages, all of which are in some level of danger, but the communities in which the language was comparatively strong were the communities in which the youth suicide rate was lowest. It was the communities in which the language was on its way out-- those were the communities in which it was highest. You are all scientifically trained, so it's reasonable now to wonder about the direction of causation, so whether the communities in which the language is strong are also the communities in which life is, in other ways, better, and so those are the communities in which the youth suicide rate is low, whereas if the language is on its way out, those are the communities in which the young are more likely to feel that their lives are hopeless. Anyway, this is one kind of reason to care about this. There's arguably a life-saving possibility to having a language that is your own-- that is, being able to look at the world and think of it as a world in which, OK, maybe you are growing up in an Indigenous community. Maybe it's not an economically vibrant place, but you have something which is yours and in which you are the expert. There's apparently something about that, which is, as I said, potentially life-saving. So just another thing to think about as we think about this. Thank you for talking with me about this.
I always feel weird asking people to speak frankly their opinions about this kind of thing. But in the years that I've taught this class, I felt less weird because I discovered that MIT undergrads are willing to speak frankly about their opinions, even if it's clear what their professor's opinion is, because the fact is that it's a very complicated question. There are all kinds of factors interacting with each other, and I think we've sketched some of them in a way that's useful. One thing that we're doing in our department that I wanted to tell you about-- we have a master's program in our department. It's called MITILI, the MIT Indigenous Language Initiative, which I like to pronounce "mightily." It's a master's program for people from communities with endangered languages who can come to our department to get master's degrees in linguistics, the theory being that getting master's degrees in linguistics, the process of getting educated in that way, will help them with the creation of pedagogical materials, if that's what they want to do, or help them have the tools to analyze their language the way a linguist would. So this program has been running for a number of years now. We currently have a Navajo student in the program and a Mi'kmaq student in the program, and we've had a number of Wampanoag and Passamaquoddy students, and an Inupiaq student. Next year, we have a student coming who's from a minority group in Bangladesh, the Marma, who speak a Tibeto-Burman language, which is severely underdescribed. So she's very concerned with the future for the language and wants to come and get some tools to help work on it. So this is one kind of thing we're doing in our department about this worldwide fact. I guess one of the reasons that I'd like to spend today talking about this in this class is that I have had this experience: when I tell people that there are many endangered languages, the reaction I sometimes get is surprise.
People will say, oh, I've heard of endangered species, but I didn't know there were endangered languages. So I didn't want any of you to come out of this class being like that. So all of you should know. There are many endangered languages-- in fact, most of them. Shifting gears, I started this class by talking to you for a little bit in Wampanoag. I wanted to tell you a little bit about how that work has gone, like why I think I know how to pronounce Wampanoag words, even though we have no audiotape of native speakers pronouncing it. The big sources for Wampanoag-- there is a Bible translation into Wampanoag. It is the first Bible that was ever published in this hemisphere. It's a Wampanoag Bible that was published in Cambridge. First edition was in 1663, and then we work with the second edition, which is from 1685. There's a colleague of mine here at MIT who tells me she has a friend at the Smithsonian who is sometimes called on to supply bibles for swearing in ceremonies. Like when the president is sworn in, they want the president to swear the oath of office on a Bible. And the Smithsonian is sometimes contacted to provide the Bible that President Lincoln used or something like that. And that they are sometimes asked for the first Bible published in the US. They always are like, no, no, you don't want that, believe me, because it's this one. It was translated by John Eliot, who was a missionary from England who came here in the 1600s, who translated the Bible as part of an effort to convert the Wampanoag to Christianity, an effort that was successful in a lot of ways. They created a lot of Christian Wampanoag. There's a lot of documentation of the whole process, including these lists of the questions that they got asked as they were trying to spread Christianity to the Wampanoag. One of the questions, I remember, was whether Jesus Christ had been baptized by full immersion or by sprinkling. I don't know why they wanted to know that. 
Another was this: if baptism cleanses people of sin, they asked, why do our children begin to sin again after they have been baptized-- and sometimes immediately after they have been baptized? We don't have any record of the answers to these questions, but the questions are kind of interesting. Another question that they got asked over and over again was whether, if the Wampanoag were to switch to worshiping God, God would protect them from the plague. So the Wampanoag were suffering from communicable diseases that were brought over by the colonists inadvertently, which wiped out-- it's another one of these large ranges of possible percentages of the Wampanoag-- but there were entire towns that were wiped out completely. It's possible that the death rate was as high as 80% or 90%. It was just a catastrophe. The Wampanoag weren't savages. They had an understanding of medicine that was at least as sophisticated as that of the pilgrims in the 1600s, which is to say not all that sophisticated, but they knew some stuff about how to treat people who were sick, and all of it was useless because they didn't have any immunity to what they were up against. So one very frequent question in these lists of questions is, will God save our lives? Will he protect us from the plague? And as I say, we don't have any record of the answers, and I wish I knew what they were told when they said that. The Eliot Bible is a really interesting document. I have been reading it very slowly and carefully for years now, trying to put the language back together. It's also a document in which Eliot is clearly attempting not only to spread Christianity, but to suppress the existing religious beliefs. This is one verse that kind of highlights that for me. This is a verse from the book of Exodus, 22:18.
If you're familiar with the Old Testament, it includes a lot of confrontations between the priests of the one true God and the priests of the other religions that were in the Middle East at the time, in which the priests of the one true God show that their god is more powerful than the other gods around there. This is kind of a recurring theme. The priests of the other gods are sometimes called priests, but they're often called other kinds of things. They're called things like witches or wizards or whatever. This is one of the verses that exemplifies the attitude you're supposed to have toward these people: you should kill them. So "Thou shalt not suffer a witch to live," or, in Wampanoag, that's [SPEAKING WAMPANOAG]. That last word, "pawaw," is actually the origin of the English word "powwow." So it refers to a person who was responsible for religious ceremony in Wampanoag-- so traditional religious ceremony. When that word was first borrowed into English, it meant what it means in Wampanoag. So the earliest English citations of the word "powwow" are all about-- the powwows were getting together and doing their ceremonies. It refers to these people who were doing traditional ceremonies, people who, today, are sometimes called pipe carriers. So that's what this verse means. It means you should kill the powwows, the people who do your traditional religion. Not only should you ignore them-- you should not suffer them to live. There's the Bible-- a bunch of other religious texts. So here's a catechism. There's a Wampanoag version of the Lord's Prayer and a bunch of questions and answers, which include things like "Why is God called"-- here's the first one. "Why is God called the father?" Well, because he created us and all people. And other texts-- Eliot created a logic textbook in Wampanoag.
Eliot, I mean, whatever you thought of his goal of eliminating Wampanoag religious beliefs-- maybe it's clear by now that I'm not actually a fan-- whatever you think of that goal, he went about it in the most sensible way possible. So he preached in Wampanoag, so he learned Wampanoag well enough to preach in it. He did a lot of visiting in Wampanoag communities, probably spreading diseases as he did so, but he didn't know that. And he also ordained a bunch of Wampanoag ministers to go and spread the gospel among the Wampanoag themselves. So if he had been a different kind of person, he would have insisted that only white people educated in England could spread the gospel, but he was not that kind of person. He was the kind of person who wanted the locals to know enough to do the evangelism themselves. And that's what the logic primer was part of-- so the idea was if they learned logic, that would help them construct logical arguments and convince people to become Christians. And I don't have an example here of this, I don't think. But it's an introduction to logic, as logic was understood in the 1600s, where the examples are all syllogisms that involve proving things from the Bible. It's a very interesting document, and the result is that we know the Wampanoag word for syllogism, which I wasn't expecting. It's on this page. It's [WAMPANOAG], which means a short speech, something like that, I guess because the point of a syllogism is that it boils something that would be a long argument down to its basics. That's why it's short. And then there are a bunch of what are called native writings. These are documents that were written by Native speakers of Wampanoag, who learned to read and write. The literacy rate at one point was comparable to the literacy rate among the whites. The native writings are mostly legal documents of various kinds.
So this is a bill of sale for a house, I guess where David Oks is selling his house and all that he has in this place to Isaac Tuhkemen. And here's the date and the signature and so on. I wish that one of the native writings were a novel or a diary or something like that. But there's nothing like that. It's all stuff like this. Some of it is kind of heartbreaking to read. Here's a petition written by the people in Mashpee to the commissioners in Boston, asking them to intervene because the white people were taking their land. And it has lines in it like this one "Truly, we think it is this-- we, the poor Indians, shall soon not have any place to reside because these Englishmen--" by which they mean the white people-- "are troubling us very much." They were wrong about this, actually, as it turned out. There are still Wampanoag in Mashpee. There's one place where they still live. So we have all these sources, but the sources, of course, are documents from the 1600s, which was a time when it was the mark of a gentleman, even in English, not to spell a word the same way twice. So the spelling is pretty erratic, and so we have to do a lot of detective work to try to figure out what exactly it was that they were writing down. Some of the detective work involves discovering sound correspondences of the kind that we were talking about the other day when we were talking about historical linguistics. So we were saying one of the things that you learn is that when you have two related languages, it's possible to state these law-like generalizations-- wherever this language has this sound, this language has this sound. We can do this for Wampanoag. Here's Wampanoag compared with a closely-related Algonquian language called Delaware. You can see from these data here, in the middle there, in the brackets, I have the spellings from the actual Wampanoag documents. 
And then, over on the right, we have the Wampanoag words the way that we spell them today. What you can see here is that wherever Delaware has a long "ah," Wampanoag has a vowel that sure looks like it's nasal. So the vowel is getting written with an "n" or an "m" next to it. So that word for "yesterday," we think, is "wa-nonk-ial." Or we have "non-pee," which means "again," or "skonk," which means "skunk"-- or rather, it's the word that we borrowed, one of the words that we borrowed. So here are a bunch of examples where wherever Delaware has a long "ah," Wampanoag has what looks like a nasal vowel, an "aw," which we spell like this, an "o" with a hat on it. Wherever Delaware has a short "ah," well, it looks as though Wampanoag has probably something like a short "ah." So there's a vowel that gets spelled with an "o" or an "a." And so our guess is that this is also an "ah" sound. I was just complaining about the fact that the spelling is variable. Of course, the best thing would be if the Wampanoag documents were all written in the IPA. Then I would know exactly what they were spelling. The fact that the spelling is kind of variable is, in a way, the next best thing. I mean, if they always spelled this vowel with the letter "a," then I'd have to worry about whether they were spelling "ah," or "aaa," or "ay," any of a variety of things that we spell with that letter. But the fact that they spell this vowel sometimes with an "a" and sometimes with an "o" kind of narrows it down a little bit. I don't have to worry so much about whether it's "ay" or "aa." It's probably something more like "ah" or "au." So where Delaware has a long "ah," Wampanoag has a nasal vowel, an "on." Wherever Delaware has a short "ah," Wampanoag seems to have an "ah." Knowing that is handy when we're looking at words where it's hard to tell what kind of "ah" we have. So here's a very common word that means "too much/excessively," and these are a bunch of Wampanoag spellings from the documents.
And because the next consonant is an "m," it's hard to know, just looking at that spelling, whether I'm looking at an "au" or an "ah." But thanks to Delaware, we know that it's an "au." So Delaware has a related word and it has a long "ah" there, and that tells us that this is Wampanoag "wusomee." And being able to figure things out like that then helps us figure more things out about how their spelling system works. So there's this ongoing process of putting together what the heck they're trying to spell. We also think we know where stress goes in this word. That is, we think that the word is stressed on its second syllable. It's "wu-SO-mee." So it's not "WU-so-mee." And it's not "wu-so-MEE." And I want to tell you a little bit about how we think we know that because we've got a little bit of time left here. This story goes back a little further. In 1640, the first book was published in the British colonies in North America. It was published also here in Cambridge. It's called the Bay Psalm Book. And it is a translation of the Book of Psalms into English metrical verse. So here, for example, is Psalm 1:1, the first verse of Psalms. So in the King James version, which is what those people would have been working with, you've got, over there on the left, "Blessed is the man that walketh not in the counsel of the ungodly, nor standeth in the way of sinners, nor sitteth in the seat of the scornful." The Bay Psalm Book goes, "O Blessed man that in th'advice / of wicked doth not walk, / nor stand in sinner's way, nor sit / in chair of scornful folk." And it goes on and on like that. The point of the Bay Psalm Book-- or here's the beginning of the 23rd Psalm, which some of you may know. So "The Lord is my shepherd. I shall not want. He maketh me to lie down in green pastures...." In the Bay Psalm Book, that one goes, "The Lord to me a shepherd is, / want therefore shall not I. / He in the folds of tender grass / doth cause me down to lie" and so on.
So this is another translation of the Book of Psalms into rhymed metrical verse. The idea was to create things that people could sing in church. So this was something that John Calvin, the founder of Calvinism, had apparently proposed. He suggested that the Book of Psalms should be translated into English in metrical verse. I guess these were people who were a little worried that music might be sinful or might lead to sin, and so the idea was that if you allowed God to provide the lyrics, presumably there were limits on how sinful you could be. So they had these translations. The Bay Psalm Book was hugely popular. It went through many printings. It starts with this nice, cute introduction-- because it wasn't the only translation of Psalms into metrical verse. In their introduction they say something like-- I'm paraphrasing-- if you compare us with other translations into metrical verse, you may notice that our poetry is not as nice as theirs, and that's because we were more concerned with literal accuracy. So it's not because we're lousy poets or anything like that. It's because we were shooting for being exactly accurate to the original. John Eliot, who later was one of the translators of the Bible into Wampanoag, was one of the writers of the Bay Psalm Book-- one of several people who was involved in the writing of it. Here's the Wampanoag translation for the first verse of Psalms. You've got it down there in the middle of this page. And as you can see, down underneath it, I have a literal translation of the Wampanoag words. And the only point to make here is that the literal translation of the Wampanoag words-- it's more or less a word-for-word translation of the King James Version in more or less the same order.
So on the one hand, the fact that they were sticking to the English word order most of the time is generally true in the Bible translation, not just in Psalms. Sometimes, it's kind of frustrating. You wish that you-- because we know from other Algonquian languages that Wampanoag word order must have been pretty free. One nice thing about it, though, is that when they diverge from the English word order, like when they put something in a place where it isn't in the English, that's more or less a very clear sign that the document is giving us that the English word order would have been ungrammatical in Wampanoag, which is kind of nice. So usually, you can't learn that kind of thing from a document. A document just has a bunch of sentences in it and you can go through it and figure out which word orders are common and which ones are uncommon. In this kind of case, whenever Wampanoag doesn't do the English word order, the document is actually telling us, the English word order would have been bad, which is kind of nice. So this is the Wampanoag translation. But at the end of the Bible, there's another Wampanoag translation. I'll just read it to you. It goes [SPEAKING WAMPANOAG]-- and so on and so on. It's another metrical translation of the Book of Psalms. It uses the same rules: alternating long and short lines-- eight-syllable and six-syllable lines, with the six-syllable lines rhyming with each other and the eight-syllable lines not. And it deviates from the English word order very seriously-- changes all kinds of things. So when I first discovered this, I was looking through the Bible and I found this translation. I was like, oh, this looks like poetry. What is this? And I was like, oh, it rhymes, hot dog. And I thought, oh, cool, I'm going to learn something about how stress worked. So how do you know how to say the word for "too much"? I think now it's "wu-SO-mee," with stress on the second syllable. How do I know that?
How do I know it's not stress on the first syllable or the third? Eliot also wrote in English a grammar of Wampanoag, in which he laid out the principles of grammar-- very sensibly, actually. It's a really nice document. He talks about a lot of things. And one of the things he says very early on is, I'm going to indicate stress because it's very important to put the stress in the right places in the words if you want to be understood. And he gives one example, the word for dog. He says the word for dog is stressed on its second syllable. It's "anum." And then he never mentions stress again or indicates it anywhere else. And I could just kill him, except he's dead. And so when I found this poem, I was like, "Ha, here's a place where I'm going to find out where the Wampanoag put stress-- what are the rules for Wampanoag stress?" And so I started going through, starting with the assumption that he was always getting the stress in the right place, which is false, it turns out. So if you start with that assumption, you pretty quickly come to the conclusion that there are lots and lots of contradictions. He's not doing that. And to be fair to him here, he is trying to translate the Book of Psalms into a language that's not his native language. He wants it to rhyme. He wants the words to be-- the lines to be the right length. He wants it to accurately represent, more or less, what's in Psalms. Meter is not-- it's somewhere on this list of priorities, but it's not the highest thing on his list of priorities. So what I eventually figured out was there is something we can learn from the metrical Psalms, and I can illustrate this for you by going back to the English ones. So here's the beginning of the 23rd Psalm in the King James version and in the Bay Psalm Book. In the Bay Psalm Book, they've made various changes. So let's think about some of these changes. Why did they change "I shall not want" to "want therefore shall not I"? Why did they do that?
It's not the normal English word order. So they ended up with "The Lord to me a shepherd is, / want therefore shall not I. / He in the folds of tender grass / doth cause me down to lie." Why did they want to change the order in that second line? Yeah, Joseph? AUDIENCE: It has to do with the positioning and the stress in the words so that it fits a poetic [INAUDIBLE]. NORVIN RICHARDS: Well, so they do that sometimes. In this particular case, there's something simpler that they're doing. Yeah? AUDIENCE: I mean, it's "I" rhymes with "lie." NORVIN RICHARDS: Yeah, they're setting up a rhyme. So they looked at the line and they wanted two things that would end in the same sound, and they decided on "I" and "lie." That's it exactly, for this pair. There's another place, though, where they do the kind of thing that Joseph is talking about. So "He leadeth me beside the still waters." They change that to "To waters calm me gently leads," which has all kinds of entertaining properties. But one of them is that the adjective "calm" comes after the noun "waters." Why did they say "to waters calm me gently leads?" Why didn't they say "to calm waters me gently leads?" And now we get to recycle Joseph's answer. So "to waters calm--" you get to put stress on the second and fourth syllables of that. Stress in English "waters" goes on the first syllable. And then you get to stress on "calm." If it were "to calm waters," well, then you'd have stress on the second and third syllables-- "to calm waters." Does that make sense? So if you're trying to get a line in which you have alternating stresses-- which is what he's doing, he wants the stresses to be on the even-numbered beats-- then changing the order of the adjective and the noun makes sense. That's what you're shooting for. Does that make sense? That's what they're doing.
So I hope it makes sense because what I ended up doing was looking for things like that in Wampanoag, and I'll show you an example of what I found, and then I'll let you go. Here's an example from the King James version, from 22:16, "dogs have compassed me. The assembly of the wicked have enclosed me. They pierced my hands and my feet." And I'm bold-facing that because I want you to pay attention to it. Here's the nonpoetic Wampanoag translation of that, which is more or less a word-for-word translation of the English. There's "dog" again, now in the plural, "anumwak," in the first line. So "dogs have compassed me--" that means "encircled me--" "the assembly of the wicked have enclosed me. They pierced my hands and my feet." And that Wampanoag line ends with the Wampanoag words for my hands and my feet. And that's [SPEAKING WAMPANOAG]. But in the poetic translation they changed it to "my feet and my hands." Why did they do that? Well, they didn't do it to set up a rhyme because the rhyming lines are the second and the fourth lines. And the first and the third lines don't rhyme with each other. It's the short lines that rhyme with each other. So hypothesis-- they wanted them in that order in order to get the stresses in the right place. So if you imagine that the stresses go on the even syllables, there was something that he liked better about [SPEAKING WAMPANOAG], which is "my feet and my hands"-- that he preferred that to [SPEAKING WAMPANOAG]. So if you put stresses on the even syllables, that's where they would go. The word for "my hands," "an-netchi-tash," gets the same stress in either of those possibilities. So this is just comparing "my feet and my hands" with "my hands and my feet." And we're asking, where would stresses go in those two versions of the line? 
In the word for "my hands," which is "nuh-nuh-chi-kash," the stress is going in the same places in both of those versions of the line, so we can't learn anything about that word, where stress goes in that word, by this choice. It doesn't matter which of those things he did. He would have stress in the same place in the word for "my hands." But in the word for "my feet," what he's doing is choosing between "nuh-SEE-tash," stress on the second syllable, and "NUH-see-TASH," stress on the first and third syllables. And he's choosing stress on the second syllable. So hypothesis-- this is teaching us that stress actually goes on the second syllable of "my feet," "nuh-SEE-tash." So if you are crazy enough to read the poetic version of the Wampanoag Psalms line by line, looking for verses where the word order has been changed in a way that doesn't help with setting up a rhyme, looking for things like this, then you end up with a list of words where you have a guess about where stress goes. And if you look at all of those words, you-- and I am in fact crazy enough to do that. I did it-- you end up with a set of words for which you know where stress goes, and you can develop a theory of where it goes. It turns out the rules for Wampanoag stress are quite similar to the rules for stress in Delaware, which is the language I was showing you earlier. It's the kind of thing we could have hoped was true, but it's nice to see that it actually is true. I'll show you one more slide, and then I'll let you go. Wampanoag is a polysynthetic language. Here's a letter from Experience Mayhew, who was another missionary who grew up on Martha's Vineyard. He was the child of missionaries, grew up on Martha's Vineyard, and was by his own account bilingual in English and Wampanoag. We have one letter from him to a friend who had asked him various questions about the language. 
He begins his reply by apologizing for being slow to respond to the other guy's letter because several members of his family, including, I think, his wife and several of his children, had died from the plague. And he's like, "I'm sorry, I've been busy. But here are the answers to your linguistic questions." And one of the points he makes is that this language goes in for really long words. The guy had asked him, I guess, for the longest word he could think of, and the one he came up with is down there at the bottom, "our well-skilled looking glass makers--" that is, our skillful mirror makers-- [SPEAKING WAMPANOAG] So yeah, it's an extremely polysynthetic language. There's the breakdown for that word. Questions about Wampanoag or endangered languages or any of this stuff? OK, this is probably a good place for us to stop. So let's stop here, and I-- |
MIT_24900_Introduction_to_Linguistics_Spring_2022 | Lecture_22_Dialects.txt | [SQUEAKING] [RUSTLING] [CLICKING] NORVIN W. RICHARDS: I was thinking about today's topic earlier. And I thought of a metaphor. I haven't tried this metaphor before. This is the first time I have tried this metaphor on people in class. It's always dangerous thinking of metaphors for all kinds of reasons. One of them is that it's entirely possible that I'm about to beat this metaphor into the ground. I'm going to talk about it for a while. But-- so I apologize in advance if that happens. It's also possible that I will tell you the metaphor, and you will wonder, what the heck is he talking about? It will be too metaphorical. But, hopefully, as the class goes along, it'll become clear what I'm talking about. Or maybe it'll be clear immediately. Who knows? OK. Here's the metaphor. You have to try to imagine for this metaphor that the United States is the kind of country-- bless you-- in which board games are extremely popular. Everyone plays board games all the time. It's like the standard way for people to interact with each other. So if you are on a long plane flight, you'll probably play a board game with the person sitting next to you. You go-- you move into your new dorm, and you start playing board games with your new roommate and the other people on your hall. That's just the normal way for people to interact with each other-- all board games all the time. And try to imagine that there are various board games that are popular. But in the US, for some reason, the two big ones are chess and checkers. So those are the two board games that are the most popular. There are other games people play. But those are the two big ones. And the following kind of weird thing has happened. There are families in which everybody plays chess, let's say. Families specialize. So if you're born in a chess family, then mom and dad play chess. And your older brothers and sisters play chess. 
And your uncles, and your aunts, and your ankles-- your ankles do not play chess. Your uncles, and your aunts, and your grandparents, they all play chess. And maybe nobody ever actually sits you down and explains the rules of chess to you. But you see everybody playing chess, and you eventually get to where you're pretty good at it. In fact, by the time you're old enough to go to school, you're very good at playing chess if you grow up in a chess family. And then there are the checkers families, which are the same except checkers. So everybody plays checkers. If you grew up in a checkers family, mom and dad play checkers. Older brothers and sisters play checkers. You watch them play checkers. You become a checkers expert by the time you're old enough to go to school. So there are chess kids and there are checkers kids. And then when you go to school, the following kind of weird thing happens. The teachers act as though there is no such game as chess. So when they see kids playing checkers, they're like, oh, yeah. And they watch the kids playing checkers. Maybe they make tactical suggestions or whatever. Maybe the teachers play checkers with each other. Maybe they play checkers with the kids sometimes. But when they see kids playing chess, they say things like, what are you doing? That's not the right way to play checkers. You just took that one piece and you moved it a zillion diagonal spaces. You can't do that. You're just violating all the rules of checkers. That's not how checkers works. And what is the deal with these weird checkers pieces-- this one that looks like a horse, and this one that looks like a castle? Put all that stuff away. Get yourself a real checkers board. And play checkers like a normal person. There are tests that kids get in school where they are asked questions like "Is it OK to take a piece and move it two steps forward and one step to the right?" 
And the kids are supposed to say, no, it's not OK to do that, because this is a checkers test, and in checkers you can't do that. So if you're from a checkers family, this is an easy test for you because, in fact, you can't make that move in checkers. If you're from a chess family, then it's a little harder because, of course, that's how you move your knight. That's one way you can move your knight. And so maybe you made that move just this morning playing with your mom or your dad. And so kids from chess families have a tendency to do badly on these tests, which means that when it comes time to put the kids in the checkers classes-- there are checkers classes in schools. There are no chess classes, only checkers classes. The kids from checkers families tend to be the ones who get into Advanced Checkers, and later they go on to AP Checkers where they get-- AUDIENCE: [LAUGHTER] NORVIN W. RICHARDS: --where they get college credit for their checkers classes. The kids from chess families-- it varies. Some of them learn quickly how to play checkers and suppress their knowledge of chess. But some of them have a hard time, and it's hard for them to switch that quickly. Every so often, you will get a teacher who's maybe a little more enlightened and admits that there is such a game as chess, and who will say to the kids, it's OK. The kids who are from chess families-- they'll say to them, it's OK for you to play chess in the privacy of your own home with your family. But it's important for you not to play it in public because people will get the wrong idea. It's really important for you to be good at checkers. Checkers is the really important game. That's what the teachers will say. And the teachers have a point because this is a version of America in which checkers is what people expect you to be good at. 
If you go for a job interview or in political campaigns-- presidential campaigns always involve a huge checkers tournament between the candidates, political correspondents watching the game and commenting on the moves. And if somebody tries to make a chess move in a checkers game, well, then, no one will vote for them. That's the metaphor. There. Now, if it's not yet clear what I'm talking about, hopefully it will be in a second. Are there any questions about the metaphor other than, what? [CHUCKLES] OK. AUDIENCE: [? Is it a ?] fact that [? humans ?] see that checkers is [? used ?] [INAUDIBLE] will be superior [INAUDIBLE]? NORVIN W. RICHARDS: Yeah. So you have to imagine that checkers is the superior game in this version of the story. I could have told the story the other way around. But I told it this way. AUDIENCE: [INAUDIBLE] NORVIN W. RICHARDS: Yeah. Yeah. Other questions? OK. So seemingly unrelated topic-- remember negative polarity items? Negative polarity items were these expressions that you used together with negation to say things like, "I didn't see anything." And what we said about them-- so "anything" is a negative polarity item. And to call it a negative polarity item is to say it's an expression with some restrictions on where it can be used. We talked a little bit about the different kinds of quantifiers that make it possible to use a negative polarity item. One of them is-- well, negation makes it possible to use a negative polarity item. That's how they get their name. But in a sentence like, "I saw anything," there's something wrong with that. Yeah? AUDIENCE: What about, "I will eat anything?" NORVIN W. RICHARDS: Ah! So negative polarity items often have another use. Let me find a piece of chalk. For some reason, this always happens. There are many erasers but no chalk. There's actually literally no chalk. Where are the-- what the heck? "I will eat anything." This is sometimes called "free choice 'any.'" And it's very interesting. 
Notice that it's also restricted. So "I saw anything" is actually no good. You can't use either free choice "any" or negative polarity "any." But you're right that there's another possible use of these kinds of things. Having admitted that you're right, let me just suppress that point. Yeah. You're absolutely right. There are complications. Yeah. Yeah? So, plenty of languages have negative polarity items. And maybe unsurprisingly if you are using-- if you're going to answer a question like, "What did you see?" in the language that I'm now speaking, "anything" is not a possible answer. We can be unsurprised by that in a couple of ways. One would be to say, well, yeah, this gets us back to the kinds of things we were talking about when we were talking about ellipsis. So if we want to think of this as an utterance that just consists of a noun phrase-- "anything"-- then it's unsurprising that it's bad. It's bad, well, because "anything" doesn't have any negation anywhere to make it happy. Yeah. That's what NPIs need-- Negative Polarity Items. Getting back to your point-- if the question was, "What will you eat?" a possible answer is "anything." You can also do the free choice version of "any." If we want to think about this as ellipsis-- that is, if we want to think of it as, I ask you "What did you see?" and you are in some sense saying, "I saw anything," well, that's bad because "I saw anything" is bad. There's no negation in the sentence. And I guess what we're learning-- maybe you remember-- when we were talking about ellipsis, we talked about the fact that when you do ellipsis-- when you say things like, "John likes his father, and Bill does too," there's some kind of requirement that the missing phrase-- the missing verb phrase, in this case-- be the same as another verb phrase, and we talked about the fact that you sometimes have to do some work to figure out what counts as the same. So we decided that this was an ambiguous sentence. 
"John likes his father, and Bill does too" can mean at least two things, possibly some others as well. It's unclear whose father we're talking about in this second clause. All right. So this could mean that John and Bill both like John's father, or that John likes John's father and Bill likes Bill's father. I guess it could also mean that they both like Seymour's father-- some other person. It could mean that. Yeah. So when we do ellipsis, we have something that's missing, that is understood as being the same as something that's there. And I guess what we're learning here is that if I ask you, "What did you see?", if we want to think of that as involving ellipsis, we want to think of ourselves as being required to fill in "I saw anything." We don't, for example, have the option of putting in "didn't see." And that's got something to do with the fact that the question doesn't have negation in it. It's a positive question. So a couple of ways of not being surprised by this fact. Is everybody sufficiently unsurprised? How do you answer a question like this? "What did you see?" Well, one way you can do it is by saying "Nothing." So we have another kind of expression that you use that's negative all by itself. It's not an NPI. It doesn't need to combine with negation. OK so far? I'm just describing facts about the language that I'm speaking-- not just the language that I'm speaking. There are plenty of languages like this out there. Here are some Greek data that are the same. So in Greek there's an expression, [GREEK], which is like "anything." Anybody here speak Greek? Cool. Then I won't mispronounce Greek at you. I'm afraid I don't speak Greek. [GREEK]-- that means "anything." And it combines with the Greek word for "not," which is "then." And so you can say in Greek, literally, "He or she didn't see anything," with "not"-- [GREEK]-- but [GREEK] can't be in a positive sentence. You can't say, "He or she saw anything." 
And before anybody asks, I don't know how to say things like, "I will eat anything" in Greek. I don't know whether they can do that here. Yeah. So this is just to show you when we were talking about NPIs, we were talking about English. But it's not just an English thing. It's cross-linguistically very common to have NPIs and for them to behave this way-- NPIs-- Negative Polarity Items. But there is another kind of language out there. Here's a Ukrainian sentence. In Ukrainian, if I ask you, "Who did you tell?" a possible response is this word, which I won't try to pronounce because I don't speak Ukrainian. We might as well translate it as "nobody" because it's a possible response that means "I didn't tell anybody." Anybody want to pronounce this word for us? Do you-- AUDIENCE: [UKRAINIAN] NORVIN W. RICHARDS: [UKRAINIAN]? OK. Cool. But Ukrainian is different from English in that the same word, [UKRAINIAN], can combine with their negation. So it's not an NPI. It can combine with their negation. And the result is a single negation. So if you want to say, "I didn't tell anybody," in Ukrainian, you say [UKRAINIAN]-- that same word that you would use to answer the question, "Who did you tell?" We could think of that word as meaning "nobody." But if you want to say, "I didn't tell anybody," in Ukrainian, you use that word. And you also use the word for "not." So you say, literally, "I didn't tell nobody." But they don't have-- so the word-- that doesn't mean-- so that means "I didn't tell anybody." That's what that means. Ukrainian is not the only language like this. Italian is like that. So in Italian, if I ask you, "What did you see?" you can say in Italian, "niente." So we can translate "niente" as "nothing." But if you want to say, "I didn't see anything," in Italian, you say, "Non ho visto niente"-- literally, "I have not seen nothing." "I have not seen 'niente.'" So this is a different system. 
In English, we have these-- the language I'm speaking, in Greek as well-- we have these expressions-- these negative polarity expressions-- which aren't possible answers to positive questions like, "What did you see?" If I ask you, "What did you see," you cannot say "anything." You have to say, "nothing." If we use that as a test for whether we're looking at an NPI or not-- as a test for figuring out whether we're looking at "anything" or "nothing," then Ukrainian and Italian and many other languages have a different system. One in which instead of having NPIs, you have these expressions which express negation all by themselves-- [UKRAINIAN] and [ITALIAN], which mean things like "nobody" or "nothing"-- that can be answers to questions like, "Who did you tell?" or "What did you see?" But in statements that are longer than that-- ones that don't involve ellipsis-- if you want to say things like "I didn't see anything" or "I didn't tell anybody," you use these same expressions. There isn't a specialized class of NPIs, at least not for this kind of sentence. Is that clear? Does that make sense? The second kind of language is sometimes called a negative concord language. And let me give you some background for that. Concord is a phenomenon which I think we haven't talked about in this class. There are plenty of languages out there in which, when you have a noun phrase that has multiple things in it-- say it has adjectives or demonstratives-- there will be morphology on the noun of the kind that we have talked about a little bit-- morphology that indicates things like, this is masculine, or this is feminine. It tells you what noun class the noun is in. Or it'll tell you whether the noun is singular or plural. Or sometimes there'll be morphology telling you the case of the noun, whether it's a subject or an object or something else. So Italian is a language, for example, in which there's concord-- the word for "butterfly" in Italian is "farfalla." 
And the "ah" at the end marks the butterfly as being singular and also feminine. It's in the feminine noun class. It's of the feminine gender. And to say that it's of the feminine gender is to say that if you want to modify this butterfly with adjectives-- a beautiful butterfly is a "farfalla bella." So you use the feminine marker on the adjective as well as on the noun. And if you want to say, "this beautiful butterfly," well, you use that "ah" again. So "this beautiful butterfly"-- you can see there are three "ahs," all of them saying over and over again, this butterfly is feminine and singular. You don't just realize that fact on the butterfly itself. You realize it all over the noun phrase. And if any of you have studied-- this is cross-linguistically a very popular phenomenon. It's all over the Indo-European languages. English doesn't have it because we're not big on morphology in general but especially not on our nouns. But it's widespread. If you study Italian, or Spanish, or French, or German, you'll have to learn about things like this. And it's found outside Indo-European as well. So there's a Lardil example here-- [LARDIL], "this big rock." This is an accusative big rock. It's the kind of phrase you'd use to say, "I picked up this big rock." It's an object. You're going to mark the rock with accusative, but also the word for "this" and the word for "big." So this is called concord. And there's an Icelandic example there which I took from a paper-- "four little snails," which are nominative. "Four" and "little" and "snails" all have suffixes on them telling you that the snails are nominative and masculine and plural. So Italian has concord for number and gender-- so feminine and singular. Lardil has concord just for case. Icelandic has concord for all three-- case, number, and gender. So concord-- it's a phenomenon. 
It's a phenomenon with the following shape-- sometimes a fact-- in this case, say the fact that a noun is feminine, or that it's singular, or that it's accusative-- is marked not just on the noun but on a bunch of things near the noun: so not just the noun, but also adjectives and demonstratives and other kinds of things. It's called concord. And to say that these languages are negative concord languages is to say, yeah, in a language like Italian, if you want to say, "I didn't see anything," you're going to mark negation not just in what's called sentential negation-- the Italian word for "not," which is "non"-- yes, you'll use that word. But you'll also mark it on the object. So you will say, in effect, "I not saw nothing." That's how you do this in Italian, or Ukrainian, or Russian, or a zillion other languages-- lots of languages that do this-- negative concord languages. Now, we have talked about English as though English has negative polarity items. And that is true of the version of English that I am speaking. But there are other versions of English in which there is, instead, negative concord. So there are versions of English in which the right way to say, "I didn't see anything," is "I didn't see nothing"-- that's the way you say this. Has anybody encountered this version of English? Did anybody grow up speaking this version of English? Yes? Yeah? Sort of. So I grew up in Alabama. I was surrounded by people who spoke this version of English. I grew up in Alabama in a university town. My parents are from further north, so I have always had the accent that I have. It made-- it was the beginning of a long life of being peculiar, surrounded by people who talk differently than me. And one of the things that I did was that I spoke negative polarity item English, not negative concord English. 
But we were given tests when I was a kid-- I noticed recently my son, who's in fifth grade, he's also given tests and homework assignments in which he is asked to fix problems with sentences. So sometimes the sentences-- the capitalization is wrong, or there's a word that's misspelled, or the punctuation is wrong. But sometimes the sentences have negative concord in them. And his job is to say, no, that's not the right way to speak English. You should speak-- you should have NPIs. You shouldn't have negative concord. I remember taking tests like that too. Those of you who grew up in English-speaking places, do you remember being given tests like this? So these are tests in which people are checking which dialect of English do you speak? Are you from a checkers family or a chess family? And if you're from a checkers family, these tests are easy for you. Because, well, checkers is what you play at home. Yeah, it's easy. But if you're from a chess family, then you have to remember, oh, yeah, I'm surrounded by people who don't believe in the existence of chess. They only believe in checkers. And so it's your job to try to pretend to be a speaker of this other dialect of English. To put it another way, if you're growing up in the States-- and I think this is not just true of the States. This is pretty common in a lot of places. There are a variety of ways to speak the majority language. And there's a certain set of them that are approved-- that schools approve of. And there are others that schools feel it's their job to try to steer you away from, even if it's, in fact, your native language. This is a tricky position for schools to be in. Does this ring true with your experiences in school? Do you remember doing this? There's this weird disconnect between different kinds of things that happen. Negative concord is the kind of thing that I was explicitly taught not to do. 
And when I was growing up in Alabama, I could see why they were teaching us not to do this because, well, I was surrounded by people who were doing it. And so you could see why the teachers felt as though they had to stamp it out. Like I said, it wasn't my dialect of English, so it was-- these tests were easier for me. There are other kinds of things that have not been targeted by education. And so they have a different status. This is another one. The percentage sign at the beginning of this sentence is a mark linguists sometimes use to indicate that some speakers accept this and others don't. It's what the percentage sign is for. So you guys have seen stars on things that say they're ungrammatical-- other kinds of marks-- the percentage sign says a certain percentage of English speakers can say this. So I grew up with this. It's called positive "anymore." This is a version of English in which you can say things like "I used to walk to work, but anymore, I take the T." Is there anybody here who can do this? Yes? Cool. AUDIENCE: My brother talks like that a lot. And I think it has to do with the fact that in Spanish, the words "ya" and "ya no" mean "anymore," or "already," or "now." And it can kind of mean "anything." NORVIN W. RICHARDS: Oh, that's interesting. AUDIENCE: Yeah. NORVIN W. RICHARDS: OK. Cool. Is there anybody else who can do this? Is there anybody who is looking at this and wondering if I suffered some kind of injury on the way to class today, and this can't possibly-- yeah. OK. So I often-- when I get to this slide in this lecture, I get these kinds of looks. They're like, what? So I don't come from a Spanish-speaking background. I come from a West Virginia background. So my mother is from West Virginia. And positive "anymore" is a feature of the area around West Virginia and Pennsylvania. There's this place where, if you go there, you will hear people saying things like this. It means something like, "these days." 
"Anymore"-- so "Anymore I take the T," means, "as of now, I take the T." For me, at least-- I was doing some googling about it. It's most common in places like this where you're contrasting the modern day with the past. Yeah. So here's another place where my English is odd. And I'm very much in the minority. So there's me, and there's Faith sort of, and Faith's brother. Yeah. And the rest of you have me outvoted. But this particular feature of a certain regional version of English is somehow not on the radar of the people who write textbooks. So I don't remember any teacher ever telling me not to do this. I think people didn't notice that I was doing this. I got away with it. But, yes, this is a feature of some versions of English. Or, similarly, we've talked about other things in this class. So people vary about the extent to which there are phonological differences between different versions of English. People vary with respect to whether the name "Mary," and the adjective "merry," and the verb "marry" sound the same. To me, they do all sound the same. But there are people for whom the verb, in particular, is "maeh-ry"-- a different vowel. And I believe there are even people for whom all three of them have different pronunciations. This is a feature of the area around New York and New Jersey, whether you speak like this or not. I grew up in Alabama where "pen" and "pin" sound pretty similar. They're both like "pin." I grew up in Alabama where the default word for a soft drink is a Coke. So a fizzy drink with lots of sugar in it-- I think-- now, let's see-- I live in a soda place now, right? Don't-- aren't we-- this is the soda part of the world? And there's a pop part of the world. Who here grew up saying soda for those kinds of drinks? Did anybody grow up saying pop? AUDIENCE: My mom did-- NORVIN W. RICHARDS: Your mom did. AUDIENCE: --when she was a girl. NORVIN W. RICHARDS: And did she-- she survived to adulthood, didn't she? She's OK. AUDIENCE: Yeah. NORVIN W. 
RICHARDS: Yes. OK. Cool. AUDIENCE: [LAUGHTER] NORVIN W. RICHARDS: And did anybody grow up saying Coke for these kinds of things? AUDIENCE: Yeah. NORVIN W. RICHARDS: Oh, well, yeah, Peter did because Peter is like me. He's from the South. Where are you from? AUDIENCE: Mississippi. NORVIN W. RICHARDS: Yeah. OK. So Southerners-- represent. So in my version of English-- the version of English I grew up with, "Coke" was the default name for all of these things. It's kind of like "Xerox" for a copying machine or whatever. So people would ask questions like, "Would you like a Coke?" And if the answer was, "yes," the next question was, "OK. So would you like Sprite or Mountain Dew? Because that's what we've got." So "Coke" is just the default word for these kinds of beverages. Or-- and this is a place-- another place where languages or dialects of English vary a lot with each other-- English used to have a distinction between singular "you," which was pronounced "thou," and plural "you," which was pronounced "you." And the plural "you" was also used for formal address. And then it kind of took over. So it got used both for singular and plural. And modern standard English-- the version of English that I'm currently speaking-- doesn't distinguish "you" singular and "you" plural. But it's handy being able to distinguish "you" singular and "you" plural. And so various dialects of English have come up with solutions for this. I come from the "y'all" part of the country. I used "y'all" until I came up here, and then I was mocked into coming up with other expressions. I seem to have fallen into "you guys" as the plural of you, which I wish I didn't have because "you guys" I think of as being markedly masculine. So I need to-- I don't know-- embark on a series of self-hypnosis sessions or something like that and get myself back to using "y'all." And then, there is, again, an area-- I think around Pittsburgh-- where people say "y'unz" as the-- or "you'ns" as the plural of you. 
Did anybody here grow up with an interesting plural for you? Yeah? AUDIENCE: Yeah. So, basically, when I lived in Minnesota, I grew up in a very small town with a very, very high Dutch population where a lot of people said "yous". NORVIN W. RICHARDS: "Yous"-- oh, yeah. I left "yous" out of here. Yeah, you're absolutely right. So "yous" is absolutely another option. Anybody else have another option? "Y'all"-- I think "y'all" is still markedly Southern, right? If I were to say, "y'all," people would think of me as being from Alabama, which I am. Raquel? AUDIENCE: I've heard my mom and my uncle, who are both from Pittsburgh, saying "yenses" but like as a joke. NORVIN W. RICHARDS: "Yenses?" AUDIENCE: I don't know if it's because they have heard people say "yenses." NORVIN W. RICHARDS: Wow. AUDIENCE: It might have just been like a possessive thing, like "Yenses family is here" or something. NORVIN W. RICHARDS: Oh, oh, OK. Yes. That I could kind of get a handle on. I thought of the plural as being "yens," but maybe there's also "yenses." This is a thing that happens, actually, that something that was already plural gets marked plural again. This is the story of "children": the plural of "child" used to be "childer." And then "-er" became very uncommon-- in fact, "childer" was the only noun that had "-er" for its plural. And so people just added another plural ending, the one that you get in "oxen." And so now it's like "childses." That's "children." That's our modern plural of "child." So the German plural for child, "Kinder," still has that "-er"-- that "er" suffix, [? single. ?]
And then there are failures to speak the correct version of English. Like, that's the picture you're given in English classes. So you're given assignments like the one I was just talking about for my son, where it's like, fix the problems in these sentences. And some of the problems are things like, oh, this person apparently speaks a different dialect of English. We can't have that. That's wrong. We've got to fix that. Is that the picture of English that you guys grew up with as well? And so just to be clear, I mean, if we were going to figure out what counts as English, these descriptions are things that people talk about. Maybe-- and so from your reactions, I think maybe nobody had exposed most of you to positive "anymore." So don't say you didn't learn anything in this class. Positive "anymore"-- it exists. It's a thing. But these kinds of things are things that people talk about-- about there being variations in English. And people's reactions to these variations vary. So I think, probably, I never got an assignment in English class growing up in Alabama in which somebody said, fix the problem with this sentence, and the problem was, "Would you like a Coke?" "Yes, I'd like a Sprite." Nobody said, "Oh, this is wrong. You should change 'Coke' to 'soda' or 'pop.'" When I was growing up in Alabama, when we wanted to mock people from the North-- and it was a thing we did sometimes-- we would use words like "pop" and "soda." We thought it was funny that y'all talked this way. [COUGH] Excuse me. So there are some of these kinds of things that are not the subject of deliberate repression by the educational system. But there are others that are, like negative concord. So there are dialects of English that have negative concord-- a dialect of English in which you say, "I didn't see nothing." "I haven't done nothing." And those dialects are the subject of deliberate repression by teachers. So teachers attempt to stop you from doing that.
And if you've ever heard speakers of-- I have some websites coming up on the next slide. I'll put these links up on the website. There are people who self-describe as speakers of English who I, at least, have a very difficult time understanding. So one of the websites I'll show you is to the Scots Dialect Atlas. There's this wonderful group of linguists who are going around various parts of Scotland chronicling different aspects of regional speech in various places. And these people are not speaking a Celtic language. They're not speaking Gaelic. They would describe themselves as speaking English. But, boy, you can't tell [LAUGHS] listening to some of these recordings. It's very opaque. When we-- I had a-- when I was in grad school, there was a guy in the class above my-- a really excellent linguist named Andrew Carnie, who's now teaching at University of Arizona. And he grew up in Canada, but his parents were Scottish. And they came down when he graduated. And I was sitting around with them attempting to make small talk. And I could just barely understand his mother. But his father could have been speaking Hebrew. I mean, I just-- I had no idea. And I can remember doing a lot of nodding and smiling. It was just opaque. I couldn't understand this at all. But this is a guy who would have said that he was speaking English. So-- yeah, and so here are some websites-- one to a group at Yale that's doing a study of dialect variation within English. And this other is the Scots Syntax Atlas. I'll put links to both of these up in the chat. So, I've been talking about English because here we are in America. And I had this metaphor and everything. But this is a pretty common phenomenon. There are lots of languages where there is regional variation in the way that language is spoken. 
And what we do in America about this-- where you fixate on particular properties of particular dialects and attempt to wipe them out, things like negative concord-- attempt, I should say, completely unsuccessfully. So negative concord has been around in English for centuries. It's still around. It's not clear that English teachers have ever had any effect on it. But they have succeeded in-- probably not just them, but society kind of generally, have succeeded in making negative concord stigmatized. So we're all taught that negative concord is illogical. Two negations make a positive. Is that something you were all taught at some point? So the next time someone tells you that negative-- if you ever in casual conversation have somebody tell you that negative concord is illogical and that people who say, "I didn't see nothing," they just can't think straight-- they're not logical-- you should point them to Ukrainian and Italian. There are plenty of languages out there where this is what you do. It's just that there's a dialect of English that's like them. So-- but this is a question that comes up when people are doing things like trying to count languages, for example. So if you're trying to figure out how many languages are spoken in the world, or if you're trying to make policy decisions about how many languages are spoken in the country, or how many languages a government document ought to be translated into, for example, you have to make decisions about how many languages there are-- where to draw lines. What counts as English? What counts as Japanese or German? How do you know if two people speak the same language? What do you think? What's the answer to that question? What should we do? Are people clear on what the question is? So what's-- how do you decide if two people are speaking the same language or not? Yeah?
AUDIENCE: My first guess is if two speakers-- if it's mutually intelligible, if one can understand the other when they talk and vice versa, then they speak the same language. NORVIN W. RICHARDS: That's a popular answer. Are there-- yeah? AUDIENCE: As a counterargument, I had a friend in middle school who I spoke German to, and he spoke Italian back to me. And I knew a little bit of Spanish, and he knew a little bit of Polish or something, so we could kind of understand each other. NORVIN W. RICHARDS: Mm-hmm. AUDIENCE: [? Even ?] [? though ?] [? it ?] [? wasn't ?] in the same language. NORVIN W. RICHARDS: Yes. [LAUGHTER] That is an awesome story. [LAUGHS] Yes? AUDIENCE: Yeah. If I run into someone who speaks Portuguese, I can get a sense of what they're saying. I don't speak Portuguese, but I do speak Spanish. NORVIN W. RICHARDS: Mm-hmm. Yeah. So this is-- I was going to use Spanish and Portuguese in just a second. What I had heard about Spanish and Portuguese was-- and I heard this from Portuguese speakers, so I don't know whether it's true. Portuguese speakers believe that Portuguese speakers are better at understanding Spanish than Spanish speakers are at understanding Portuguese. I don't know if this is true or not. Raquel? AUDIENCE: My native speaker is a speaker of Portuguese. And she said that, like, exactly. NORVIN W. RICHARDS: Yeah, yeah. That's the kind of thing Portuguese speakers say. There are all kinds of reasons that it could be true. Portuguese has undergone various sound changes that have eliminated sounds that are present in Spanish. So I can-- I don't know very much about either of these languages. But for example, the Spanish word for "hot" is "caliente." And the Portuguese word for that-- I'll spell it in IPA. It's something like [PORTUGUESE]. So that's Brazilian Portuguese for "hot." And what's happened is, first of all, the A has palatalized the T, so it's become a "chuh," and you've also lost the final E. So T has become "chuh."
And the final E is gone. And the intervocalic L is gone. That's another thing that happened in the course of the evolution of Portuguese. So-- and then some other fancy things happened to the vowels. So we ended up from "caliente" to [PORTUGUESE]. Actually, Portuguese isn't descended from Spanish. These are both from Latin-- "calentem." And then like that, it became both of these forms. Spanish was much more conservative. And Portuguese has done a lot of things including getting rid of sounds that are still present in Spanish. And it's probably easier to ignore sounds that are there-- which is what Portuguese speakers have to do if they want to understand Spanish-- than it is to figure out what sounds would have been there if they were not gone, which is what Spanish speakers have to do when they're trying to understand Portuguese. That's one theory anyway about what's going on. Another is just that there are more situations in which Portuguese speakers need to understand Spanish than there are situations in which Spanish speakers need to understand Portuguese. Yeah. So that's the kind of thing that may-- I don't want to pick on Kateryna's proposal, which is a popular proposal. It's about mutual intelligibility. But there are places where the intelligibility is kind of one way, or where, in fact, people who don't understand-- don't have a common language can speak to each other through the magic of interpretive dance. Yeah. Yeah. That's a thing that happens. There are a bunch of other examples here. There's a popular linguist's response to that question-- how do you know when you're looking at two languages as opposed to-- it's often phrased as, when do two things count as two languages as opposed to two dialects of a single language? And there's a popular linguistic response to that which says, you should really try to stop asking that question. So here are two classic statements from linguists about that.
One of them is by the great Yiddish scholar, Max Weinreich, who said-- he said it in Yiddish. "A shprakh iz a dyalekt mit an armey un flot," "A language is a dialect with an army and a navy." So what he meant was we don't draw distinctions like is this a language or a dialect on the basis of anything linguistic. It's not about how different they are linguistically. It's about political power, whether they are spoken in different countries or not. So lots of kinds of cases, including the Portuguese, Spanish example that we talked about before. Danish, Norwegian, and Swedish-- anybody speak any of those languages? So I'm not making this up. I promise. I was once on-- I was in Norway flying to a conference. I was flying from Southern Norway up to Northern Norway up to Tromso. I was sitting next to a Norwegian linguist. And we were flying Norwegian Air. And various people were making announcements over the-- so the captain came on and made an announcement in Norwegian. A plane full of Norwegians-- all of them are understanding. The copilot came on and made an announcement in Norwegian. And then somebody else came on. And I forget-- the purser or something came on and made an announcement. And the Norwegian linguist who was sitting next to me said, "He's speaking Danish." Because there's no reason not to, apparently. If you're speaking Danish to a bunch of Norwegians on Norwegian Air, ah, they'll figure it out. AUDIENCE: [LAUGHTER] NORVIN W. RICHARDS: So we call them Norwegian and Danish because they're spoken in Norway and Denmark. And they are different from each other. Actually, Danish is more different from Norwegian and Swedish than Norwegian and Swedish are from each other. Norwegian and Swedish are quite similar in lots of ways. Danish is a little bit different. But, as I say, close enough that on Norwegian Air, you can announce things in Danish. 
It probably helps that if you're announcing something on a plane, probably the passengers know what you're saying, usually, unless there's something unusual happening. There are examples of the other kind, too. I've got Mandarin and Cantonese up here. I encounter this a lot. I work on Tagalog sometimes, which is a language spoken in the Philippines. The Philippines is a country with many, many languages-- lots and lots of related languages. But they are very different from each other in lots of ways. So I speak Tagalog, which is the language spoken around Manila. If I'm listening to somebody speaking Cebuano, or Ilocano, or Hiligaynon, or any of a zillion other languages that are spoken in the Philippines, I cannot understand them. I mean, they're at least as different as Spanish and Italian. You can tell they're related but quite different from each other. But Filipinos are used to calling them dialects. And I've had arguments with Filipinos in which I will ask one who's from some area of the Philippines far away from Manila-- I'll say, "What language do you speak?" And they'll say, "Oh, I speak a dialect of Filipino." And I'll be like, "Which dialect?" And he'll say, "Cebuano," which is OK-- related to Tagalog but not mutually intelligible with it. There's something similar going on with Mandarin and Cantonese. We tend to call those dialects of Chinese because they're both spoken in China. But-- and they do use-- it's a beauty of the Chinese writing system that if you write them down, they look very similar. But if you are speaking Cantonese to someone who only understands Mandarin, then you will have a hard time getting anything across is my understanding. They're different enough that-- they have different tone systems. They have different phonologies. The Mandarin word for "I," the first-person singular pronoun, is [MANDARIN]. In Cantonese, it's something like [CANTONESE]. They're just different. They're different languages. Sorry. Yeah? AUDIENCE: Wait.
Which one is [INAUDIBLE]? NORVIN W. RICHARDS: In Mandarin, I think it's [MANDARIN].. AUDIENCE: Yes. NORVIN W. RICHARDS: Yeah. And in Cantonese, I believe it's [CANTONESE]---- just different, different [INAUDIBLE].. AUDIENCE: [INAUDIBLE] [? vowels ?] [INAUDIBLE] NORVIN W. RICHARDS: Oh, it's-- AUDIENCE: [INAUDIBLE] NORVIN W. RICHARDS: Do you-- AUDIENCE: Yeah. NORVIN W. RICHARDS: What is it? AUDIENCE: [INAUDIBLE] NORVIN W. RICHARDS: [INAUDIBLE]. AUDIENCE: Like, the vowel is pretty similar. NORVIN W. RICHARDS: Oh, OK. AUDIENCE: But that first consonant-- NORVIN W. RICHARDS: The consonant is different. OK. Cool. Thanks. Yeah? AUDIENCE: I think it's interesting that you brought up Spanish and Italian. NORVIN W. RICHARDS: Yeah? AUDIENCE: Because one time I gave a story in Italian to my mom, and she understood it 100% without even trying. I was like, how do you know that? It's just like Spanish. NORVIN W. RICHARDS: Oh, yeah. Cool. So your mom is clearly linguistically gifted. AUDIENCE: [LAUGHS] NORVIN W. RICHARDS: I think if someone had-- you gave it to her written down? AUDIENCE: No. I just read it out loud. NORVIN W. RICHARDS: Oh, you read it out loud. Wow. So, wait, do you speak Italian? AUDIENCE: No. But I was just pronouncing-- NORVIN W. RICHARDS: You were pronouncing it out loud. Cool. No, that's awesome. Neat. I have had the opposite experience. I don't speak Spanish at all. I speak a little bit of Italian. And so I have attempted to speak in Italian to Spanish speakers, to which the response is always, "I don't speak English." AUDIENCE: [LAUGHTER] NORVIN W. RICHARDS: OK. Sorry. My Italian is not all that good. Maybe that was the problem. Yeah? AUDIENCE: So, aside from Mandarin and Cantonese [? being-- ?] most dialects in China, [INAUDIBLE] [? they ?] [? still use ?] the same writing systems. NORVIN W. RICHARDS: Yes. AUDIENCE: [INAUDIBLE] some sort of [INAUDIBLE] system. So [? could ?] 
I say that having almost the same [INAUDIBLE]-- does that count as [INAUDIBLE] or [INAUDIBLE] NORVIN W. RICHARDS: Well, so I wonder-- I mean, so Engl-- if I understood what you said correctly, there are a couple of different-- I don't know if this is what you're talking about. There are a couple of variations on the Chinese writing system. I've heard them called the traditional and the simplified characters. Is that what you're talking about, where you have slightly different ways of writing particular characters? Is that what you mean when you say Mandarin and Cantonese are using different-- AUDIENCE: No. It's just when they already know [INAUDIBLE] They're most definitely Chinese [? culture. ?] NORVIN W. RICHARDS: Yeah. AUDIENCE: But I can read them, actually. NORVIN W. RICHARDS: Oh, that's interesting. OK. Yeah. So I don't know why that would be. So look-- the different-- whatever we're going to call them within China because it's not just Mandarin and Cantonese. You're absolutely right. There are many of them. They're all related to each other. They're members of a language family-- the Sino-Tibetan language family, it's sometimes called. And my understanding is that most of them have a lot in common syntactically, like the basic word order is the same. And they have [INAUDIBLE], and they have a lot of properties in common. There's a lot of careful work on the syntax of, for example, Mandarin and Cantonese showing that they're not exactly the same. There are lots of interesting distinctions between them. So, for example, Mandarin will allow you-- Mandarin has what are called numeral classifiers. So if you want to count things, there's a morpheme that goes between the number and the thing that you are counting which tells you something about the nature of what you're counting. So if you want to say three books, there's a morpheme that goes between three and books that clarifies that it is a book you are talking about.
And you'll use that for anything book-like. If you're talking about three pens, there's another classifier which is used for things that are long and thin, things like that. Mandarin and Cantonese both have those. Cantonese is unlike Mandarin in that it can use numeral classifiers even when there is no number. So it can just start a noun phrase with a numeral classifier. So there are those smallish differences between these approaches. Sorry. You're raising your hand. [INAUDIBLE] AUDIENCE: No. I just-- I was going to say something. Sorry. NORVIN W. RICHARDS: Yeah. AUDIENCE: Never mind. NORVIN W. RICHARDS: Oh, OK. There are those kinds of differences between Mandarin and Cantonese, but they have many properties in common. And I think this is probably true of a lot of the, again, dialects or languages-- whatever we're going to call them-- within China that, you know-- [INAUDIBLE] or [INAUDIBLE] or whatever. There are all of these different versions. They have lots of things in common. It's a property of the Chinese writing system that if you write a character down, it doesn't matter what language you're speaking. So for that matter, if I were to write English using Chinese characters, someone who didn't speak English but could read Chinese characters would probably be able to figure out what I had written down. So if I use this symbol-- this is the Chinese character for a person. I apologize. My handwriting is bad in every language. But it's something like this. That's got a pronunciation in Mandarin that's something like [MANDARIN]. If I decided to write English using Chinese characters, I would use this character for the word "person." And it would be your job to read it as "person." And you'd-- if you-- again, if you knew Chinese characters but couldn't-- didn't speak any English, you'd be able to read that. Not because you knew English, but because you knew these characters. 
And when Mandarin and Cantonese speakers-- if you have a monolingual Mandarin speaker and a monolingual Cantonese speaker, and they're both writing things down, if they do use the same characters, they'll be able to read-- each will be able to read what the other has written. But it's not-- that doesn't have anything to do with whether they speak the same language or not. It's a property of the Chinese writing system. You're writing down not sounds but words in a sense. Does that make sense? It's a very long, involved response. Yeah? OK. Yeah. So various examples-- Serbo-Croatian is another example. Down here you have to be careful-- there used to be a language that was called Serbo-Croatian before the breakup of the former Yugoslavia. There are now people who are very fierce about the fact that they are distinct languages-- Serbian and Croatian. They are, again, written with different writing systems-- actually, different alphabets. And there are differences in their grammars. But the fact that people are fierce about them being different languages has to do with where national borderlines have been drawn and other things about history. So a popular linguist's response to this question, how do you know when you're looking at two languages or two dialects of a single language, is to say-- I think I did this in class-- "mu!" Yeah. Don't ask that question! That question contains some kind of false presupposition. Chomsky in '86, in a book called Knowledge of Language, laid out a way of thinking about this that I have always thought made a lot of sense. He said, look, there are-- and his claim was there aren't actually such things as languages or dialects. What there are are people. So there are people. We can agree that there are people.
And people have mental grammars-- all the stuff we've been talking about all semester, all the-- if we knew everything about what was going on in your brain as it interacted with language-- if we knew everything about your lexicon and the phonological and syntactic and semantic status of every representation that you were messing with, and we had a perfect description of everything that was going on in your head-- imagine that we had that-- we can call that your I-language, call it your individual language. So that's your mental state. And now, Chomsky said, it is a fact about the way that people tend to interact with each other, and the way that people grow up and learn to speak, and the way that people interact with each other as adults that we tend to end up growing up in domains where there are lots of people with pretty similar I-languages. And that's what gives us the illusion of this other kind of thing. He called it an E-language, an external language-- things like English, or Mandarin, or Japanese, or French, or whatever-- these objects that are abstractions over many people's I-languages. And it's OK to talk this way as a loose way of talking. This is the way we're used to talking about languages. But we should bear in mind, he was saying, that in the final analysis, these things don't exist. They're just generalizations over lots of different I-languages. So if we knew everything about the I-languages of-- let's stick to every native speaker of English in this room. If we looked at each of us in turn, we'd discover that we have lots and lots of things in common. But that there are small differences between them. I'm apparently the only one with positive "anymore," maybe Faith too. I and a couple of other people have "y'all" as a second person plural pronoun. So there are these little differences between pairs of us. And some of us are "soda" people. Some of us are "pop" people. Some of us are "Coke" people-- little differences between pairs of us. 
And that's all we really need to say. So what we have are a bunch of people who have pretty similar I-languages. And if we work too hard to try to figure out, well, which of those languages is really English and which of them are deviations from English, then we're, Chomsky was saying, asking ourselves the wrong question. Trying to decide whether two people speak the same language or not, it's like trying to decide on a fall day whether two leaves are the same color. So you could look at two leaves, and say, yeah, those are both red. But if you looked hard enough at them, you'd convince yourself they're not exactly the same. One of them is a little deeper. One of them's a little browner. So what's the right answer? Well, there isn't a right answer. It's just a question of how fine-grained do you want your analysis to be? That was Chomsky's way of talking about this. I bring this up because it short-circuits a lot of ways people have had of talking about dialects. So I started off with this metaphor about chess and checkers and the teachers who pretended that there was no such game as chess. And I've already talked a little bit about ways in which this metaphor has correspondences in real life. So there are dialects-- negative concord dialects-- that are the subject of deliberate suppression by the education system. And this has consequences for people who grow up speaking these dialects. There are dialects that have been particularly attacked which go by various names-- sometimes called African American English. It's also been called African American Vernacular English. Decades ago, it was once called Ebonics. Does anyone still use this term? Has anyone heard the term Ebonics used for African American English? It was an attempt to make African American English sound different-- make it sound like something you might want to study. When people first-- so people have been talking about this version of English for a long time.
And a lot of the discussion has been like the teachers that I was asking you to think about in the metaphor-- the ones who said, what are you doing? That's not how you play checkers. You're just moving the pieces around randomly. Stop it and learn to play a game with some rules. So the people who talk about African American English often make a point of saying that it's a version of English-- this is William Raspberry, who's African-American, I believe, saying it's a version of English that has no rules. There's a lot of really interesting recent and not-so-recent research on African American English, some of it by African-American linguists. There's a really excellent series of works by a linguist, Lisa Green, who I recommend that you look up if you get interested in these topics. It has a number of properties, which because we're running out of time, we probably won't get a chance to talk about all of them. But I'll run through a couple of them. First of all, to talk about it as a monolithic entity is an oversimplification. There are a lot of varieties of African American English, just as there are of non-African American English. But it's characterized by negative concord, which is one of the reasons that negative concord is picked out for abuse. It has a richer tense and aspect system than the version of English that I'm speaking. So a version of the English that I'm speaking would use the same kind of verb to express these things. So I would say for both of these, "They are usually tired when they come home," and "They are tired right now." African American English distinguishes these. So you use "be" for the habitual one, the first one, and nothing-- so "They tired right now," for the second one. Again, is anybody familiar with this dialect of English? Is this something any of you have grown up around? Have you heard this? Have you heard people saying things like, "They tired right now?" Yeah. So I certainly have growing up.
And, at the time, I believed and was taught to believe that this was because these speakers were, you know, they were lazy. They were leaving out words that they should have said. What they were actually doing was expressing an aspectual distinction that my version of English doesn't express. This is a distinction that's also expressed in Spanish. It's a distinction between what's sometimes called stage level and individual level properties. So individual level properties are properties that hold of someone over a long period of time, so use "ser" in Spanish for-- to express individual level properties like "I'm North American." "Estar" is used for temporary properties like being tired. That's a distinction that my version of English doesn't make, but African American English does. It's also got, famously, a rule-governed version of copula drop. So I do want to talk about this in a little bit of detail. So in the context in which you can leave out the copula-- when the aspectual distinctions allow you to use the null copula in African American English, there are rules about where you can do it. So, for example, you can do it in a sentence like "He rich." But you can't do it at the end of a sentence. So you can't ask-- you can't say things like, "I don't know how rich he," in this version of English. You also can't do it in infinitives. And you can't do it in the past tense. So the past tense is "He was rich." It's not, "He rich." And you don't do it when there's negation. So there is this phenomenon of copula drop if you're describing a temporary property. But it's copula drop that's subject to these rules which I've just gone through real fast. It's not just say whatever you want. There are these restrictions which linguists have studied and figured out. These particular distinctions are of interest because they have a correspondent in the version of English that I am speaking, which is sometimes called standard English. 
So standard English has a phenomenon where we can contract various kinds of things including the copula. So you can say, "He is rich." But you can also say, "He's rich." And this is also subject to restrictions. So you can't, for example, do it at the end of a sentence. You can't say-- you can say, "I don't know how rich he is." You can't say, "I don't know how rich he's." And I'm about to go through all of the context that we did on the last slide, but what we're going to see is that what African American English is doing is dropping copulas under just the circumstances where my version of English-- the version of English that I'm speaking-- can contract copulas. And that's it. So they're not being lazy or stupid. They're not failing to say a word that they ought to say. They just have a version of a contraction that's a little more emphatic than the version that I use. They don't just get rid of the vowel. They get rid of the whole thing. But the restrictions on where you can do this are the same restrictions. So just like in my version of English, you can't say "He'sn't rich," doing contraction both of the copula and of negation, similarly, you can't do copula drop there in African American English. Did anybody get a chance to look at the optional reading for today? I put up an optional reading. And as I put up the optional reading, I thought, here I am putting up an optional reading for a one-day topic on the day when a paper is due. What are the odds that anyone will have a chance to even consider thinking about actually even reading the announcement that there is an optional reading? These people-- it's the end of the semester. The optional reading is still on the website. It's worth having a look at sometime when you have time. I can just describe for you what it is. It's an African-American author who is addressing arguments about how African American English ought to be treated in schools. And he has a very sophisticated set of arguments. 
He's replying to various people who have argued the conservative position, which is that it's fine for people to speak African American English in the privacy of their own homes, but it is important that we teach them standard English in schools in order for them to get ahead in life. He's taking issue with these arguments. But the cool thing about the essay is not just that he takes issue with these arguments, but that it's written in African American English. So the essay is written in the version of English that he is describing. And it's a cool and difficult experience attempting to read it. So if it's not your native dialect of English, it's not trivial figuring out how to read the article. So it's an interesting experience. I recommend it to everybody. Questions about any of this? Other things people want to talk about? So the goal-- and we are now in the part of the semester where I attempt as a public service to combat various things people believe about language or maybe have heard about language. So the goal of today was to get you to at least hear the idea that when people speak English or whatever your native language is in a way that's different from the way that you speak it, it's not because they're stupid. It's because languages do that. There are varieties of languages. Maybe it's because there aren't actually such things as languages. There are just people. And people are different from each other. Yeah? All right. Go forth and frolic. Thank you for listening to me talk about this. And I will see you again on Thursday.
MIT_24900_Introduction_to_Linguistics_Spring_2022 / Lecture_25_Language_Acquisition.txt

[SQUEAKING] [RUSTLING] [CLICKING] NORVIN RICHARDS: Early on in this class, I think, I gave you examples-- it might have exactly been these examples. I meant to look back at the old slides to see. But it was something with this character anyway. I invited you to tell me-- if I ask you a question like "Who did Biden ask to see him?" I asked you, who's "him"? And I think, at the time-- who knows, we could do this exercise again and find out whether your minds have been warped by a semester of 24.900. I think, at the time, what you said was that "him" could refer to Biden. It was possible for this to mean "who"'s the person such that Biden asked that person to come to his, that is Biden's, office and see him, right? It could mean that. And then so it could also refer to someone else, of course. But it could refer to Biden. We weren't doing this at the time, at the beginning of the class. But we can represent that with these subscripts, since we've since talked about binding theory. This is the kind of fact we're in a position to talk about. And then what we also did at the beginning of class was to think also about questions like "Why did Biden ask to see him?" And what you all agreed at the time-- and feel free to jump in and stop me if you-- oh, Faith's about to stop me. AUDIENCE: I was just going to be like, I feel like now with everything we've learned about wh-movement, it feels wrong because you're moving something out of an island, I think. But you're keeping it there at the same time. Like "him," I feel like it was more appropriate to omit "him" because you're moving the "who" anyway. NORVIN RICHARDS: Oh, the first one? "Who did Biden ask to see him?" Oh, so in the first one, "Who did Biden ask to see him?" is supposed-- the answer is supposed to be the-- whatever, the head of the Joint Chiefs of Staff.
He asked the head of the Joint Chiefs of Staff to come and see him, Biden. So Biden's worried about something that he wants that guy's advice on. And so he asked him to come see him. I think that's an OK question that you can ask that way. Yes, so you have been warped, you specifically, by a semester of 24.900. You're starting to fear islands everywhere, which is good. It's an important part of your training as a syntactician, yeah. Cool. So the first one then, "him" can refer to Biden, but it doesn't have to. For the second one, I think what people said at the time was that it wasn't possible for "him" to refer to Biden anymore. So "Why did Biden ask to see him?" That has to be, "Why did Biden ask to see some person, some particular person, but not Biden?" Do people still have that feeling? It's been a whole semester of suffering through this stuff. No? Yes, many of you do. Some of you, it's too early in the morning to have feelings about binding theory. You don't have to have feelings exactly. I'll settle for opinions. So I think there's this contrast between these sentences. And you know what I said at the beginning of class was look, these sentences, they look pretty darn similar-- they only vary in, like, one vowel, right? It's the difference between "who" and "why." And yet, you make that small change to the first word in the sentence and the last word in the sentence changes who it can refer to. Pretty complicated array of facts. And I think what I also promised, incautiously, was that by the end of class, you would understand why it worked this way. And so I just wanted to stop and talk about why it worked this way, partly as a way of segueing into what we're going to talk about today, which is language acquisition. So why does it work this way? So first, as Faith just said, one of the things we know about the first example is that "who" is moving, it's starting off as the object of "ask." So "Biden asked the chairman to see him." 
"The chairman" is the object of "ask." Instead of "the chairman," you've got "who." And "who" is undergoing wh-movement. "Why" is presumably also undergoing wh-movement. Let's not worry about where it's coming from. And the other thing that maybe you're in a position to believe by now-- we talked about this when we were doing binding theory-- was that these infinitival clauses, although they don't appear to have subjects, that they actually do have subjects. Their subjects are just null. We were calling it "PRO." It's this null pronoun that refers back, in both cases, to something in the higher clause. So in the first example, "Who did Biden ask to see him?" We consider the non-wh-moved version of this-- "Biden asked Harris PRO to see him." Here, PRO refers to Harris, right? This means Biden asked Harris to do the following thing. He wanted Harris to come see him. Does that sound right? That's what that means. So that PRO is an instance of what we called object control. PRO is referring back to an object in the higher clause. Yeah, Joseph? AUDIENCE: Sorry. NORVIN RICHARDS: It's OK. AUDIENCE: PRO in to satisfy the EPP. NORVIN RICHARDS: Yeah, that's right. That's what it was for. It satisfies the EPP. C needs a subject and doesn't appear to have one. And the claim back then was it really does have one. We went through this idea very quickly. One of the things we discovered about it was that it made the statement of binding theory simpler. It made it easier to understand why pronouns and anaphores inside infinitives act the way they do. And this is, in a way, an instance of that. So this PRO is referring back-- I'll give it the same subscript that it's got up there. If we change "Harris" to "who" and move it-- "Who did Biden ask to see him?" This PRO is controlled-- we say it refers back to the object of the higher clause. In the second example, the higher verb, "ask," doesn't have an object. It doesn't have to have an object. 
So you can say "Biden asks Harris to see him." Can also say "Biden asked--" this is with no object-- "to see him." And then the PRO gets controlled by the subject. So "ask," when it has an object, it's what's called an object control verb. The object of "ask" controls PRO. When it doesn't have an object, it's the subject. I think I said when we very briefly talked about this, it sure would be nice to understand why a verb-- what it is that conditions, which thing controls PRO. That's a topic of serious research. People try to figure out what the rules for that are. And it's not simple. So yeah, this is a way of representing what's going on in these sentences. Those embedded clauses have a subject, a subject you can't see, that refers back either to the object of "ask," in the first example, or to the subject of "ask," Biden, in the second example. This has been review, sort of. Does this make sense? Do people sort of remember this? No? And this, finally, allows us to understand why the pronoun at the end is behaving the way it is. So there's this general principle, principle B, that says pronouns have to be free in a certain domain. That is, they have to not have a binder, not have something that c-commands them and corefers with them. That's too close. We have to figure out what "too close" means. We can account for these data if we decide that the domain in which the pronoun has to be free is the domain that I've got brackets around, the embedded clause. So it has to be free in that clause. I don't care. Go away. No. It has to be free in that clause. What does that mean? Well, in the first example, it just means that "him" has to not refer to Harris. Well, there are lots of good reasons for it not to refer to Harris. I should have used a male name. It can't refer to the object of the higher clause because that object is controlling this pro. So if we kept Harris, if we said "Biden asked Harris to see him," then "him," of course, couldn't refer to Harris. 
But if you said "Biden asked Harris to see her," then "her" still couldn't refer to Harris-- background knowledge for anybody who has forgotten this Harris is female. "Her" still couldn't refer to Harris because "her" is too close to PRO would be the story, yeah. So that "him" is capable of referring to Biden because "Biden" is outside the bracketed domain. What about the "him" in the second example? Well, "Biden" is outside the bracketed domain in that example, too. But there is a PRO in the bracketed domain, the embedded clause, that is controlled by "Biden" and refers to Biden. So "him" cannot refer to Biden because there's-- not because "Biden" is too close, but because PRO is too close. PRO is inside the bracketed domain. And it binds him. It's too close. Yeah? People remember principle B? Principle B was meant to account for the fact that if I say "Biden thinks Harris likes him," that "him" can refer to Biden. "Biden" is far enough away. Whereas if I say "Harris thinks Biden likes him," now "him" cannot refer to Biden. "Biden" is too close. So we have a principle B that says "him"-- pronouns generally-- can't refer to a c-commanding noun phrase that's too close. We have to define "too close." But it's something like inside the same clause. So here, "Biden" is far enough away for "him" to refer to Biden. It doesn't have to. It could be somebody else. Here, "Biden" is too close and "him" can't refer to it. And something similar is going on in the slide. So this is me making good on my promise at the beginning of the semester. Beginning the semester, I said, by the end of the semester, you'll understand why this is. So now, you do, right? Do you have questions about this? Does this make sense? Well, so now we understand this-- yeah, yeah, I did all that. Now, we understand all this. But you know, all of you are exceptionally smart, right? And here you are getting an expensive university education, where someone is explaining this to you in great detail. 
And so that's why you understand what's going on here. But there's another sense in which you understood something about what was going on here. You had these intuitions. You knew what those pronouns could refer to before you ever set foot in this class. And people who don't have your advantages, people who are not getting expensive university educations, people who are not as smart as you, they also have those judgments. You can wander around Boston, ask any native speaker of English, they'll agree about these sentences, about who "him" refers to, yeah? So question-- how did that happen? How do you end up with this, in the sense, exceptionally complicated knowledge? How are you doing all of this? This is a subpart of what Chomsky likes to call Plato's problem. Plato, I guess, in his writings somewhere asked this question-- how is it that-- he wasn't talking about language specifically. How is it that we know so much about the world, despite the fact that our evidence is kind of random and spotty? It's not as if your parents ever sat you down and explained, "Oh by the way, if I say 'Biden asked to see him,' then 'him' can't refer to Biden." Nobody ever explained that to you. You just kind of figured it out, despite the lack of any relevant instruction-- not just this fact, but a whole bunch of complicated facts, all the complicated stuff we've been talking about in this class. Your parents never explained that to you, I assume. Who knows? Maybe there's somebody in this class whose parents were exceptionally helpful about this stuff. But presumably, not all of you had your parents explain this to you. People who have studied the interactions of parents with children, it's pretty clear that parents very occasionally give children direct instruction about how to speak. But they usually have other kinds of priorities. So this is an actual dialogue between a child and the mother. The child says, "Mama isn't a boy, he a girl." The mother says, "That's right." 
The child says, "And Walt Disney comes on Tuesday." The mother says, "No, he does not." So this is an actual dialogue between a small child and a mother. And something that you notice about it is that if the child is relying on the parent to get explicit instruction about grammar, then the child is out of luck, right? So the child's first sentence has some grammatical problems. The child is using "he" to refer to-- to refer to the mother. The response that the mother gets-- the child gets from the mother for that is "That's right," right? Because the mother is not thinking too much about the grammar. The child has made an accurate observation about the world, right? We can worry later about pronouns. The point is the child is figuring something out about the gender of their mother. And then the child says this perfectly grammatical sentence at the end-- "And Walt Disney comes on Tuesday." And the mother says, "No, you're wrong," right? Well, OK, so apparently, the mother is not paying too much attention to stuff like grammar. The mother is paying attention to accuracy about the world. And this child is therefore going to grow up to be a normal and well-adjusted human being because that's what you want from your parents. It would be odd for the mother to say "No, no, you're using the masculine pronoun for me. And that's not correct." There actually are dialogues-- if you've ever been around a small child, you know when parents do attempt to instruct their children in how to speak, it often-- it's not clear that the children are paying any attention. So here-- again, I swear I am not making this up. Somebody recorded this. A child said, "Nobody don't like me." And the father, who should really have had his fatherhood license taken away, said, "No, say 'Nobody likes me.'" This dialogue repeats six times on the tape. And then finally, the child says, "Oh, nobody don't likes me." And curtain. I don't know what happened after that.
I mean, this particular dialogue is hopefully not one you'll often hear, but you do. There's another classic that somebody recorded. A child said, "Give me other one spoon. I want other one spoon," right? And the parent tries many times to get them to say "the other spoon." "No, say 'the other spoon.'" "Other one spoon." "Say 'the other spoon.'" "Other one spoon." "Say 'the other'"-- "No, other one spoon." They go back and forth like this. And the parent says one word at a time-- "the, the, other, other, spoon, spoon." "Yeah, the other spoon," says the child. "Now give me other one spoon?" asks the child. So parents do-- I have been a parent. If you've ever been around children-- so I can attest that this is how it works. Even if you are trying to tell children how to speak, they're not listening to you. And most of the time-- the first dialogue illustrates this-- the parents quite rightly are not really paying too much attention to the grammar of their children's utterances. They're just excited by the fact that their children are talking. So how are we learning all this complicated stuff? It's not because your parents teach it to you. Even when they try, you don't pay attention, apparently, at the relevant stage. And there's an answer that I've been pushing in this class. It's this idea of universal grammar, which is that human beings can't help building our languages in certain ways. And so you, of course, need some data in order to build your language because there are 6,000-some languages out there. So you need to learn which of those languages it is that you're learning. But languages come with certain properties sort of factory preset. You don't actually have to learn all this stuff about controlled PRO, and condition B, and so on. Universally, languages just work that way. It's just a matter of learning which of the words in your language means "ask," and which one means "see," and which one means "him," and so on.
And once you know those things, then you know that your language will work this way because you can't help it. That's the way languages work. So we've spent the whole semester trying to figure out exactly what it is that you know when you know your native language. And I hope I've convinced you that it's more complicated than you might have thought. You know quite a lot, lots of complicated stuff going on in your head. And so a question arises. What's innate? What is this stuff that's preset? And what is learned? And as I've just said, it's obvious that not everything is innate because there's more than one language out there. You've got to learn which string of sounds means "ask," which string of sounds means "cat," which string of sounds means "spoon," and so on. There's some evidence also that evidence matters, which comes from the idea of what's called a critical period. It looks as though-- it's very hard to get direct evidence about this, but it looks as though there may be a period in the life of a child when they're especially well-designed to acquire language. And after that period is over, they get worse at it. There's a lot of anecdotal evidence for this. All of us fluently learn one or more languages as a child. When you're a small child, you learn the languages that are around you. For me, it was one language. There are lots of people who learn more than one language growing up. But anecdotally, for most of us, once you're an adult, it's much harder to fluently pick up another language just by being exposed to it. Even if you imagine being surrounded by people who are willing to just talk to you all the time in another language, it's not clear that you would learn it in a matter of six years or seven years, which is what you did with your first language. There are some cases that bear directly on this hypothesis. They're sort of awful cases, but I feel the need to tell you about them. 
One of them is the story of a child who was given the pseudonym Genie. Genie was discovered, I think, in LA in 1970. She was the child of a mother who was almost blind. Her mother had a degenerative disease of her eyes. She was gradually going blind. And by the time Genie was discovered, she was almost completely blind. And her father was crazy, so probably paranoid schizophrenic. He pathologically hated noise. He couldn't stand noise. And so if you've ever been around small children, you know that they sometimes make noise. So he was-- I mean, talk about people who should have had their license to be a father taken away. Genie had one older sibling who died of exposure because they were kept in the garage of the house because they were noisy and the father couldn't take it. She had another older brother who did survive, I don't know how. Maybe her father wasn't quite as crazy when her older brother was born. But when Genie was born, they had learned not to keep the babies in the garage anymore. What they did was to keep her in a dark room strapped to a chair and to punish her for making sounds. So no one ever spoke to her. She apparently could sometimes hear piano music from a nearby house. So when she grew up, she always loved piano music. Her father and her older brother would sometimes bark at her like dogs. So she was always afraid of dogs. That went on for the first 13 years of her life, after which she was discovered and taken away from her parents. She was discovered because her mother somehow-- who as I said, was almost blind-- had gotten out of the house with Genie and was trying to find a government office where somebody could help her-- not with Genie, but with something having to do with their blindness. She was looking for an office that would help her with that and wandered by accident into an office that was about child welfare. And they immediately spotted that there was something off about Genie. 
So Genie was taken away from her parents-- sorry-- and put in a series of institutions, where lots of people did their best to help her recover from the first 13 years of her life. And she did, in a sense, recover, in the sense that she acquired the ability to stop attacking people when she saw them. And she was wild when she was first out of the house. And she learned to say some words. But she never learned to speak, in the sense of putting words together in anything that resembled grammar. So she was just saying words. So she knew words for things that she wanted. And insofar as it was possible to do intelligence tests on somebody who was as profoundly traumatized as she was, she scored as a perfectly intelligent person. She had a normal mind in all kinds of ways. But she never learned to speak. So that's one case that people say, ah, so if you deprive a child of linguistic information for the beginning of their life, then this has profound consequences for their ability to learn to speak. And it could be that that's what this shows. It could also be that if you terribly traumatize someone for the first 13 years of their life, you damage their ability to learn to speak. As an example of the critical period, it lacks a certain scientific rigor, it seems to me. So I apologize for telling you this story. I feel as though anyone who takes intro linguistics should learn about Genie. Though, as I say, Genie is a pseudonym. I have no idea what her name is. There are better stories than that. Chelsea, also a pseudonym-- and actually, Chelsea's case is a case that used to be common-- not common-- used to be more common than it is now. There are safeguards in place now to stop this from happening as often. Chelsea is another girl who grew up in Northern California-- I don't know what it is about Northern California-- whose parents believed that she was profoundly developmentally disabled. This is not a story like the Genie story.
So her parents were perfectly loving, caring people. They were doing their best to raise her. They treated her very well. She had a happy life. But she wasn't-- she was developmentally behind on all kinds of milestones. In particular, she wasn't learning to speak. And so her parents just did their best to deal with that. At some point-- I want to say when she was 12 or 13-- it was discovered that she wasn't developmentally disabled at all. She was deaf-- she was profoundly deaf. And her parents-- I mean, again, perfectly kind, loving, sort of unobservant people-- had never noticed this, I guess. So the fact that she wasn't speaking, it wasn't because she was developmentally disabled. It was because she hadn't heard any speech. And nobody had been signing to her because they didn't realize that she was deaf. So she hadn't been exposed to language in any form either. And as I say, it's a happier story than the Genie story because there was no actual abuse involved. And as I also said, this is something that happens much less often these days. So all of you hopefully were screened for deafness in schools. And this is one of the reasons that screening happens-- to stop this kind of thing from happening, stop people from slipping through the cracks in this way. Chelsea, like Genie, wasn't exposed to language for the first 12 years or so of her life. And Chelsea, like Genie, never really learned to speak. So once they gave her cochlear implants and she was able to hear people-- again, she learned some words. But she didn't learn how to put words together into sentences. So she learned words like "banana," and "cookie," and so on. She knew how to say those words when those were the things that she wanted or she wanted to comment on the world. But she never got past that stage. So those are two cases-- one of them maybe more convincing than the other-- that lead people to think that there might be something like a critical period.
It might really be true that you're especially good at learning language in the first seven or eight years of your life. There are cases of the opposite kind. If you're in the linguistics business, you always hear about these hyperpolyglots. They're people who somehow never seem to exit their critical period. I had a professor like this, Ken Hale, who was here when I was a grad student here, somebody who seemed to just be able to learn languages by being in the same room with the speaker for a little while. He had this amazing ability to pick up languages. There are lots of Ken Hale stories, including this one. He spent a big chunk of the '60s wandering around Australia, going from Aboriginal community to Aboriginal community, learning the local languages, and studying them, and writing grammars, and so on. There's at least one community that no one has been able to work with since because he was the first person who went there, and so he went and he was there for like a week, and he learned to speak the language fluently, and then he left. And then the next group of linguists came to try to work with these people. And the people were like, "You people are all so slow. Why does it take you so long to learn? We've been telling you over and over again these words." And linguists were like, "Well, it takes a while to learn the language." And they're like "No, no, no, the last guy who came through here-- it was very easy. I don't know what your problem is." Did you have a question? AUDIENCE: Yeah, is there any evidence of actual physical cerebral damage or atrophy in these cases? NORVIN RICHARDS: So for Genie, I don't know whether the relevant scans have been done. I don't know. For people like Chelsea, I also don't know. But it would be interesting to find out. Yeah, yeah. Genie-- I mean, there are all kinds of sad things about the story of Genie. After she was taken from her parents, she lived in a series of institutions.
And then her mother-- her father committed suicide pretty soon after she was taken away. And her mother-- eventually, I think her mother got custody of her again for a while. And then she was back in the institutions for a while. So she lived for a while with some linguists who wrote books about her, studied her, tried to help her. And then her mother successfully convinced the court that she would be better off living with her mother. So they did that for a while. So there was a period there where she was hidden from the eyes of linguists after this initial period, where she was being intensely studied. OK, all right, so it's possible anyway that there's a stage that children go through where they're especially good at acquiring language, a stage that most people then exit around six or seven years of age, though there might be people like my old Professor Ken Hale for whom it just never ends. And then lots of questions about how this knowledge works-- how do you acquire it? And we've been spending the semester talking about the nature of this knowledge. There are people who specialize in what's called language acquisition. This is the study of what kids know and when they know it. And so I wanted to tell you a few of the results from that subdomain. There is an undergraduate class that specializes in language acquisition. I forget what its number is. But if this is a series of topics that interests you, we have people who can teach you about this. There is a frequent discovery that the people who do language acquisition make, which is that children are smarter than you might think. So they know stuff that it's not obvious that they know just by watching them. Lots of cases where children are making what we may as well call an error-- that is, they're doing something that adults don't do. But if you look harder, you can show that they actually do have a representation, some knowledge that resembles what adults do. I'll show you a phonological example first. 
Here are some data from a Japanese child. This is a standard representation of children's ages. So this Japanese child, at the time that these observations were made, was three years and two months old. That's what that 3:2 thing means. So observation about this Japanese child-- this Japanese child didn't have the velar stop that Japanese has. So they were changing the velar stops to alveolar stops-- to "t"s, right? So "mikaN," which is an orange-- it's not exactly an orange, it's another kind of citrus fruit-- is "mitaN" for this child. Or "poketto," which is a borrowing from English "pocket," is "potetto." Or "neko," which is "cat," becomes "neto." Yeah, so this child is changing "k" to "t." They also have "t." So Japanese words that have "t" in them, like "tama," which is "ball," or "terebi," which is "television"-- it's another borrowing-- it's a clipped borrowing. They do that a lot. They have "t" in those, too. So this is a child who has changed "k" to "t" and also has "t." Now, here's the thing about "t" in Japanese. "t" in Japanese undergoes some phonological changes, depending on the following vowel. So if "t" is followed, for example, by the vowel "a," it's just a "t." So "doesn't wait" is "matanai." But if you put that verb in the positive form, then the vowel that follows the "t" won't be an "a," it'll be an "u"-- the high back unrounded vowel. And before "u," "t" changes to "tsu." That's just a general rule. If you ever learn Japanese, you'll see it's encoded in their writing system, in the kana, that a "t" before an "u" is a "tsu." So "waits," the positive version of "matanai," is "matsu." The "t" changes to a "tsu." And if "t" is before the vowel "i," the high front vowel, it changes to a "ch." So the word for "city," for example, is "matchi," yeah, or the formal version of "waits" is "matchimas," so you add "imas" to "mat-," that stem "mat-," which means "wait." You get the "t" changing to a "ch." OK, so basic Japanese phonology.
Now, here's the thing. This child had all of that basic Japanese phonology. So that child was saying, at three years and two months, "matanai," "matsu," and "matchi." But this child was also saying, for autumn, "ati." That is, the child was changing "k" to "t." And the child is changing "k" to "t"-- so the adult word for autumn is "aki." And the adult word for "bear" is "kuma." And this child knows that "t" becomes "ch" before "i," right? But autumn, for this child, is not "achi." It's "ati," right? And "bear" for this child is not "tsuma," where "t" normally becomes "tsu" before "u" like it does in "wait." For this child, the word for "bear" is not "tsuma." It's "tuma," yeah? If we were doing this in terms of rule ordering, I guess we could say, yeah, this is a child for whom there is a distinction between "t" and "k." And then there's a set of rules. There are the rules that change "t" to "tsu" or "ch" before certain vowels, right? And then after that, there's another rule that changes "k" to "t." Your first thought, looking at the first set I did, "orange," and "pocket," and "cat" would be, oh, how cute, this child can't say "k," and they say "t" instead. And that's right in a sense, right? You might have thought then-- this is a child who-- their Japanese doesn't have as many sounds in it as adult Japanese. And that's also right, in a sense. But there's another sense in which it's not right. The child does distinguish "t" and "k," right? It's just that they pronounce them both "t." So the "t" that's really a "t" is the one that undergoes these sound changes before vowels. And the "t" that comes from "k" is just always a "t." It doesn't matter what the following vowel is. Yeah, so there's some sense in which this child has a full adult representation. You have to imagine them having "aki" in their head, right? And it's just that there is some rule that goes between their head and their lips that changes "k" to "t" all the time, no matter what the following vowel is.
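If it helps to see that rule ordering spelled out, here is a rough sketch in Python-- mine, not anything from the lecture-- that treats the rules as simple string substitutions over the lecture's informal romanizations:

```python
# A toy model of the rule ordering described above: the child stores the
# adult underlying forms ("aki," "kuma") and applies the ordinary Japanese
# rules (t -> tsu before u, t -> tchi before i) BEFORE the child-specific
# rule k -> t, so forms created by k -> t never feed the adult rules.

def adult_allophony(word):
    """Ordinary Japanese allophony: "t" changes before "u" and before "i"."""
    return word.replace("tu", "tsu").replace("ti", "tchi")

def child_k_to_t(word):
    """The child's late rule: every "k" surfaces as "t," whatever follows."""
    return word.replace("k", "t")

def child_pronunciation(underlying):
    # Ordering matters: adult allophony first, then k -> t.
    return child_k_to_t(adult_allophony(underlying))

# "matu" -> "matsu" and "mati" -> "matchi," as in adult Japanese, but
# "aki" -> "ati" (not "atchi") and "kuma" -> "tuma" (not "tsuma"),
# matching the child's forms.
```

Running the two functions in the opposite order wrongly produces "atchi" and "tsuma," which is a quick way to see that the ordering is doing real work here.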
There's something like that going on. This is a small instance of a general pattern. What you find is kids are doing something wrong. They're doing something that's not what adults do. But if you look very closely, you can convince yourself that they know more than they seem to. Yeah, so just to remind you of stuff that we talked about earlier on-- so what I just showed you-- and I'll show you some more examples like this in a second-- involves just making lots of recordings of children speaking naturally, and then studying them. And that is one kind of thing that acquisitionists do. There are other kinds of work in which you run experiments on children. We talked about some of this kind of work early on in the class. So we're talking about selective looking tasks and sucking rate tasks. So sucking rate tasks make use of the fact that if you give a very small infant something to suck on-- first of all, they will. Sucking on things is one of their favorite things to do at that age. And it turns out that if you give them something to suck on, they will suck on it at first fairly rapidly. And then they will start to slow down. And how quickly-- they say you can give them something to suck on to which you have attached a clever device that will record the rate at which they suck, pay attention to how often they're going-- they're making sucking motions. And the observation is that if you give a child something interesting to look at, if they've started to slow down, and then you show them an interesting picture, they'll speed back up again. They perk up. And language acquisitionists make use of this. So if you want to know-- so here's the kind of experiment people have done. They've shown that Japanese infants-- so very small Japanese children-- can distinguish an "r" sound from an "l" sound. That is, if you play "rah" or "lah" at them, they can distinguish these from each other. This is interesting because adult Japanese speakers can't do that.
Japanese doesn't have a distinction between "r" and "l." They only have one liquid. So it's a famous fact about Japanese speakers. It's hard for them to distinguish "r" from "l." But very small Japanese children-- so children who are acquiring Japanese-- can distinguish "r" from "l." They know the difference between "rah" and "lah." And the way you show that is with this sucking rate task. So you give the child something to suck on. And when they start to slow down, you start playing "rah, rah, rah, rah" at them. And they speed up at first because, hey, here's a new thing. It's a recording. And then eventually, they slow down. They're like "rah," yes, I get it. Thank you very much. And then you switch to "lah." So you go "rah, rah, rah," and they start to slow down. Then you go "lah, lah, lah," and they perk back up again, showing that they can distinguish the "rah" from the "lah," that these are different things as far as they're concerned. They've done similar kinds of tests with English-acquiring children, showing that they can distinguish aspirated from unaspirated stops. So cast your minds back to the beginning of the semester. We were talking about the fact that in a language like Hindi, for example, there's a difference between "pah" [ASPIRATED] and "pah" [UNASPIRATED].. So English has voiced stops and voiceless stops. We have a distinction between "pah" and "bah." So there's an aspirated voiceless stop. And there's a voiced stop. And then Hindi has a three-way distinction between "pah" [ASPIRATED] and "pah" [UNASPIRATED] and "bah." And so "pahl" and "pahl" are two different words in Hindi. I think one of them means "knife" and the other means "flour." And I can't remember which is which. Anybody here speak Hindi? OK, those are two different words. So Hindi has these two distinct sounds that are different. 
And an observation that people have made with very small infants who are acquiring English is that if you do the sucking rate task with them and you play "pah, pah, pah" at them, they eventually start to slow down. And if you then switch to "pah, pah, pah," the unaspirated stop, they're like, oh yeah, that's a different thing. So they recognize that as a different sound. One of the things that they can show-- the beauty of aspiration is that it's the kind of thing that comes in degrees: you have the release of the closure, and then you have the beginning of the vowel. And you can electronically manipulate exactly how many milliseconds it takes for the vowel to start. And there is a particular cutoff. So if you give children an aspirated stop and an unaspirated stop, they can distinguish those. If you give them two stops that differ in aspiration by just a little bit, then whether they can distinguish those two stops from each other depends on whether one of them is on one side and the other on the other side of a particular dividing line. I forget what it is, like 50 milliseconds or something like that. There's some particular amount of time that is the cutoff for distinguishing sounds from each other. For being able to speak adult English, this knowledge is useless. But the theory is that all babies-- Japanese babies are all ready to distinguish "l" and "r." And the English speaking babies are all ready to distinguish aspirated and unaspirated stops. The babies are just born able to distinguish all of the kinds of sounds that they might need. And then as they grow up and start learning the actual language that they're learning, they forget stuff. Anything that isn't useful they get rid of. So Japanese babies stop carefully distinguishing "rah" from "lah" because their parents don't. That's one of the observations people have made about this stuff. So I want to get on to the next thing.
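That dividing-line behavior can be sketched as follows. The 30-millisecond boundary here is an illustrative assumption (the lecture leaves the exact figure open), as are the function names:

```python
BOUNDARY_MS = 30  # assumed category boundary, for illustration only

def category(vot_ms):
    """Classify a stop by its voice-onset time (VOT) in milliseconds."""
    return "aspirated" if vot_ms > BOUNDARY_MS else "unaspirated"

def discriminable(vot_a, vot_b):
    """Two stimuli are distinguished only if they straddle the boundary."""
    return category(vot_a) != category(vot_b)

print(discriminable(20, 40))  # across the boundary: True
print(discriminable(40, 60))  # same 20 ms apart, but same side: False
```

The point is that discriminability depends on which side of the line each stimulus falls, not on the raw size of the difference.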
So I don't want to spend too much time on this. So we've spent some time talking about stuff that suggests that babies have just a spooky amount of knowledge in advance, before they learn anything at all. And in fact, in the particular case of acquiring phonology, some of what they're doing is forgetting stuff that they come into the world knowing. That seems to be the way they're set up. There is stuff that children clearly have to compute. And there's some beautiful work on how exactly children segment the speech stream. So when we write things in English, we write with spaces in between words. But those spaces don't correspond to anything in the phonological signal, right? If you look at a spectrogram of somebody speaking, it's just a long string of sound. You don't put spaces in between your words, unless you're speaking extremely cautiously. And in fact, children make mistakes in speech segmentation. So a classic example is of a child who's just been told to behave and responds, "I am being hayve." So this is a child who thinks that "behave" is like "be" plus some adjective, right, "hayve"? "Be as hayve as you can," right? The child is doing their best, right? Or the other classic like this is a child who's just been told "We're going to go to Miami" and responds "I don't want to go to your Ami," which is a similar kind of mistake, is thinking that "Miami" is two words, a perfectly plausible thing. There's all kinds of really interesting work on how exactly children segment the speech stream. And there's lots of evidence that they use statistics about phonotactics-- that is, they make observations like if I see this pair of sounds next to each other, it's because-- so in English, if I have an expression-- if I say a pair of words like "lies beneath" the water, let's say-- so here, I've got a word ending in a "z" and another word beginning with a "b," yeah? 
So if you're an English speaking child, even before you know any words, the observation is-- it's as though they look at pairs like that and they go, oh, I don't ever seem to hear speech streams beginning with "zb" in English. So "zb" isn't something that an English word can begin with. And so maybe this is a word boundary. And there's this experiment that Saffran et al. did in '96 showing that very young children are very good at-- this was, again, using the sucking task, I think. Maybe it was the selective looking task, a similar kind of task, showing the children are very good at making observations about which pairs of sounds occur next to each other frequently in a speech stream. So in their particular case, what they were doing was exposing small children to streams of syllables. So they made up words. I'm going to make up words, too. So these are not the actual words they had, but they had this shape. They were CV, CV, CV words. So it was like "to, la, vi," and then "mu, ga, pi," whatever. And they made up strings of words. Here, I'll make up one more-- "ri, na, vi." And the observation was-- so they made up speech streams for the children that involved these words. So the speech streams would sound like "to la vi, mu ga pi, ri na vi, ri na vi, mu ga pi, ri na vi, to la vi, mu ga pi, to la vi," without intonation. Just these syllables one after another. And the children basically very quickly got to where they knew that if you hear a "to," you're going to hear a "la" next. Or if you hear a "ga," you're going to hear a "pi" next. So they got to where they could make those predictions. And what that meant was that they knew that if you heard a "pi," you didn't know what you were going to hear next. There were a variety of things that you could hear next. So they learned these transitional probabilities very quickly. And it was remarkable. Again, babies are smarter than they look. This is one of the big results of language acquisition research.
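The statistic involved-- the transitional probability from one syllable to the next-- is easy to compute over a stream like this. Here's a sketch using the made-up words from above; the counting, not the infants' actual mechanism, is what's being illustrated:

```python
from collections import Counter

# Made-up CV-CV-CV words, concatenated into a pause-free syllable stream.
words = ["to la vi", "mu ga pi", "ri na vi"]
order = [0, 1, 2, 2, 1, 0]
stream = " ".join(words[i] for i in order).split()

pairs = Counter(zip(stream, stream[1:]))   # counts of adjacent syllable pairs
totals = Counter(stream[:-1])              # how often each syllable occurs

def trans_prob(a, b):
    """P(next syllable is b | current syllable is a)."""
    return pairs[(a, b)] / totals[a]

print(trans_prob("to", "la"))  # within a word: 1.0
print(trans_prob("ga", "pi"))  # within a word: 1.0
print(trans_prob("pi", "ri"))  # across a word boundary: 0.5
```

Within-word transitions come out at 1.0, while transitions out of a word-final syllable like "pi" are split over several possible continuations-- and that dip is the cue the infants appear to use to posit word boundaries.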
So children are good at this. There's a lot of work since '96 on exactly what it is they're good at. So for one thing, the original work involved words, made up words that were all three syllables long. And it was shown that if you'd vary the number of syllables, the babies get less good at it. It was also shown that they get better at it if you give them prosodic cues, like the words are all stressed on the first syllable, that that helps figure out where the word boundaries are. So however they're doing it, the upshot of it all is that children-- they speak their first words around the end of their first year of life. And then by the time they're about six years old, for English speaking kids, the estimate is that they know about 13,000 words. So there's this period where they're learning about six or seven words every day, which is astonishing. I mean, if you imagine trying to learn six or seven words every day, go study some foreign language and try to learn six or seven words a day. That's a really fast clip. But if you can do it for six years, then you will speak that language really well, whatever it is. This has all been about phonology. There's a lot of work on the acquisition of syntax. And again, this is sort of like the Japanese child phonology that I was showing you at the very beginning here. A similar kind of case-- there are lots of cases of children doing things that adults don't do. And yet, in the course of doing that, demonstrating that they know more than you might have expected them to. So children in many languages go through what's called a root infinitive stage. That is, they use verb forms that are infinitives-- the kind of verb that you would use in an infinitival clause. Here are some examples from Danish and from Dutch and from French. This is not universal. There are languages that don't use optional root infinitives. But many languages do. English, arguably, has it. 
Of course, English verb morphology is so impoverished that it's hard to see. But the idea is that when children say things like, what? "Doll sleep," which is the kind of thing that a two-year-old might say, that that's a way for the child to say "The doll sleeps." And they're using an infinitive. They're using an optional infinitive, so a verb that's missing the morphology that you would expect it to have if they were an adult. But the observation is that although they are confused about whether it's OK to have a main clause be an infinitive, what's interesting is that they know all kinds of sophisticated stuff about what it would mean for it to be an infinitive. So for example, when we were talking about German, I think I told you German's a V2 language. So in ordinary main clauses, the verb is in second position. In infinitives-- well, infinitives are usually not main clauses in German. So the verb is typically not in second position. It's at the end of the clause. It goes at the end of the clause. And German-speaking children go through this root infinitive stage, a stage when they're around two or three years old, where they use infinitival verbs a lot. And they put those verbs where they should. So they are confused about something. They're confused about whether it's OK for a main clause to be infinitive. But they're not confused about the rules for where the verb goes in German. So when they're using-- when they do use tense verbs-- and they do-- they put them in second position. And when they use the infinitival verbs, they put them at the end. And that kind of fact has been replicated in a bunch of different languages. So here's a German two-year-old saying-- the child wants to say "Thorsten has Caesar"-- I think Thorsten was a teddy bear-- and "have" is in the infinitive. And so it's at the end. But in that second example, the child is trying to say, "I have a big ball." And that's pretty good German, apart from mispronouncing the word for "big." 
The word for "have" is in second position. It's inflected. It's not infinitival. And it goes in second position. So yeah, this child is acting like an adult German, except for the fact about having an infinitive in a main clause. So there's a mistake, but it's a constrained kind of mistake. And this is pretty-- again, this is-- we're back now to studying this stuff by just studying what children actually say. This is a recording study of this German child, Andreas, who was two at the time-- 281 utterances. And a healthy proportion of them were these root infinitives. And if you look at them, it turns out that the finite verbs are in the second position mostly and the infinitives are in final position mostly. So he's confused about where he can use infinitives. But he knows what to do with them once he has them. Yeah, we have time for this, I think. OK, so all of this has been about studying children's naturalistic output. That is, it's been about studying what children actually say, so making careful recordings. So a lot of this involves putting tape recorders in the homes of parents of young children. The parents volunteer for this. Hopefully, they're paid. I don't know if they're paid or not. Anyway, recording devices are put in their houses. And you end up with many, many recordings of children speaking, parents speaking to children. And you transcribe these recordings. There's an online database of a lot of these classic recordings. It's called the CHILDES database, which I think is publicly accessible. Actually, if you Google it, you can probably find it. But there's another kind of work that people do-- and we have a lab here at MIT where this kind of work is being done-- where you do what you could think of as consultant work, kind of the work you're doing with the languages that you've been doing field work on, except the children are probably more likely to cry and to demand cookies and things like that. 
So some of this consultant work involves showing a child a story-- either telling them a story or showing them a series of pictures that demonstrate the story-- and then asking them whether something is true or not. So one classic way to do this kind of experiment-- you have somebody who is telling the story, and then another experimenter who has a puppet. And the child's job is to tell the puppet whether the puppet is right or wrong. The people who do this kind of work have discovered it's easier to get a child to tell a puppet that the puppet is wrong than to tell an adult that the adult is wrong. So you have the puppet say, I know what happened. And then the puppet describes what happened. And that way, you learn whether the child thinks that the sentence that the puppet utters is a true description of what has happened or not. So there's a lot of work, for example, on something that children do, which is kind of interesting. But if you show children a picture-- I've done this work with five-year-olds. This goes on fairly late. If you show children a picture of, let's say, four elephants-- and now, you will have to apologize-- you will have to forgive my inability to draw an elephant. Please pretend that these are elephants. Here, let's make it a story about three elephants. So there are three elephants, you tell the child. And you tell them there are-- and there are rabbits sitting-- here some rabbits. Why am I telling the story this way? Why did I not put these pictures on the slide? OK, and there are rabbits on two of the elephants. Yes? AUDIENCE: Those are remarkably good elephants and rabbits. NORVIN RICHARDS: Oh thank you very much. Thank you. After all that self-deprecation, that's exactly what I was hoping somebody would say. Please write that on the course evaluation: "Surprisingly good at drawing elephants and rabbits." So here are three elephants and two rabbits. You show a child a picture like this that has elephants and rabbits in it. 
And you ask-- the puppet says, "I know what's happening. Every rabbit is riding an elephant." And an adult-- you also do this experiment with adults, just to have a control. So you tell the adults, this is going to be for children, but please humor us. Here's a puppet. You have to tell the puppet whether it's right or wrong. The puppet looks at this picture and says, "I know what happened. Every rabbit is riding an elephant." And adults will say yes. But children at a certain age will say no. And then when you ask them why not, they'll point at this elephant and say, "This elephant doesn't have a rabbit." And so people are trying to figure out what is going on in the heads of these children. Why are they doing this? So this is an active area of research, trying to figure out what's going on. Yeah? AUDIENCE: Isn't that complicated by syntax [INAUDIBLE] because when I first heard that [INAUDIBLE] was a nightmare [INAUDIBLE] NORVIN RICHARDS: Oh yeah, you were traumatized by the guards standing in front of the building. I'm so sorry. Every rabbit is riding an elephant. Yeah, so the thing is the children-- how shall I say this? I mean, so first of all, if you have the puppet say this, the children-- so the children will say, no, that's not right. And when you ask them why not, they point at the elephant. They don't panic. They don't like start to cry. I mean, some of them do. You exclude those. But the ones who can handle this experiment, they have a theory about what the sentence means, apparently. And then if you give children another picture which is just like this one but there's an extra rabbit, then the adults-- there's the third rabbit. Then the adult-- that's about as large as an elephant. Then the adults and the children agree. They all say, oh no, now it's not true. It's not true that every rabbit is riding an elephant. 
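One way to make the two judgments concrete: the adult reading only requires that every rabbit ride some elephant, while one hypothesis about the children's answer (sometimes called exhaustive pairing in the acquisition literature) additionally requires every elephant to carry a rabbit. Here's a sketch with a made-up scene; the function names and the extra condition are this illustration's assumptions, not the lecture's analysis:

```python
def adult_reading(rides, rabbits, elephants):
    """True if every rabbit is riding some elephant."""
    return all(any((r, e) in rides for e in elephants) for r in rabbits)

def child_like_reading(rides, rabbits, elephants):
    """Adult reading, plus: every elephant must also carry a rabbit."""
    return (adult_reading(rides, rabbits, elephants)
            and all(any((r, e) in rides for r in rabbits) for e in elephants))

rabbits = {"r1", "r2"}
elephants = {"e1", "e2", "e3"}        # e3 is the rabbit-less elephant
rides = {("r1", "e1"), ("r2", "e2")}  # who is riding whom

print(adult_reading(rides, rabbits, elephants))       # True: adults say yes
print(child_like_reading(rides, rabbits, elephants))  # False: extra elephant
```

On this sketch, adding a rider to the third elephant makes both readings come out true, matching the adults and the children agreeing once every elephant has a rabbit.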
So children apparently have some-- at a certain age, have something interesting that they're doing with quantifiers that is not what adults are doing. Yeah? AUDIENCE: In this picture with the rabbit, do the children point to the rabbit [INAUDIBLE] not true always? [INAUDIBLE] NORVIN RICHARDS: Or do they point to the elephant? Oh, I'm sorry. Yeah, I should also have said, you can also point-- you should give them a picture like this, where every elephant has-- where there isn't an extra elephant, but there is an extra rabbit. And then again, they will say, "No, that's not true." And they'll point to the rabbit. I actually don't know if you give them an extra of both of them whether they always point to the elephant or the rabbit. That's a good question, interesting to find out. So this is the kind of thing that people study. They do all this interesting work with children, try to figure out what exactly is going on in their little heads. And one of the big takeaways from this research is that they-- first of all, they're not like adults, which any parent could have told them. But also, although they are not like adults, they make mistakes of certain kinds, and there are other kinds of things that they don't make mistakes about. So they may use optional infinitives, if they're learning a language that has optional infinitives. But they put the infinitives where the infinitives ought to go in the sentence. They've got a particular thing that they're wrong about. And it's for a particular stage of their life. They eventually age out of it. Or similarly here, this is a very common phenomenon, that children, if you show them this kind of picture, will say "No, no" because of the extra elephant. And we're still trying to figure out what exactly it is that children are confused about there. So there's lots of work to do. If you are interested in linguistics and interested in children, as I say, there's a lab that works on this kind of thing.
And they're always looking for volunteers, people who want to help out. And there is a class specifically about language acquisition-- an undergrad-level class on language acquisition-- where people study how to do this kind of thing. Questions about any of this? When I put up the slides, I'll try to find the actual pictures with the elephants and the rabbits. OK, all right, well then, let's end a little bit early today. Next week, we will start up on Tuesday. It'll be our last day. I'm debating-- oh, actually, maybe I can get feedback from you guys about this. On our last day, there are two things that I could do. I could try to do a summary of everything that we have done and, well, probably a brief summary in which we wouldn't talk for very long, and encourage you again to do the evaluation forms, and then send you on your way. Or I could do a class on signed language, which is a topic that I haven't gotten to during the semester. But we could spend a class doing that. Think about that and send me an email. Some of you are using your facial expressions to give me ideas about which of those things you'd prefer. But send me email if you have a preference one way or another about which of those things I should do. And I will take your preferences into account. All right, have a good rest of your day.
Lecture 10: Phonology, Part 3

NORVIN RICHARDS: All right, now my mic is on. That has an effect on my voice. I'm being amplified, right? Cool. So it'll be easy for you to tell whether I've got my mic on or not. So here's another assimilation rule. We said, in Arabic, there's an assimilation rule-- wow, I'm really being amplified-- in Arabic, there's an assimilation rule that creates or causes two consonants to be identical to each other. This one is not that. The consonant at the end of that prefix-- the English negative prefix "in-"-- is always nasal. It just assimilates in place to the following consonants. You get "impossible," with an "m," and "incredible," where you get a velar nasal version of "n." It's not "incredible" [with alveolar nasal "n"], at least unless you're speaking very carefully. That sound right? Well, we're talking about assimilation rules. I want to show you another example of vowel harmony. We looked at one example of harmony before, which was from Turkish. And it was the observation that the Turkish plural suffix is sometimes "-lar" and sometimes "-ler," and it depends on the vowels that are in the noun that's being made plural. So "lions" was "aslanlar"-- which I cannot spell all of a sudden. But "roses," I think, was "gyller." It's either "roses" or "flowers," I forget which. Oh, we have a native speaker. What does this mean, "gyller"? Is it "rose" or-- "rose," OK, good. So yeah, "aslanlar" but "gyller"-- why? Well, because this, we were going to say, is harmonizing with the vowels that are in the noun in question. So in particular, it's deciding whether to be back or front depending on whether the vowels in the noun are back or front. So I want to show you another example of vowel harmony, and we'll talk about it a little bit more. But that's what vowel harmony was. And as you can see, it's a kind of assimilation rule.
So here's a vowel, and it's deciding whether to be front or back-- it's gaining its front or back property by becoming the same as the vowels that are in the noun. So let's talk about another vowel harmony case. This is from Finnish. So here are the Finnish vowels. You can see it has-- what does it have-- eight of them, and they can all be long or short, but long or short for this just means how long you hold them. Since we're talking about Finnish, I have to tell you my favorite Finnish word, which is this, [NON-ENGLISH SPEAKING]. I was taught that word by a linguist who specialized in Finnish-- Erica Mitchell-- it's the only Finnish word I know. [NON-ENGLISH SPEAKING]-- I figure it'll come up in conversation at some point, if I ever get back to Finland. It means "wedding night intention." So I'm not sure what kind of conversation that will be, but I have to work on that. So there's the Finnish vowel inventory. One of the things you can see about this inventory-- it's got front vowels and back vowels. The vowels can be high or mid or low. And it is like English in that its back vowels are rounded if they are not low. So the back vowels are "u," "o," and "a." And so it has a high back rounded vowel-- for example, "u." But it doesn't have a high back unrounded vowel, "ooh." So far so much like English. The difference from English is that it also has front rounded vowels. So front high rounded vowel, front mid rounded vowel, "eouh" and "ouh." So English doesn't have those, but Finnish does. On the other hand, it also has front high and mid unrounded vowels, "i" and "e"-- so the pairs of vowels there. So I'm going to erase my favorite Finnish word, just so as not to confuse later classes. OK, so that's the Finnish vowel inventory. Now Finnish has vowel harmony like Turkish. It has a lot of suffixes that harmonize with the noun and, in particular, Finnish is kind of famous among linguists for having a lot of suffixes on nouns that are thought of as cases.
So if you study languages like Latin or Russian or I bet Ukrainian, you need to learn that nouns come in a bunch of different forms depending on how they're being used in the sentence, the nominative or accusative-- they get little markers on them saying whether they're the subject or the object or the indirect object or whatever. This is what we call the case of a noun. Languages that have case often have two cases or three or five or six. Finnish, depending on how you count, has something like 12 or 13. It's just nuts. And that's partly because it has a bunch of case suffixes that, in English, would be prepositions. So it has a case suffix that means "on." And it is either pronounced "laa" [WITH LOW FRONT VOWEL] or "lah" [WITH LOW BACK VOWEL], depending on the noun. So the word for "table" is [NON-ENGLISH SPEAKING] and "on the table" is [NON-ENGLISH SPEAKING]. The word for "street" is [NON-ENGLISH SPEAKING] and "on the street" is [NON-ENGLISH SPEAKING]. I should be careful saying things like the word for "street" is [NON-ENGLISH SPEAKING]. I think the word for "street" is actually [NON-ENGLISH SPEAKING] if you have it by itself. But if you add the [NON-ENGLISH SPEAKING], it changes the "t" to a "d," something like that. There are all kinds of sneaky things that happen in Finnish, that I forget, when you add them. But anyway, if you add [NON-ENGLISH SPEAKING] to "street," you end up with [NON-ENGLISH SPEAKING]. If you add [NON-ENGLISH SPEAKING] to "table," you get [NON-ENGLISH SPEAKING], and the change in the vowel is conditioned by the vowel of the noun. Now I've kind of color-coded the vowels here. You can see there are some front vowels that are red and some back vowels that are blue. And if you have a noun where all the vowels are red, like "table," then the vowel of the suffix is also red. If you have a word where all the vowels are blue, then the suffix is also blue.
So if all the vowels in the noun are back, like in "the streets," then the suffix gets a back vowel. If they're all front, then the suffix gets a front vowel. So far so good? Now here's the thing. There are also the green vowels. Why do I have green vowels? The red vowels all have the property that there is a blue vowel-- so a back vowel-- that is identical to the red vowel in every way except that it's back instead of front. So the red vowel "eouh" for example, the front high rounded vowel, has a corresponding blue vowel that is high and rounded and different only in that it's back. That's "u." And similarly, there is a front mid rounded vowel, "eouh," and there is another vowel that is identical to it in every way except that it's back instead of being front. That's "o." But the green vowels don't have any such correspondence. So "i," the front high unrounded vowel, there is no vowel which is identical to that vowel in every way except that it's back. There isn't an "ooh." Does that make sense? So the red vowels and the blue vowels are identical in every way except for front versus back. The green vowels don't have a corresponding back vowel that's identical to them except being back. Yeah? AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Well, it would be-- let's see, it would be "oueh." It's not a popular vowel, but there are languages that have it. The IPA symbol for it, I think, is something like this. So it exists. We haven't talked about a language that has it. I'll erase it quickly before it upsets anybody. So it's physically possible to make these vowels, and there are languages that have them, but Finnish doesn't. And so the question is, what do you do when you add a suffix to a noun that has these vowels? And the answer is that these vowels are just invisible, basically.
So at the station is the less interesting case-- you have a couple of front vowels that do have corresponding back vowels, "y" and "ae," and then you have this "i," which doesn't have a corresponding back vowel. You have two red vowels and a green vowel. And you get a red vowel in the suffix, so you get [NON-ENGLISH SPEAKING] for at the station. But for "on the child," [NON-ENGLISH SPEAKING],, well, you have a blue vowel that is a back low vowel, "a," and then you have a green vowel, which is front, but the green vowel doesn't have a corresponding back vowel, and so it's like it doesn't count. The green vowels can go with anything. So you have vowel harmony for the vowels that have corresponding front and back versions, but for those that don't have corresponding front and back versions, it's like you get to ignore them. They're not there. Is that clear? That makes sense? So there are-- I mean, it's as though you can say as long as you only look at blue and red vowels, all the vowels in a word must be blue or red. But you don't look at green vowels. You don't look at vowels that don't have a corresponding back vowel that is identical in every way except for being back. So you get [NON-ENGLISH SPEAKING] for "on the child." There are also words in which all the vowels are green, like "at the family," and the fact is that for suffixes, you get front vowels corresponding to those. And the green vowels are front, which is itself kind of interesting. So at the family is [NON-ENGLISH SPEAKING].. You get "lah" for that one. Is this clear? That's the Finnish system. If any of you ever want to go learn Finnish, you're now all set up. Does anybody speak Finnish? Here I am pretending to speak Finnish. Yeah, so just to summarize that again, if a vowel is not round and not low, then it is front. 
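The suffix choice just described can be sketched as a scan from the end of the word for the last vowel that has a back/front counterpart, falling back to the front suffix when there are only green vowels. In this simplification, orthographic ä, ö, y stand in for the front rounded vowels, and the consonant changes mentioned earlier (the "t"-to-"d" business) are ignored:

```python
BACK = set("aou")            # the blue vowels
FRONT_HARMONIC = set("äöy")  # the red vowels; i and e are the green ones

def on_suffix(noun):
    """Attach the 'on/at' case suffix: -lla after back, -llä after front."""
    for ch in reversed(noun):
        if ch in BACK:
            return noun + "lla"
        if ch in FRONT_HARMONIC:
            return noun + "llä"
    # Only green vowels (i, e): the suffix comes out front.
    return noun + "llä"

print(on_suffix("talo"))  # "house" -> talolla
print(on_suffix("kylä"))  # "village" -> kylällä
print(on_suffix("tie"))   # "road", only green vowels -> tiellä
```

So a back-vowel noun like "talo" takes -lla, a front-vowel noun like "kylä" takes -llä, and a noun with only green vowels, like "tie," also comes out front: "tiellä."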
So one way to think about what's going on in these suffix versions at the bottom is to say there's a rule saying give all the vowels in the word the same value for back, except don't create vowels that don't exist in Finnish. So when you're doing "on the child," it would be nice if-- the fact is that you've got a back vowel and then a front vowel and then a back vowel. But there's nothing you can do about the front vowel "e." There isn't a corresponding back vowel in Finnish. So it's like the rules are-- first of all, remember Optimality Theory. So yeah, the generalization goes, give all the vowels in the word the same value for back, but don't create non-Finnish vowels. Don't create back vowels that are not round and not low. Don't create "ooh" or "oueh." There's that symbol you were wondering about. If we want to talk about this in terms of Optimality Theory, what we say is the most important thing is don't create back non-low unrounded vowels. Don't create "ooh" or "oueh." Finnish doesn't have those vowels. Don't do that. But as long as you-- so that's the most important priority. That's the highest ranking constraint. But as long as you don't do that, make sure all the vowels have the same value for back. They're all either back or front. That's one way to talk about what's going on in Finnish. Now there's a popular approach to phenomena like vowel harmony that I just wanted to introduce you to, partly because a version of it will float up again when we talk about syntax, I think-- and actually semantics as well-- which involves saying it's as though features of sounds can sometimes show up in multiple places. So here's the Finnish word for "on the table" again, [NON-ENGLISH SPEAKING]. And if you were listing all of the properties of all of those vowels-- that's what I've done here, and I just haven't listed the consonants just because there wasn't room and I'm not good at graphic design.
But first of all, "ouh" is round and mid and not back. That is, it's front. And the second vowel is round and high and not back. That is, it's front. And "ae" is not round and is low, and is not back. That is, it's front. This is just a little list of all the properties of each of those vowels. And so you could specify all of these properties of these vowels this way. But there's a popular thought about what's going on here, which is to say it's as though Finnish wants the value of back to be kind of shared among all the vowels. That is, you're just going to state it once. This is a word in which all the vowels are front or all the vowels are back. So you say somewhere, Finnish doesn't allow you to state over and over again that this vowel is front and this vowel is front and this vowel is front. You just have to say once, for the whole word, these vowels are front. And then there's this thing about, oh, and I don't like certain kinds of vowels, so those vowels get to be specified. But words just specify whether they're front words or back words. You don't specify it over and over again from one vowel to the next. This is one popular way of talking about how we want to do vowel harmony. It's as though there's only one back feature in the word, and all the vowels that can share it do. The name for this-- it's called autosegmental representation. This is the kind of moment where I think I'm swimming against the stream of your normal instincts for classes. This is a class in which there aren't quizzes or exams, so putting something like that up on the slide is triggering all your instincts to write it down and memorize it because you figure it'll come up on the test, but there is no test. So memorize that if you would like to, if you want to impress linguists at parties or if you want to make your TA happy, but there won't be a test. You will never be counted off on the paper for failing to use the phrase autosegmental representation.
On the other hand, it is a way to impress people at parties. So that's one form of harmony. It's called vowel harmony. We've seen it now in Turkish and in Finnish. Those are not related languages. Vowel harmony is crosslinguistically not-- it's not hugely common, but it's not rare either. It's popular in the Turkic family. There's a very large family of languages that are related to each other and they all have various kinds of harmony. There's lots of interesting work on exactly what kinds of harmony you get in which languages. I want to show you another kind of harmony which is a lot rarer. This is from a language, the Navajo language, which is an Indigenous language of the American Southwest. Here are a bunch of verbs in Navajo in what we could think of as the past tense. And you can see that all of these verbs-- so what I've done is boldfaced a part of the verb that you can think of as the part of the verb that tells you who the subject is. So there's a morpheme "sé" that's showing up in the first three verbs, which tells you that the subject is "I." And there's a morpheme "síní" in the next trio of verbs that tells you that the subject is "you." And then there's a morpheme "soo" in the last three verbs. It tells you that the subject is "you two"-- that's you dual. To say that this is a single morpheme is a little bit misleading. The "s" at the beginning of this morpheme is probably partly a marker of what I'm translating here as past tense. Navajo verbs are extraordinarily complicated. So these particular verbs are in a class that takes this "s" version of the past tense, what are called s perfectives. There are also y perfectives. There are other kinds of perfectives. But anyway, this class of verbs takes these past tense morphemes, and this is what they look like. Are there any questions about any of this? Navajo has tone. That's what those markers are on the vowels. So the markers mark high tone. 
There's another marker that you're seeing underneath the vowel in the verb for "stand." That's a marker of nasalization. So "I stood" is something like [NON-ENGLISH SPEAKING]. So you have to nasalize the "eh" when you say that. Here's some Navajo. Here's some more Navajo. "I boiled it" is [NON-ENGLISH SPEAKING]. "You boiled it" is [NON-ENGLISH SPEAKING], and "You two boiled it" is [NON-ENGLISH SPEAKING]. So you can see, in the three verbs that I had on the previous slide and that are still up here, there are these three morphemes, "sé," "síní," and "soo." But for "boil," the morphemes are [NON-ENGLISH SPEAKING] and [NON-ENGLISH SPEAKING]. This is called sibilant harmony. What's going on here-- and here's another example of it-- you get "sé," "síní," and "soo" for "played" and "stood" and "crawled around," but [NON-ENGLISH SPEAKING] and [NON-ENGLISH SPEAKING] for "boiled" and "worked." And the relevant difference between "boil" and "work" on the one hand and "play," "crawl around," and "stand" on the other is that "boil" and "work" have, in the verb, a sibilant which is postalveolar. So for "boil," the stem you can think of as [NON-ENGLISH SPEAKING], and for "work," [NON-ENGLISH SPEAKING] is the verb stem. So "boil" has a "zh" in it and "work" has a "sh" in it, whereas "play" and "crawl around" and "stand" either don't have sibilants in them at all or, in the case of "stand," there is a sibilant, but it's not a postalveolar sibilant. It's an alveolar sibilant, "z." So this phenomenon is called sibilant harmony-- sibilant is another name for strident, which is the term you're familiar with.
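The pattern just named can be sketched as a toy rule. The prefix forms follow the lecture; the stem spellings and the s-to-sh substitution are rough orthographic stand-ins of my own, not accurate Navajo transcription.

```python
# A toy rule for Navajo-style sibilant harmony, using the lecture's
# s-perfective subject prefixes. Stems are rough orthographic stand-ins,
# and the postalveolar prefix variants are formed by swapping s for sh,
# which is an illustrative simplification.

POSTALVEOLAR = ("sh", "zh")   # written as digraphs in this sketch

def has_postalveolar(stem):
    """True if the verb stem contains a postalveolar sibilant (sh or zh)."""
    return any(d in stem for d in POSTALVEOLAR)

def subject_prefix(prefix, stem):
    """Make the prefix's sibilant match the stem's in place of articulation."""
    if has_postalveolar(stem):
        return prefix.replace("s", "sh")
    return prefix

print(subject_prefix("sé", "beezh"))    # shé: 'boil' has zh, so harmony applies
print(subject_prefix("soo", "nish"))    # shoo: 'work' has sh
print(subject_prefix("síní", "né"))     # síní: 'play' has no sibilant at all
```

An alveolar sibilant like "z" in the stem is simply not in the trigger set, which is why "stand" keeps the plain "sé" series.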
So let me just say all that again. There are these postalveolar stridents, "zh" and "sh," in "boil" and "work," and those are triggering the postalveolar versions of these morphemes, the [NON-ENGLISH SPEAKING] and [NON-ENGLISH SPEAKING] instead of "sé," "síní," and "soo." I started this by saying vowel harmony is pretty common. There's a lot of work on vowel harmony because you find it in lots of unrelated languages. Sibilant harmony is quite rare. Navajo has it. A few other languages have it. But it's similar in the sense that you have morphemes in these prefixes whose form is being conditioned by corresponding sounds in the verb stem. So for vowel harmony, it's the vowels. For sibilant harmony, it's these stridents. So if there's a strident or sibilant, an "s," "z," "sh," or "zh"-- those are the Navajo sibilants-- in the verb stem, then any of the sibilants in the subject prefix have to match it. They're getting the same thing. And maybe this idea of sharing a feature across sounds-- it's not just for vowel harmony. Maybe it's for assimilation generally. There might be some other kinds of harmony we'd want to apply it to. I wanted to show you another bit of Navajo before we leave Navajo behind. Here's how you say you are playing in Navajo. So we're switching tenses now. We're going to get away from sibilant harmony. This is the present tense-- the imperfective aspect, people call it when they work on Navajo. It's not exactly tense. So this is how you say you are playing in Navajo. It's [NON-ENGLISH SPEAKING]. This verb has three morphemes in it. You can think of the "né" at the end as the part that really means "play." There's a morpheme at the beginning, "na-," which is a prefix. We don't have to worry for this about what it means. It's a marker of a certain class of verbs, what are called atelic verbs. These are verbs in which you can continue doing something and there is no natural endpoint to what you do.
So if you compare something like walking around to walking to work-- if you walk to work, then when you get to work you stop. And if you interrupt me while I'm in the middle of walking to work, then I did not walk to work. That's a telic predicate, as opposed to walking around. If I'm walking around for an hour and then you stop me, well, I still walked around for a while. So that's atelic. It doesn't have a natural endpoint, something you're trying to achieve. So "na-" is a marker of atelic verbs like play or walk around. Feel free to ignore all that if you want to. So there's a "na-" at the beginning of this verb. There's a "né" at the end, which is the part that really means play. And then in between them, there's an agreement morpheme, something that's telling you what the subject is. And that morpheme is "ni" for you, and it's "sh" for I, and it's "oh" for you two, and it's "ii'" for we two. So you get [NON-ENGLISH SPEAKING] for those forms of the verb. So far, here's a bunch of Navajo. Oh, one thing I have to ask you to believe me about, because it would be hard to demonstrate, is that the "ee" in we two are playing is underlyingly "eed"-- so that apostrophe there, that's Navajo orthography. It represents a glottal stop, but that glottal stop is originally a "d." I'll just ask you to trust me about that for the next little while. Now I started with play, which is a comparatively simple verb. The verb really is "né." But there are some other verbs that we can talk about. You've seen versions of them in the preceding slides-- work and investigate-- that I now want to turn to. Work and investigate-- we can think of them as starting with clusters of consonants. So you are working. You've got the "na" and the "ni" that we have up above, and then the verb stem is [NON-ENGLISH SPEAKING]. People who work on Navajo really think that that "l" is a separate morpheme. They call it the classifier.
Really, for our purposes, all that amounts to is that there are a bunch of verbs that start with "l," and then the "l" is followed by another consonant. We don't have to worry too much about what that is. So for our purposes, "working" starts with two consonants, [NON-ENGLISH SPEAKING]. And "investigating" also starts with two consonants. The first one is a voiceless [NON-ENGLISH SPEAKING], so this [NON-ENGLISH SPEAKING] is "You are investigating it." Why am I telling you all this? This is where the action is. "Ni-" is the prefix for you singular, and it ends with a vowel. But if we switch to the other prefixes, which all end in consonants, then some interesting things happen. So I am playing-- we saw that the subject agreement prefix for "I" is a "sh." So you get [NON-ENGLISH SPEAKING]. The subject agreement prefix "sh," if you put it before [NON-ENGLISH SPEAKING]-- you might have expected that to be [NON-ENGLISH SPEAKING]. But it isn't. It's [NON-ENGLISH SPEAKING]. So the "l" vanishes. Similarly for investigate, we've decided that "investigate" is [NON-ENGLISH SPEAKING]. So it starts with two consonants, a voiceless "l" and then a "k." If you also have a [NON-ENGLISH SPEAKING] before all that, well, you get rid of the voiceless "l." So you get [NON-ENGLISH SPEAKING] for "I'm investigating it." The reason I'm doing these particular prefixes is that you can see different things with the different ones. So if you add "oh," what happens is that the "h" goes away, but the distinction between the voiced "l" in "work" and the voiceless "l" in "investigate" also goes away. So "work"-- we decided, looking at the "you" form, that it starts with a voiced "l." It's [NON-ENGLISH SPEAKING]. But "You two are working" is now [NON-ENGLISH SPEAKING]. So the "h" is gone, but that voiced "l" has become voiceless. It's now the barred "l." And we're getting a voiceless "l" down there in "You two are investigating it" too. It's now [NON-ENGLISH SPEAKING].
And the thing that I said is underlyingly "eed"-- well, the "d" is gone, but the distinction between the voiced "l" and the voiceless "l" has gone in the other direction. So now they're both voiced. So "We two are working" is now [NON-ENGLISH SPEAKING] and "We two are investigating" is now [NON-ENGLISH SPEAKING]. Do you remember when I said that Navajo verbs are extraordinarily complicated? Navajo verbs are extraordinarily complicated. There are all these morphemes, but they don't coexist peacefully. When we were talking about Native languages in which you have a whole bunch of morphemes, and they're easy to separate from each other, and there's nothing very spectacular that happens, you just attach them to each other-- this is the natural enemy of that kind of language. It's the opposite. Here, we have a bunch of morphemes, and you put them together, and there's like open warfare between them. There's all kinds of strange stuff that happens as they interact with each other. So let me summarize what we've just seen. If you have "sh" followed by two consonants-- like if you're going to say "I am investigating it," so "sh" is the "I," and "investigating," we decided, starts with [NON-ENGLISH SPEAKING] and then "k"-- you get rid of the first consonant. So you get [NON-ENGLISH SPEAKING], not [NON-ENGLISH SPEAKING]. If you have an "h" followed by two consonants, like in "You two are working," where you've got an "oh" followed by [NON-ENGLISH SPEAKING], well, the "h" goes away, but the "l" devoices. So instead of [NON-ENGLISH SPEAKING], we get [NON-ENGLISH SPEAKING]. And if you have a "d" before two consonants, well, the "d" goes away, but the "l" that's in the first consonant will become voiced. So it changes from [NON-ENGLISH SPEAKING]-- "We two are investigating it"-- to [NON-ENGLISH SPEAKING]. Do these rules remind you of anything other than the Navajo that we've just gone through?
So we have one rule that says if you have a "sh" followed by two consonants, get rid of the first consonant, and another rule that says, if you have an "h" followed by two consonants, get rid of the "h" but devoice the first consonant, and then another rule that says, if you have a "d" followed by two consonants, get rid of the "d," oh, but voice the first consonant. Yeah? AUDIENCE: I might be barking up the wrong tree, but was there a language that disallowed [INAUDIBLE]? NORVIN RICHARDS: There was indeed. You're not barking up the wrong tree. You're barking up the correct tree. It's not something I've ever had to say before, but it's true. You are barking up the Yawelmani tree. So we were talking about Yawelmani before, and for Yawelmani, similarly, we had a whole bunch of phonological phenomena, and we said we could write rules to handle all of these facts and maybe we should. In Yawelmani, it was like, if you have three consonants in a row and one of them is an "h," get rid of the "h." If you have three consonants in a row and none of them is an "h," you insert a vowel. There are various things that Yawelmani does to avoid having three consonants in a row. Navajo is doing the same thing, you're right-- the same thing in the sense that Navajo and Yawelmani both dislike sequences of three consonants in a row. None of these rules would handle anything in Yawelmani. So this is not what Yawelmani does to deal with three consonants in a row. But if we're willing to think of phonology the way I've been advertising, which is languages have certain problems, and then they have means to deal with the problems, then we can see Yawelmani and Navajo as having something in common. They have a kind of problem that they both care about. They don't like sequences of three consonants. They deal with it in different ways, but we can see them as having that property in common. Yes?
AUDIENCE: Is this a property that's been observed in other Indigenous languages of the Americas, or [INAUDIBLE]? NORVIN RICHARDS: I think it's not specific to Indigenous languages of the Americas. I think it's cross-linguistically not uncommon for languages to dislike sequences of three consonants. And we talked a little bit earlier on about the fact that some of your best cues for the value of a consonant are its effects on the nearby vowel. So I think we played around with [INAUDIBLE], showed you some spectrograms of me saying "bab" and "dad" and "gag." I showed you that during the stop itself, you can't hear a whole lot. It's what the stop does to the vowel that's nearby. If you have three consonants in a row, then at least the one in the middle is not next to a vowel, and so that could be a reason why it's bad to have three consonants in a row. It's perfectly possible there are languages out there that are fine with three consonants in a row. English is one of them, for example. So we have words like "strengths"-- long strings of consonants. Plenty of other languages are like that. But yeah, this is a kind of thing that languages do. And we're seeing it in Yawelmani and Navajo. So this is one virtue of the approach to phonology that I was advertising before, where we say, yeah, languages have things they dislike, and then, on the other hand, they have techniques for dealing with what they dislike. Yawelmani and Navajo have a property in common, namely the property of disliking three consonants in a row. They deal with that problem in different ways, but they have that property in common, at least. And so there's some kind of insight that we're gaining by being willing to think about phonology that way. If we didn't think about phonology that way, if we just made these rules, Navajo and Yawelmani would seem not to have any properties in common at all. We'd obscure that fact about them. Does that make sense?
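The three cluster repairs discussed above can be stated as one procedure over a string of segment symbols. Here "lh" is an ad hoc symbol for the voiceless (barred) l, and the small segment inventory is only what this toy needs, not Navajo's.

```python
# A sketch of the three Navajo cluster repairs from the lecture, operating on
# lists of segment symbols. 'l' is the voiced lateral, 'lh' stands in for the
# voiceless (barred) l. Segment spellings are illustrative, not orthography.

CONSONANTS = {"sh", "h", "d", "l", "lh", "k", "n", "zh"}

def repair(segments):
    """Fix any three-consonant sequence per the lecture's rules."""
    out = list(segments)
    i = 0
    while i + 2 < len(out):
        a, b, c = out[i], out[i + 1], out[i + 2]
        if a in CONSONANTS and b in CONSONANTS and c in CONSONANTS:
            if a == "sh":
                del out[i + 1]          # sh + CC: drop the cluster's first C
            elif a == "h":
                if b == "l":
                    out[i + 1] = "lh"   # h + CC: devoice the l ...
                del out[i]              # ... and drop the h
            elif a == "d":
                if b == "lh":
                    out[i + 1] = "l"    # d + CC: voice the l ...
                del out[i]              # ... and drop the d
        i += 1
    return out

# sh + l + C: the l vanishes, as in 'I am working'
print(repair(["sh", "l", "n", "i", "sh"]))   # ['sh', 'n', 'i', 'sh']
```

All three rules share one trigger, a three-consonant sequence, which is the property the discussion says Navajo has in common with Yawelmani; only the repair differs.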
So what have we done today-- we've looked at a variety of phenomena. I tried to introduce you to the idea of autosegmental representation, this idea that a single feature can get spread across different places in a word, and we've just gone back to talking about this idea of what are called ranked constraints, this idea that languages might have a number of goals in life, and the way to talk about differences between languages might just be in terms of how they prioritize their different goals. So that was an important part of how we talked about Finnish, where the rules seem to be do vowel harmony, oh, but don't create non-Finnish vowels. So the most important thing is don't create vowels that Finnish doesn't have, but modulo that, make all your vowels the same for whether they're front or back. And similarly, we just convinced ourselves that Navajo and Yawelmani, although they look different in lots of other ways, might have a high-ranking constraint in common, namely, don't have three consonants in a row. So what I want to do now is show you another domain in which people have fruitfully made use of this idea of ranked constraints-- that is, of languages having a bunch of comparatively simple conditions on what makes a word a well-formed word that interact with each other to give behaviors which can be quite complicated. And this all has to do with stress. So we've already talked some about stress. It was part of the flapping rule. So when we were dealing with how to deal with the difference in the pronunciation of the "t" in these two words, "atom" and "atomic," we said the rule that converts "t" to a flap needs to be sensitive to where stress is, so it only happens after stress, doesn't happen before stress. And that's why we get it in "atom" but not in "atomic." Stress often has effects on vowel quality. So if I were actually writing "atom" and "atomic," it would look something like that. 
You can see that "atom" and "atomic," if we ignore the "-ic," have almost nothing in common apart from an "m." So the two vowels are different and the sound that's spelled with a "t" is different in the two words. I think I referred to things like this when I was talking about the one good thing about English spelling, which is that a given morpheme is typically spelled more or less the same way, even if it's pronounced quite differently. Stress is also important for intonation. We won't have time to talk seriously about intonation in this class. We probably won't have time to talk about it at all. Intonation is the study of the rules for how the pitch of your voice rises and falls as you speak, which, as you can imagine, is a really complicated topic, and it's really interesting. Here's a quick and easy example of stress being important for intonation. There's a kind of chant-- it's called the vocative chant. It's used in various circumstances, but one is when you are calling someone. I especially think of it as when you're calling a child-- like you're a parent trying to get the children to come home. You say their name: "Lau-ra." There's a tune to that. If I were putting the tune here, you might say, yeah, there's a higher note, "Lau-," and a lower note, "-ra." That's the way to call Laura's name. You use that when you're calling children. You use it when you're selling hot dogs. So if you're in the ballpark, you say "hot dogs, popcorn, peanuts." There's this high-low tune that you sing when you're doing things like that. How do you call Pierre? "Pierre." It isn't "Pi-erre," yeah? It's "Pierre." So the high goes here. And then maybe also a low. And over here there's, I don't know, a low. "Pierre." What's the difference between Laura and Pierre? I mean, Laura is a girl and Pierre is a boy. But what else? AUDIENCE: Stress is on the end in "Pierre." It's on the first syllable with "Laura."
NORVIN RICHARDS: Yeah, so the high-- we were just talking about how you do the vocative chant. The high is on the first syllable of Laura. The high is on the second syllable of Pierre. Why? Let's chant some other names, longer names. Genevieve. "Genevieve." Is that right? "Genevieve," I think. Maybe this is the low here at the end. "Genevieve," high, low. Yeah. If you're selling vegetables at the ballpark along with hot dogs-- AUDIENCE: "Asparagus." NORVIN RICHARDS: "Asparagus," I think. This is low. AUDIENCE: "Cucumbers." NORVIN RICHARDS: "Cu--" AUDIENCE: "-cumbers." NORVIN RICHARDS: "Cucumbers." Wait, is it "cucumbers"? Or is it "cucumbers"? AUDIENCE: No, that sounds good. Yeah. NORVIN RICHARDS: Cucumbers. Yeah, so this is high and this is low, yeah? Let me do one more. Suppose you're selling jalapenos. AUDIENCE: "Jalapenos." NORVIN RICHARDS: "Jalapenos." Here's the high. Forget about the lows for a second. Where does the high go? Yes. AUDIENCE: The stressed syllable. NORVIN RICHARDS: The stressed syllable, yeah? So it goes on "Genevieve," "Laura," "Pierre," yeah. "Cucumbers," "asparagus," and "jalapenos," that's where the high starts, yeah. And before that high you have lows. Yeah. AUDIENCE: There are some situations where we keep the high for a few syllables. NORVIN RICHARDS: Yeah. AUDIENCE: After the stress. NORVIN RICHARDS: Yeah, like where? AUDIENCE: "Cu"-- wait, oh, "Genevieve." NORVIN RICHARDS: "Genevieve." AUDIENCE: And "asparagus." NORVIN RICHARDS: And "asparagus." No, "asparagus." Yeah, so for those you just get a low at the end and it's high until that. Maybe in all of these, you get a low at the end, right? Even in Pierre, where there's a high at the end, there's also a low. I think you don't say Pierre, right? It's Pierre. You have to go back down to low. So the vocative chant goes: put a high on the stressed syllable, put a low at the end, yeah? And then what's the difference between "cucumbers" and "asparagus"?
Or between "Genevieve"-- wait, do we want "Genevieve"? No, we don't want "Genevieve." Sorry, Genevieve. My sister's name is Genevieve. Nobody tell her that I said that. What's the difference between "cucumbers" and "asparagus"? Yeah. AUDIENCE: Well, we haven't learned syllable division yet. NORVIN RICHARDS: Yeah. AUDIENCE: Singing "cucumbers," that's part of that second syllable. NORVIN RICHARDS: Yep, yep, that's probably right. AUDIENCE: Whereas in "Genevieve" that is not part of the second syllable. Same with "asparagus." NORVIN RICHARDS: Mm-hmm. Yeah. Yeah, that's probably right. Yeah, yeah, that's probably true. Does it? AUDIENCE: We have some data like you're high until you hit an obstruent. And then-- NORVIN RICHARDS: Ooh, oh, gosh. You're high until you hit an obstruent. AUDIENCE: So low until the stress. And then-- NORVIN RICHARDS: Oh, man. Let me try to think of some more examples. That can't possibly be right, but it covers the data that we've got so far. Yes. [LAUGHS] AUDIENCE: The last one. NORVIN RICHARDS: The last one has to be low, yeah. So the last one has to be low. That's right. And the high goes on the stressed syllable. And then you stay high maybe, so far. For most of these what happens is that you stay high until you hit a low, right? So it's "Genevieve." The "vieve" is at the end, or it's "asparagus" or it's "jalapenos." And these are low. So you're low before you get to the high. And the real question is, what's the difference between "cucumbers" and "asparagus," where this high spreads and this high just stays here? Why is there a low not just here, but also here? And I think that's true, right? We agreed that it is "cucumbers." It isn't "cucumbers." Let me try to think of another word. AUDIENCE: "Endocrinology." NORVIN RICHARDS: "Endocrinology." What's a better word? That's a good word. I'm sorry. I'm trying to think of another word that has the property that I'm looking for. How about "interdisciplinary."
"Interdisciplinary. Interdisciplinary. Interdisciplinary." Yeah, no. "Interdisciplinary." So suppose I'm selling interdisciplinary. [LAUGHTER] Or suppose I've named my child "Interdisciplinary." It's "Interdisciplinary," is that right? Yeah, I think that's right. So my-- that's a nice name for a child. I mean, "Interdisciplinary." It's low here, yeah? Man, so let me tell you what I think is going on. We've talked about words as though they only have one stress, "cucumber," "asparagus," "Genevieve." Let's take "interdisciplinary." How many stresses are there in "interdisciplinary"? Where's the main stress in "interdisciplinary"? AUDIENCE: It's "dis" here. NORVIN RICHARDS: Yeah, it's "dis," it's here, where the high is. So the high goes on the main stress. What other syllables in this are stressed? AUDIENCE: "-nary." NORVIN RICHARDS: "-nary." There's another stress. And maybe here, I don't know. AUDIENCE: Really, OK. NORVIN RICHARDS: Yeah. AUDIENCE: Is this high on the main stress and then it stays high until it reaches the next like substress? NORVIN RICHARDS: I think it's something like that. So it's high and it stays high until you get to the next stressed syllable. The next stressed syllable is where the low starts. Yeah. And there's always a low at the end. So for the difference between "cucumbers" and "asparagus," I think, is that "cucumber" has a secondary stress on the second syllable, whereas "asparagus" only has one stress. It's here. Yeah, and that might be related to your observation. This has a closed syllable here and closed syllables are often associated with stress. This will not be on the final exam. There is no final exam. Yeah. But here's an example of stress being useful for understanding a phenomenon, in this case, the phenomenon of the vocative chant. All of you are about to be invited to come up with a research topic, something to work on. This is something you could try working on with the person that you're working with. 
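The generalization arrived at on the board-- low before the main stress, high from the main stress until the next stressed syllable, then low to the end-- can be sketched as a small tone-assignment function. Syllabifications and stress positions are supplied by hand, and final-syllable stress (as in "Pierre," where the high is followed by a fall within the syllable) is simplified to a plain H.

```python
def vocative_chant(syllables, main, secondary=()):
    """Assign H/L per the board generalization: L before the main stress,
    H from the main stress up to the next stressed syllable (or, failing
    that, the final syllable), then L to the end."""
    later = [i for i in sorted(secondary) if i > main]
    high_end = later[0] if later else len(syllables) - 1  # last syllable is L
    high_end = max(high_end, main + 1)   # the main-stressed syllable is H
    return [(syl, "H" if main <= i < high_end else "L")
            for i, syl in enumerate(syllables)]

# 'asparagus': one stress, so the high spreads up to the final low
print(vocative_chant(["a", "spa", "ra", "gus"], main=1))
# 'cucumbers': the secondary stress on 'cum' starts the low early
print(vocative_chant(["cu", "cum", "bers"], main=0, secondary=(1,)))
```

On this sketch, the difference between "cucumbers" and "asparagus" is carried entirely by the presence of a secondary stress, which is exactly the hypothesis the discussion ends with.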
Ask them how they call their children, right? And also how they sell things. And find out whether there's a special tune for that, because in many languages there is. It would be interesting to see, yeah. OK, all right. So, yeah, this was all an attempt to convince you that it might be interesting to think about stress. It's relevant for various types of phonological phenomena. It was relevant for the flapping rule in English. We've just seen it's relevant for the rules for vocative chant. I'm going to leave those on the board just so future classes can wonder what the heck we were doing. In languages where words have one fixed position for stress-- so there are languages out there where the position of stress is always fixed. These are data from a database called WALS, the World Atlas of Language Structures, which is free and accessible. I recommend going and having a look at it. When I got these data, there were 282 languages described in the WALS database. The WALS database describes languages with respect to a variety of phenomena, not always accurately, I have to say. So if you have a particular language you're interested in, you should double check. But it's a nice first pass, looking at a bunch of data. In that database you can see that, among languages that have just one place where stress always goes, there are languages where that one place is the first syllable. That's pretty common. About a third of the languages like that do that. The most common type of stress for a language like that is penultimate stress. Languages like Zulu and Polish have stress on the next-to-last syllable. And then you can see there are some other patterns that are less common. So the last syllable is not uncommon. And then there are some really weird systems, like Hocak, which is a Native American language that has stress on its third syllable. That's quite rare.
There are some languages that have stress on the antepenultimate syllable, and some languages that have stress on the second syllable. So these are all patterns that you find. As you can see, these are patterns that are best described as stressing a syllable that is counted either from the left edge or the right edge of the word. There are other systems that are not hard to imagine but that don't exist. And so people who are developing theories of stress typology-- one of their goals, for example, is to have a theory that rules out languages with the rule, stress the syllable that's closest to the middle of the word. It's distressingly easy to come up with a theory that predicts the existence of languages like that. And they don't exist. So you want your theory to not capture languages like that. One way to describe this kind of pattern is to have constraints, like the constraints that say things like don't have three consonants in a row-- constraints that say things like put stress as far to the left or as far to the right in the word as possible. And then have that constraint interact with other constraints that say things like don't stress the last syllable or don't stress the first syllable. So for example, the quite common pattern that you get in Zulu and Polish, where you stress the next-to-last syllable-- that's one that says put stress as far to the right in the word as you can, oh, but don't stress the last syllable. So don't stress the last syllable, that's the most important thing. But putting that aside, put stress as far to the right as you can. That means put stress on the next-to-last syllable. That's one way of describing that kind of stress system. Yeah. Yes. AUDIENCE: What do these languages do with a word of only one syllable, where the only syllable is also the last one? NORVIN RICHARDS: Yeah, so there are languages-- we looked at this in Lardil-- that just don't like words of only one syllable.
They rule them out. But you're absolutely right. So languages that have systems like these, typically it's like stress the next-to-last syllable if there is one. So in a monosyllabic word, you stress the one syllable that you've got. Yeah, that's a very good point. That's a thing that you find in these languages. Yeah, yeah, yeah. It would be nice if there were some correlation between having a stress system that wanted to avoid final stress, let's say, and being like Lardil in not liking monosyllables, right? It would be nice if the world worked that way. As far as I know it doesn't. So the condition that says it would be better for your words to be at least two syllables long doesn't seem to be motivated by these considerations. I sure wish that it was, because then we would have an explanation for part of that. But it's not clear to me that there is any connection between those things. Yeah. Now, we were just talking about languages in which stress is only in one place. But as we actually saw as we were looking at the vocative chant, it's possible for a language to have multiple stresses in a single word. English does that all the time. And there are languages out there that have patterns where you stress more than one syllable. Here are some examples from Pintupi, which is an Aboriginal language of Australia, in which the basic rule is stress the odd-numbered syllables. So put stress on the first, third, fifth, seventh, et cetera, syllables of the word. So again, you can think of that as a little pile of constraints. Stress the first syllable, don't stress the last syllable, and don't allow yourself to have two syllables in a row that are the same-- that is, both stressed or both unstressed. So, again, you don't always succeed in doing all of these things.
In a language like this, whenever you have an odd number of syllables in your word, you can achieve the first two-- stress the first syllable, don't stress the last syllable-- and actually the next one, don't allow two stressed syllables in a row. But the last one, don't allow two unstressed syllables in a row, you achieve that in the words that have an even number of syllables, but not the ones that have an odd number of syllables. If there's an odd number of syllables, like in the second word there that has three syllables, you only stress the first syllable. And you put up with the fact that it ends in two unstressed syllables, because there's no way to avoid that while also satisfying these higher-ranking constraints like don't stress the last syllable and don't have two stressed syllables in a row. Yeah. AUDIENCE: How come we go through these four different rules, as opposed to just declaring stress every other syllable? NORVIN RICHARDS: Stress every other syllable starting from the left edge, yeah. Why am I trying to decompose this into constraints like this? I guess in part it's an attempt to constrain the kinds of stress systems that we could find, right? So if we let ourselves say things like stress the odd-numbered syllables-- I mean, first, stress the odd-numbered syllables by itself doesn't actually cover this, even though it's what I said. It's really stress the odd-numbered syllables unless you're going to stress the last syllable. Don't stress the last syllable. So even just saying that, what I just said is best stated in terms of conflicting constraints. It's like the most important thing is don't stress the last syllable. And then underneath that there's stress the odd-numbered syllables.
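That two-constraint picture, with don't stress the last syllable outranking stress the odd-numbered syllables, is small enough to sketch in a few lines of Python. This is just an illustration of the idea, not anything from the course materials:

```python
# Sketch of the Pintupi-style ranking: the lower-ranked rule stresses
# the odd-numbered syllables, and the higher-ranked rule then removes
# stress from a final syllable.
def pintupi_stress(n_syllables):
    """Return one boolean per syllable: True means stressed."""
    # Lower-ranked constraint: stress syllables 1, 3, 5, ...
    # (0-indexed: the even positions).
    pattern = [i % 2 == 0 for i in range(n_syllables)]
    # Higher-ranked constraint: don't stress the last syllable
    # (except in a monosyllable, where it's the only syllable there is).
    if n_syllables > 1:
        pattern[-1] = False
    return pattern
```

A three-syllable word comes out stressed-unstressed-unstressed, ending in two unstressed syllables, which is exactly the case just discussed.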
And then, by decomposing stress the odd-numbered syllables into stress the first syllable, don't have two stressed syllables in a row, don't have two unstressed syllables in a row, we constrain the kinds of stress systems that we could find. So there aren't stress systems that say stress the prime-numbered syllables or stress the powers of 2, right? There are certain kinds of constraints that you use for stress systems and not others. And it's one of the kinds of things we're trying to get. A good question. Other questions? Yeah. AUDIENCE: What would happen if you ordered those three rules differently? NORVIN RICHARDS: Yeah. AUDIENCE: Like, put don't have two stressed syllables in a row second-- NORVIN RICHARDS: Yeah. AUDIENCE: --and don't stress the last syllable third. Would it come out differently? NORVIN RICHARDS: Yes, yes, yes, I see what you mean. So we could ask ourselves, what would happen if you had-- and this is either a virtue or a flaw of this way of talking about it, what would happen if the second thing there, don't stress the last syllable, were the first constraint, or if it were the third constraint, or if it were the last constraint. We get to ask ourselves, do we get different systems if we mess with the constraints that way? And this is indeed the kind of thing people do when they're doing this kind of work, you know. Here's another stress system. This is Passamaquoddy, in which the rules go, stress the first syllable, don't stress the last syllable. Those are the most important things. You can see that those are true in all of these examples. And don't allow two unstressed syllables in a row. And then down at the bottom of the list is don't allow two stressed syllables in a row. And it's at the bottom because, well, Passamaquoddy has two stressed syllables in a row all the time.
Impressionistically what's going on in Passamaquoddy is you stress the first syllable, and you stress every other syllable counting backwards from the end of the word. So "dirt" is "tupqan," and then "in the dirt" is "tupqanok." So you get stress on the first two syllables, more stress actually on the second one than on the first. I wish that this were not the Passamaquoddy stress system, because I'm trying to learn Passamaquoddy. And it means that if you're trying to say something like "Let's walk around on top," you have to ask yourself, OK, is there an odd number of syllables in that word or an even number, because that's how you know whether to say "tehsahqapasultine," with stress, main stress on the next-to-last syllable, with two stressed syllables at the beginning, or not. Yeah, so it's a pain. Yeah, or here's Tauya, where, again, we could decompose this. This is not unlike Passamaquoddy, except you're stressing the first syllable. And you are stressing the last syllable. And then you are stressing every other syllable counting backwards from the end. So you end up with bi-syllables having two stresses, unlike in Passamaquoddy, where there's only one. That's one kind of stress system, these kinds of stress systems where you have repeated stresses. There's another kind of stress system that's going to have to include constraints that make reference to particular morphemes. So I want to give you an example of that. The classic example that I know of is from Russian. So I'm going to show you some Russian data. English has examples like this, too. We've talked about it before, the fact that "electric" has stress on its second syllable, but that when you add the suffix "ity," you do various interesting things. You change the "k" sound at the end of electric to an "s," so you get "electricity." But also you shift the stress, so it's not "electricity," it's "electricity" with stress on what was an "ik" and is now an "is." 
For Russian, the basic stress rule goes something like this. Russian morphemes get to indicate whether they want stress or not, and where they want the stress to be. And the basic rule goes, stress the first lexically-accented syllable, that is, the first syllable that is marked as saying, hey, I need stress. If there aren't any lexically-accented syllables, then you stress the first syllable of the word. Russian stress is quite intimidating if you're trying to learn Russian. So if you're interested in trying to learn Russian, please try to remember this rule that will save you a lot of heartache. So here are some examples. The word for town, "gorod," is an unaccented word. And that means that it has stress on the first syllable, because the rule is stress the first syllable if there aren't any lexically-accented syllables. And the dative singular suffix, "-u," that's showing up for this class of nouns, is also lexically unaccented, so it doesn't do anything to the stress. So "gorod" is the word for town, and "gorodu" is the dative singular. And again, don't worry about what dative singular means. It's a thing. It's a marker you can put on nouns. But the dative plural suffix, which puts the noun in a different form, is a lexically accented suffix. It does require stress. And so stress shifts to that particular suffix. So you get initial stress because there's no lexical accent in gorod and gorodu, but if you add the dative plural suffix, you get "gorodam," with the stress shifting to the end, because this is a suffix that demands stress. There are also nouns that want stress on a particular syllable. So the word for nut, which is "orex," wants-- has lexical accent on its second syllable. And that means that, no matter what you add to that noun, the stress is not going to shift. So the word for nut is "orex." And if it's dative singular, if it's dative plural, you still get stress on that second syllable, because the basic rule is stress the first lexically-accented syllable.
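That rule is simple enough to put in code. The encoding of morphemes as pairs here is my own, purely for illustration: each morpheme is a syllable count plus the index of its accented syllable, if it has one.

```python
# Sketch of the Russian rule: stress the first lexically accented
# syllable; if there are no accents at all, stress the first syllable.
# Encoding (mine): (number_of_syllables, accented_index_or_None).
def russian_stress(morphemes):
    """Return the 0-based index of the stressed syllable in the word."""
    offset = 0
    for syllable_count, accent in morphemes:
        if accent is not None:
            return offset + accent  # first lexical accent wins
        offset += syllable_count
    return 0                        # no lexical accents: initial stress

GOROD = (2, None)   # 'gorod' "town": unaccented stem
DAT_SG = (1, None)  # dative singular -u: unaccented
DAT_PL = (1, 0)     # dative plural -am: demands stress
OREX = (2, 1)       # 'orex' "nut": accent on its second syllable
```

So russian_stress([GOROD, DAT_PL]) lands on the suffix syllable ("gorodam"), while russian_stress([OREX, DAT_PL]) stays on the stem's second syllable ("orexam").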
So in the last example, the dative plural of nut, the stress is still on that second syllable. It's "orexam," even though, as we've seen, "am" attracts stress to itself. It only attracts stress to itself from nouns that don't themselves have lexical accent, yeah. There, now you know more about that-- I tried years ago to learn Russian. And this was one of the things that made me give up. It was like, the stress just doesn't make sense to me. I don't understand it. But now you know, and so do I. I should go try to learn it again. So one way to think about this is you say a Russian morpheme can have a lexical accent. As part of its specification, it can have a little note saying, hey, I want stress to be here. And then we need constraints that say things like, there's only one stress in a word. You want to stress lexical accents, and you should put your stress as far to the left as possible. So in a word like town, there aren't any lexical accents. And so the only relevant constraints, I guess, are the first and the third. There should be one stress, and it should be as far left as possible. So it's at the beginning, "gorod." For "gorodam," the dative plural of town, there can only be one stress. You want it to be as far left as possible. But there's a lexical accent on the dative plural suffix. And so you put it on "am." That's as far left as you can go, yeah. So the last constraint doesn't get to do very much. And then for nut, there's lexical accent on the "ex," on the second syllable of the noun. And because you're putting stress as far left as possible, that's where the stress is always going to go, no matter what suffixes you add. Yeah, that's the basic picture. OK, we will do a little bit of Kashmiri and then we'll stop, I think, for today. This actually came up when we were talking about cucumbers. There are languages that have stress systems that care about what's called syllable weight.
That is, they care about whether vowels are long or short and also about how many consonants there are after vowels. And Kashmiri is a classic example of this. And I'll show you a little bit of this and we'll see whether we can find a good stopping place somewhere in the middle. Here are some Kashmiri words. There are lots of Kashmiri words that have initial stress. Anybody here speak Kashmiri, language of Kashmir? OK. So lots of words have initial stress. Basically, initial stress is what you get if nothing else is happening. But if there's a long vowel in the word, even if it's not the first syllable of the word, then the long vowel will get stress. So that verb, to finish, has a long vowel as its third vowel, and that's where the stress goes in that word. A long vowel will attract stress except when it's final. So if it's the last vowel in the word, it doesn't get stressed. So the word for book has two syllables. And the second syllable has a long vowel. But the first syllable gets the stress, because Kashmiri, like several other languages we've looked at, doesn't like stressing final syllables. If there are two long vowels, then you stress the first one. And another kind of thing that attracts stress is a vowel that's followed by two consonants. So here's the word for "Friday," where the second vowel is followed by an "r" and a "v." It's the first example, I guess, that we've had of a vowel that's followed by two consonants. And a vowel that's followed by two consonants also preferentially gets stressed. It acts like a long vowel. If you have both a long vowel and a vowel that's followed by two consonants, then you prefer the long vowel. So you're getting that here in "door." In "Friday," you're not getting stress on the long vowel. That's because the long vowel is the last syllable of the word, and the more important thing is don't stress the last syllable.
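Those generalizations, long vowels beat consonant-closed syllables, final syllables lose no matter what, and ties go left, can be lined up in a small scoring sketch. The encoding of syllables as pairs is my own toy representation, not anything from the slides:

```python
# Sketch of weight-sensitive stress: a syllable is encoded (mine) as
# (has_long_vowel, is_followed_by_two_consonants).
def kashmiri_stress(syllables):
    """Return the 0-based index of the single stressed syllable."""
    last = len(syllables) - 1

    def weight(i):
        long_vowel, two_consonants = syllables[i]
        if i == last and last > 0:
            return 0         # top-ranked: never stress a final syllable
        if long_vowel:
            return 2         # long vowels attract stress most strongly
        if two_consonants:
            return 1         # a vowel plus two consonants is next best
        return 0

    best = max(weight(i) for i in range(len(syllables)))
    # "As far to the left as possible": leftmost syllable of top weight.
    return min(i for i in range(len(syllables)) if weight(i) == best)
```

For the "book"-type word (short vowel, then a final long vowel) this picks the first syllable; for the "Friday"-type word (short vowel, then a vowel plus two consonants, then a final long vowel) it picks the second.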
So we can fruitfully think of Kashmiri stress as a bunch of ranked constraints, again, which say things like don't have more than one stress. Don't stress the final syllable. Stress your long vowels. Stress your vowels that have multiple consonants after them, and then put stress as far to the left in the word as possible. So stress long vowels, we got that in that word that had a long vowel as its third vowel. But we also saw that outranking stress long vowels is a constraint that says don't stress the final syllable. So in this word for "book," there's a long vowel in the final syllable but you don't stress it, because it's final. And avoiding final stress is more important than stressing long vowels. There's a rule that says don't have more than one stress. So you like stressing long vowels, but if you have a word that has multiple long vowels in it, you only get one stress. You don't get two, yeah? And the general rule is, put stress as far to the left in the word as possible. So when you have two long vowels, you stress the first one. You can't stress them both. You also don't stress the second one. You stress the first one because you want stress to be as far to the left as possible. That's also why, when there are no long vowels, and no vowels with multiple consonants after them, you get initial stress. And then finally, stress vowels with multiple consonants after them, that's got to be in there somewhere. And it's got to be below stress long vowels, because when you have both a vowel with multiple consonants after it and a long vowel, you stress the long vowel and not the vowel with multiple consonants after it.
It would be quite long and complicated. If we describe Kashmiri stress this way in terms of ranked constraints, the constraints themselves can all be pretty simple. They can be things like stress your long vowels. That's a pretty natural thing to do. Or avoid final stress. Or put your stress as far to the left in the word as possible. And all of the action is figuring out what order these constraints are in, which are the most important ones and which are less important. And by ranking them that way, you end up with this quite complicated array of data, which we've now gone through quite quickly. But you'll have the slides. So you'll have a chance to go through this in your own time. Are there questions about Kashmiri? Since we have a few more seconds here, we can talk about Kashmiri in more depth if people would like, or anything else. As I said at the beginning, I could hear people talking about the problem set. If people have questions about the problem set, we can talk about them. Yeah. AUDIENCE: What about, like, English? NORVIN RICHARDS: Yeah. AUDIENCE: Is there a system like this? NORVIN RICHARDS: Oh. AUDIENCE: Or is it just kind of-- NORVIN RICHARDS: So, yeah, here I am talking about stress systems. And I've talked to you about Kashmiri and I've talked to you about Passamaquoddy. I've talked to you about various languages, Russian, and I've strenuously avoided talking to you about English. English stress is quite complicated and messy. There are books about it. It's partly, like so much about English, the product of several different languages with different stress systems interacting with each other. So there was a Germanic stress system that English inherited from its Germanic substrate. And then the French invaded in 1066, bringing their own stress system. And one of the results of that is that English stress is the kind of thing you can write books about.
So there's a reason I haven't tried to do English stress on the slide. You think Kashmiri is complicated, yeah. It's, yeah, it's complicated. It's complicated in interesting ways. We have various places, for example, where the difference between a verb and a noun has to do with stress. So we have verbs like "perMIT" and nouns like "PERmit," right? Get a permit to do something. And there are a number of pairs like that, which is one of the kinds of things that's interesting to think about. That's not something that we inherited. And so when people are trying to figure out how-- we talked earlier about the possibility that nouns and verbs could have different phonologies, and there are times when something like that seems to be happening, like in that kind of example. So when you write your book on English stress, that's one of the chapters you have to write, the difference between "perMIT" and "PERmit." Yeah. Yeah. AUDIENCE: For the problem set, is there some method we can use to find all the rules, like a way of figuring out what the basis for them is? NORVIN RICHARDS: So what you want to do is look systematically at each vowel. Here's what I would do in your shoes. Look systematically at each vowel and make yourself notes for each vowel about the environments that they show up in. So, say you start with the vowel "a." Make yourself notes about all of the things that are around "a." Maybe you could start with the hope that the relevant things are the consonants on either side of it, or whether it's at the beginning or the end or the first syllable or the second syllable or whatever. Make yourself notes for each word that has an "i" in it, where the "i" is showing up. And then look at your notes and hope that something jumps out at you. That's the only thing I can think of to suggest. Sorry, that's probably what you're doing. But if that's what you're doing, you're doing well. Are there other questions?
OK, all right, so I'll see you on Thursday when we will start talking about syntax.
MIT 24.900 Introduction to Linguistics, Spring 2022. Lecture 20: Semantics, Part 4. [SQUEAKING] NORVIN RICHARDS: I'm going to start today with a little bit of a review of something that we talked about sort of earlier when we were doing syntax, because it came in handy. At a certain point, you guys were pushing me with questions about the fine details of syntactic structures. And I wasn't able to answer your questions without making reference to another kind of test for syntactic structure, which was binding theory. And so I showed you binding theory back then, but I'm going to go over it again in a little more detail. And we'll talk about some things that we didn't get a chance to talk about last time. And then we'll go on from there. Binding theory is the theory that's meant to handle facts like the facts that are on this slide, namely, the fact that these are both grammatical sentences, but that they mean different things. The first one, "Susan likes herself," you could paraphrase as "Susan likes Susan." "Herself" has to refer to Susan. And possibly relatedly, "Susan likes her." You don't know who "her" is. But you do know one thing, which is that it is not Susan. That's not a general fact about "her." It's a fact about "her" in this sentence. So if I said "I like her," then "her" could be Susan, whoever Susan is. But if it's "Susan likes her," then that "her" cannot be Susan, or at least not the same Susan as the subject, yeah? Binding theory is meant to capture facts like this. And these starred sentences incorporate some of the things that I just said. So these little letters, these subscripts that people put on these words, are meant to indicate things about what can corefer with what. So "Susan likes herself," the first sentence, is OK. And the third sentence is starred. Because if you say the string "Susan likes herself," then "Susan" and "herself" have to refer to the same person.
And we indicate that by giving them the same subscript. So "Susan" sub a likes "herself" sub a, that's OK. But "Susan" sub a likes "herself" sub b is not OK. The two subscripts have to be the same. And with a pronoun, it's the other way around. They have to be different. OK so far? Nobody is sitting there quietly thinking, "No, wait. I say 'Susan likes her,' and I mean 'Susan likes herself' all the time. No wonder nobody ever understands me." Nobody's having that experience? OK, good. So, you know, looking at this without thinking about it too hard, we could think, OK, maybe pronouns-- we'll use "pronoun" to refer to words like "her"-- cannot corefer with anything in the sentence, where "corefer" means refer to the same person. And anaphors, which is a name for fancy words like "herself," must corefer with something. So in "Susan likes herself," "herself" needs to refer back to Susan. You have to give "herself" something for it to corefer with. And when we did binding theory before, we already got this far. A word like "herself" has this special property that it has to corefer with something else in the sentence. But interesting fact, it can't just corefer with anything. And this is why there's a theory. You have to do some work to figure out which kinds of things it can refer to. So "Susan likes herself" is fine, but "Susan's father likes herself," where "herself" is Susan, is no good. One hopes that Susan's father likes Susan, but this is not the way to say it. So "herself" can't refer back to Susan in a sentence like this, which knocks out some of the simplest, easiest theories you might have had from the first batch of sentences on the earlier slide. You might have thought, well, you know, anaphors, they need to refer back to something. And you might even have thought they have to refer back to something that's earlier in the sentence. They're kind of a pointer that points back to something that's before them.
And what we're seeing is that it's more complicated than that. They can't just pick out anything. There are some restrictions on what they can refer to. We have to understand what those are. There's a classic story, which we've already been through-- I'll just show it to you again-- about why it's OK to say "Susan must like herself," but bad to say "Susan's father must like herself." The classic story goes like this. What we're trying to do is come up with a structural relationship between "Susan" and "herself" that holds in the tree in the left, but not in the tree in the right. We're talking about structural relationships because, well, your first hope, which might have been, yeah, "herself" has to refer to something that's been mentioned before, the way you might have hoped language was designed, there are no languages like that. There are, instead, languages like this that apparently care about trees when they're deciding which things can refer to which other things. And so we're going to try to develop a structural distinction between the relationship between "Susan" and "herself" in the tree on the left and the relationship between "Susan" and "herself" in the tree on the right. As I said, we've talked about this before. We're just reviewing it. The story that people have classically come up with makes use of the fact that Susan is more deeply buried in the tree on the right than she is on the left. So there's a relation called c-command. It's a relation that can hold between two nodes in a tree. It says alpha c-commands beta if every node that dominates alpha dominates beta. So you c-command basically the thing that you were merged with and everything that that dominates. So in this tree, the NP "Susan" c-commands "herself." The way we find that out is we ask, what are the nodes that dominate the noun phrase "Susan"? And there's only one, that TP that I've put up there. And that TP also dominates "herself," and so "Susan" c-commands "herself." 
Does "herself" c-command "Susan"? No. Yeah, you're all shaking your heads appropriately, right? And the way to find that out is to ask, what are the nodes that dominate herself? And they include that TP, but somebody name another one that doesn't dominate "Susan." AUDIENCE: VP. NORVIN RICHARDS: VP, yeah. Or T bar, yeah? So there are at least those two which dominate "herself," but don't dominate "Susan." So "Susan" c-commands "herself," but "herself" doesn't c-command "Susan." And what we're seeing is that maybe an anaphor needs to refer to something that c-commands it. So in this sentence, "Susan must like herself," "herself" is referring to Susan. And "Susan" c-commands "herself," as opposed to the other tree, "Susan's father must like herself," which is no good. What's wrong with it? Well, what's wrong with it is, on this story, there are multiple nodes dominating "Susan." There's TP, but also NP and DP. And some of those don't dominate "herself." So "Susan" doesn't c-command "herself." Binding theory makes use of this. Let me just call your attention-- maybe your attention is already firmly fixed on this-- to how weird this is. I just alluded to this, but let me just say it again. I'm showing you a bunch of facts about English. And different languages do have different anaphors that behave in different ways. But here's something no language ever does, as far as I know. There are no languages in the world where an anaphor can refer to anything that was said earlier. There are no languages in which both "Susan likes herself" and "Susan's father likes herself," in which they're both OK. There just aren't any languages like that in which the important thing is does the noun precede the anaphor, which is, so first of all, interesting. But second of all, weirdly, surprisingly, interesting, let's get ourselves in the frame of mind where we can be impressed by a fact like that.
Because, look, imagine what you were like when you were a baby, right? Here you are. You've just been born. You're hearing people speak, right? They're saying sentences in this language that's going to be your native language, and you're trying to learn it. You're trying to figure things out about it. What's the one thing that you know? You hear people talking. And they're saying things like-- here, I'll put a sentence on the board in a language that I hope none of you speak. So here's a sentence. You hear one of your parents saying, [NON-ENGLISH SPEECH].. Maybe they write it down and show it to you. So you know, for some reason, that it's 1, 2, 3, 4, 5 words long. What else do you know about that sentence? Do you know which words in that sentence c-command which other words? Good lord, no. Yeah? Do you know which words in that sentence precede which other words? Yeah. So you know that [NON-ENGLISH] precedes [NON-ENGLISH].. You know that. But you don't know whether [NON-ENGLISH] c-commands [NON-ENGLISH] or not. So the one bit of information that you have immediately from the get-go, which words come before which other words, you ignore. So languages don't ever make use of that when they're doing binding theory. So you would have thought this one sort of easy to access bit of information that all of us have from day one, no matter, you know-- don't know anything else about your language. You at least know which things come before which other things. Binding theory never, ever cares about that. It only cares about c-commands, which is this thing that you have to know something in order to understand. It's as though our brains are designed to build language in some ways and not others and to ignore certain sources of data which are very, very fruitful. You have lots and lots of data about which things precede which other things, but you don't care. It's kind of interesting. Yeah. 
AUDIENCE: If, for some reason, we did care about the order of the words that we're saying, would that create ambiguity? Because in some languages, like German, half of it is head-initial and half is head-final. And so you'd be confused. NORVIN RICHARDS: So which things precede and follow which other things? I mean, there are ambiguities in binding theory. We've been talking about simple sentences so far, but you can say things like-- if anybody is wondering, this is [NON-ENGLISH]. It means "Is drinking the man the water," so "The man is drinking the water." Sorry, I just told you that so you wouldn't have to worry about that. There are ambiguities in binding theory. I've been talking about sentences where there aren't ambiguities, but you can say things like "Susan told Stacy about herself," where probably the easiest reading for that is that Susan told Stacy about Susan. But if Stacy is unclear about herself, there's something Stacy doesn't know about herself, Susan could tell Stacy about herself and have that mean Susan told Stacy about Stacy, I think. Am I making that up? I think that's true. This is just ambiguous. So here's a sentence. So we're OK with ambiguity usually. There are plenty of places where, if we were to draw a tree for this, we'd want to draw a tree where "Susan" and "Stacy" both c-command "herself." And so it has a choice of which kind of thing it wants to refer to. All it cares about is which thing c-commands it, but it doesn't care about which thing precedes it in any language on Earth. It was kind of interesting. Yeah, go.
You're raising-- actually, I'm glad you asked that question because it raises a good point, which I should have made. And so thank you for raising it, which is that languages use "herself," "himself," "myself," all these "-self" kinds of doohickeys, for a bunch of different things. We're going to be concentrating on the "herself" that sits where a noun phrase would normally sit and refers back to another noun phrase. But there is this other, sometimes called, an intensive use of reflexives. And I can't spell today-- intensive, where you say things like "John, himself, fixed the computer," or, for that matter, "John fixed the computer himself," where that means something not so different from "John fixed the computer." It means something like "John fixed the computer, and no one else did." So "Susan's father must like herself." We were talking about things that "herself" can refer to. Or "Susan told Stacy about herself." We're talking about things that "herself" can refer to. This would be an OK sentence if you changed "herself" to something else. It would mean something else. But "Susan told Stacy about Mary," that's also a sentence, right? So "herself" is just a noun phrase, noun phrase with special properties, that's sitting where noun phrases go. If you replaced "himself" here with something else, so if it were "John, Mary fixed the computer," you would no longer be speaking English. So this is a different construction, the construction that you're raising, a really interesting one which I'm now going to ignore for the rest of today. But that's what this is. Yeah. AUDIENCE: Well, I'm bringing this again. NORVIN RICHARDS: Oh, no. I'm ignoring you. No, sorry. Go ahead. AUDIENCE: Is that a type of movement? NORVIN RICHARDS: Sorry? AUDIENCE: Is that another example of movement? Or-- NORVIN RICHARDS: Which one? Oh, the fact that himself can be either here or here? AUDIENCE: Yeah. NORVIN RICHARDS: Maybe. Ah, we have to be careful. 
There are various kinds of-- so himself, here, it's kind of like an adverb, right? It's modifying the way in which he fixed the computer. And just as you can see things like "John quickly"-- I don't know why I switched to John here. Let's make Susan do this, too. Just as you can say Susan quickly fixed the computer, or Susan fixed the computer quickly, you could do that with movement. Or you could decide-- we're learning something about adverbs, that adverbs, because it's an adjunct maybe, there are more options about where it can go. The rules about where exactly it goes are not as strict. The clean arguments for movement are cases where we know for a fact that something should be here, and yet it's over here. When we do "What did he devour?" that's what we're doing there. Yes? AUDIENCE: But doesn't the meaning of that sentence kind of change depending on where you put the "herself"? NORVIN RICHARDS: Oh, sometimes. Yeah. AUDIENCE: The first one, you could have "herself" after Susan if you're implying that it's surprising that Susan, herself, was the one who fixed the computer. NORVIN RICHARDS: Yeah. AUDIENCE: Or if you put it at the end, you're just saying that it was Susan that-- NORVIN RICHARDS: Yeah, so these might be different kinds of adverbs. That's a very good point. Yeah, that's a very nice point. So if I say "Susan, herself, fixed the computer," we mean Susan is the CEO of the company. You would have expected her to hire someone else to do it, but she did it herself, whereas "Susan fixed the computer herself," it doesn't mean necessarily that it's surprising that she did it. It just means that she did it and no one else did. Yeah. So they're related meanings, but separate. So we may want to distinguish two different types of intensive modification. You're absolutely right, nice point. Other nice points people wish to make? Yes? AUDIENCE: So to clarify, the reason this tree is not valid [INAUDIBLE] is that "herself" is looking for something to bind to.
NORVIN RICHARDS: Yeah. AUDIENCE: And so it needs to be binded [INAUDIBLE] binds to NP. It binds "Susan," and it binds "Susan's father." And it can't bind to "Susan" because of the c-command constraint. It can't bind to "Susan's father" because "father"-- NORVIN RICHARDS: Because wrong gender. AUDIENCE: Right. NORVIN RICHARDS: Yup. AUDIENCE: [INAUDIBLE] himself. NORVIN RICHARDS: Yup. Yeah, that's it exactly. Yeah, so the idea is-- so we weren't asking why can't it refer to Susan's father. You just asked that question, and I think you answered it perfectly. "Herself" can't refer to "Susan's father," which does c-command "herself," because Susan's father is male. Wake up. And it can't refer to "Susan" because "Susan" doesn't c-command it. Is it back? Good. Yes. Yeah. AUDIENCE: So then to clarify, if the sentence was, "Susan's father must like himself," that's fine because "Susan's father" is the NP that's c-commanding. NORVIN RICHARDS: Right. And that's my intuition about the sentence. Is that yours? And that would be OK. It means Susan's father has a healthy self-regard. Yeah? OK, cool. So binding theory, I introduced you to binding theory as a test for structure, right? So once we are willing to trust binding theory at least a little ways, we can convince ourselves that we have a new way of finding out things about structure. We know we have the right structure if we have the right c-command relations to get the binding relations that we seem to find. So for example, when we're thinking about sentences like this, "Susan told Stacy about herself," we're going to need it to have a structure in which "Susan" and "Stacy" both c-command herself. And so when we were talking before about complicated verb phrases, binding theory is the kind of thing that people use as a probe into the structure of complicated verb phrases. OK. 
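[The c-command test just described can be sketched as a small program. This is a toy sketch, not the lecture's notation: trees are encoded as nested tuples (label, child, child), leaves are plain strings, and coreference is simply stipulated by which nodes you compare.]

```python
# Toy sketch of c-command on a binary-branching tree. Trees are nested
# tuples (label, child, child); leaves are plain strings. The encoding
# choices here are illustrative assumptions, not anything official.

def dominates(node, target):
    """True if `target` is `node` itself or occurs anywhere under it."""
    if node is target:
        return True
    if isinstance(node, tuple):
        return any(dominates(child, target) for child in node[1:])
    return False

def parent_of(root, node):
    """Return the node in `root` whose children include `node`."""
    if isinstance(root, tuple):
        if any(child is node for child in root[1:]):
            return root
        for child in root[1:]:
            found = parent_of(child, node)
            if found is not None:
                return found
    return None

def c_commands(root, a, b):
    """The lecture's definition: a node c-commands its sister and
    everything its sister dominates."""
    p = parent_of(root, a)
    if p is None:
        return False
    return any(child is not a and dominates(child, b) for child in p[1:])

# "Susan must like herself": [TP [NP Susan] [T' must [VP like herself]]]
herself = ("NP", "herself")
susan = ("NP", "Susan")
good = ("TP", susan, ("T'", "must", ("VP", "like", herself)))
print(c_commands(good, susan, herself))   # True: Susan can bind herself

# *"Susan's father must like herself": Susan is buried in the possessor,
# so she no longer c-commands the anaphor.
susan2 = ("NP", "Susan's")
subject = ("NP", susan2, ("N'", "father"))
bad = ("TP", subject, ("T'", "must", ("VP", "like", herself)))
print(c_commands(bad, susan2, herself))   # False: no binding
```

[The two print lines reproduce the contrast on the board: "Susan" c-commands "herself" from the subject position, but not from inside "Susan's father."]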
All right, so one part of binding theory is going to say anaphors, these are words like "herself" or "myself"-- in English, they all end with "-self." They need to be c-commanded by something that corefers with them. And then we have a short name. I hope I do this on the next slide. I'm not sure if I do or not. We have a short name for to c-command something that you corefer with, we call that bind. So we say that, in "Susan told Stacy about herself," "herself" needs to be bound. And there are two noun phrases that could bind it, "Susan" and "Stacy." And so we need a structure for this in which those both c-command herself, and either of them could corefer with herself. Yeah? AUDIENCE: Do all anaphors necessarily end in "-self," or are there cases where pronouns act as anaphors? So for example, "John said he went to the store," where "he" is referring to John. NORVIN RICHARDS: Yeah, that's a very nice point. So, so far, you may have noticed all of my examples have only been one clause long. And we're going to make things more complicated in just a second. But here, just to get started on making things more complicated, we've been talking about the fact that you can say, "Susan must like herself." If we make this an embedded clause, so if we say "Mary thinks Susan must like herself," well, "herself" can refer to Susan. But it can't refer to Mary. So "Mary thinks Susan must like herself." And that's not going to be because "Mary" doesn't c-command "herself." It does in the kinds of trees that we're drawing. So you are finding a complication, which I was planning to hide for a few more slides. There's an additional requirement on anaphors. They don't just need to be c-commanded by something that corefers to them. They don't just need to be bound. They need to be bound by something that's sufficiently close. And if they're in different clauses, they're not sufficiently close. And we'll talk about that soon. 
You'll get to look back with nostalgia on this little interchange in just a few minutes. Raquel? AUDIENCE: This isn't really a question, but I think it's interesting that coreference could create a presupposition. "The magician made herself disappear." It's like, OK, well, suddenly the fact that the magician is female is a presupposition that's there. But it could be wrong, and you couldn't star it necessarily because you're not sure. NORVIN RICHARDS: Yeah. No, I take your point. So I take your point to be, if you say "The magician made herself disappear," or "The magician made her father disappear," whether you use an anaphor or a pronoun, "magician" doesn't have any information about gender in it. But a fact about-- a thing about pronouns and reflexives in English is that they carry presuppositions about the gender of their reference. That's absolutely right. Yeah. Yeah, that's quite true. Yup. OK. All right, so-- oh, good. I did say it on the next slide, bind. This is just a name for c-command and "corefer with" just because we get tired of saying c-command and "corefer with." We say alpha binds beta if alpha c-commands and corefers with beta. So in "Susan must like herself," "Susan" binds "herself." "Susan" corefers with "herself," and it c-commands "herself." And anaphors need to be bound. And thanks to Faith, you know that, pretty soon, it's going to be more complicated than that. They need to be bound by something that's sufficiently close to them. Yeah. AUDIENCE: Speaking about behavior of c-command, you mentioned earlier in the semester how normal pronouns or-- yeah, normal pronouns are not A c-commanded. NORVIN RICHARDS: Yeah, so we're going to do pronouns next. Yeah. AUDIENCE: OK. NORVIN RICHARDS: Yeah, that's right. We're going to get to that. Actually, I think we're doing it next. Anaphors must be bound. OK, sorry, one more thing. There are two kinds of anaphors. 
There are reflexives, which are the ones we've been concentrating on, words like herself and himself. And then there are what are called reciprocals, which have the same behavior. They're words like "each other." So "John and Bill like each other." The meaning of "each other" is much more complicated than the meaning of "themselves," but "each other" and "themselves" have the same behavior in that they are OK in sentences like "John and Bill like each other" or "themselves," and that they are both bad in "John and Bill's father likes each other" or "themselves." So "each other" and "themselves" are both anaphors, and they're both subject to this requirement that they be bound. So "John and Bill like each other" means something like "John likes Bill and Bill likes John," yeah? There's literature on exactly what reciprocals mean because it's complicated. For example, it's possible to say things like, "We piled the books on top of each other." So each other doesn't mean for every pair in the group A relation B and B relation A, right? You might have thought, looking at this, that "John and Bill like each other," to interpret that, what you do is you take all of the pairs you can make out of the group. OK, there's one pair, you know? And what this means is John likes Bill and Bill likes John. But if you say "The books are piled on top of each other," it doesn't mean this book is on top of that book and-- That's not what it means. So there's work on trying to figure out what the heck it means, what reciprocals mean exactly, which we will not get any further into now. OK, this was Joseph's question, pronouns. Anaphors must be bound. Pronouns must be free. It sounds like a political slogan, but it's a fact about grammar, yeah? So pronouns need to not be bound. So "Susan likes herself" is fine. "Susan's father likes herself" is bad. That's the behavior of anaphors, which we're familiar with by now. And "her" behaves the same way only backwards. 
So "Susan likes her," fine sentence as long as "Susan" and "her" are different people. So "her" can't be bound by "Susan." It can't be c-commanded by "Susan" if "Susan" and "her" are going to refer to the same person. On the other hand, "Susan's father likes her"-- fine. So it's not quite true, but it's almost true that anaphors and pronouns are in complementary distribution with each other. So for a given meaning, you either use the pronoun, or you use the anaphor, OK? OK. More examples-- even more examples. So now, we're getting to Faith's point-- my version of Faith's point, sorry. So "Susan likes herself." We've seen this is OK. "Susan thinks I like herself." Or I have the sentence on the board, "Mary thinks Susan must like herself." "Herself" is trying to get bound, in this case, on the slide by Susan. And it can't be. The kinds of trees we were drawing for this kind of sentence, Susan certainly does c-command herself. I don't think I put the tree on the slide. So let me just create it quickly. "Susan thinks"-- there's going to be a CP here, then null C. And then down here, we have a noun phrase, "I," and a verb phrase, "like herself." So if we ask ourselves does-- here's the tree. Did I write the right sentence? "Susan thinks I like herself," yes. If we ask ourself, does "Susan" c-command "herself," the answer is yes. If we ask what are the nodes that dominate the noun phrase "Susan," well, there's only one. It's this TP. And so if we're asking, does "Susan" c-command "herself," well, we're really asking, does this TP dominate "herself"? And yes, it does. So "Susan" c-commands "herself." So it looks as though "Susan" ought to bind "herself." What we're learning here is the complication that Faith made me reveal early, which is that not only do anaphors need to get bound. They need to get bound by something which is close enough. There's a principle. It's called principle A. You can think of A as standing for Anaphor. It says anaphors must be bound within TP.
If this were Intro to Syntax, we would now spend weeks, possibly months trying to figure out what the real locality requirement is because it's more complicated than that. But that's close. So the idea here is that "herself," yes, it's bound by "Susan" in this tree. But it's not bound by "Susan" within the smallest TP. Here's the smallest TP that includes "herself," and "Susan" is too far away. So there's a locality requirement. Does that make sense? So principle A says, anaphors need to be bound within the smallest TP that contains them. And then principle B, where B stands for the principle that comes after principle A, says, pronouns must be free within TP, so just as we saw before, that pronouns and anaphors are, so far, in complementary distribution for a given reading. So you can say "Susan likes herself," and that means that you can't say "Susan likes her" and mean the same thing, that you mean that "Susan likes herself." Similarly, we've now seen, thanks to principle A, that you cannot say "Susan thinks I like herself," where "herself" refers back to "Susan." And principle B says, yeah, pronouns have to be free within TP. So "Susan thinks I like her," where "her" refers to "Susan." Fine, it doesn't have to refer to "Susan," right? It can refer to anybody. It can be free regardless of whether it's bound by "Susan" or not, free within TP. Yeah, Faith. AUDIENCE: I may have just forgotten, but with that T there, is the invisible T that's dominated by T bar always necessary? Or could you just say-- NORVIN RICHARDS: This? AUDIENCE: Yeah. And-- NORVIN RICHARDS: And also this? Yeah. You know what? In most of the trees I've been showing you, I've been smart enough to put auxiliaries in the sentences. So I've been saying things like, "Susan will think that I must like herself." And then people don't ask the question that you're asking. But when I drew this tree, I was not smart enough to do that. There does need to be a T because-- why does there need to be T? 
Well, we have principles like the extended projection principle, which says that TP must have a specifier. And sentences without auxiliaries in them show the effects of that. So it doesn't matter whether there's an auxiliary or not. Yeah. It doesn't matter whether there's an auxiliary or not. You still get the effects of the extended projection principle. So we do want there to be a T because we want there to be a TP. And we don't have any way to make a TP without starting with a T. So you have to merge T with a verb phrase and so on. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Yeah. So I mean, here's-- I'm not totally unhappy about the direction this conversation is taking because you're forcing me to talk about syntax, which is fine, even though this is the semantic part of the story. Suppose I was smart enough to use an auxiliary here. So we'll do "Susan will think." And we'll forget about the embedded clause for a second. We've talked about this, I think, that, if you want to ask a yes/no question, the way to do it is to head move T into C. So you ask questions like, "Will Susan think that" blah-di-blah? So bear that in mind. And now, let's switch to the version of this where there isn't an auxiliary. "Susan thinks that I like." If I want to make a question out of that, what I do is I put "Does" here. And I get rid of the S here. So it's "Does Susan think that" blah-di-blah? So there's a story about what's going on here, which says what's in T-- so there are various things that can be in T. There's "will" and "should," "might." And there's also "-s," as in "Mary thinks that I like her." And if you do T to C of "will" or "should" or "might," then you just move them over there. And nothing else happens. If you have a "-s" then there are a couple of things that happen. If you aren't head-moving it, then it attaches to this verb. So you end up with "Susan thinks." And we have to figure out how these two things come together to form a verb. 
And sadly, it's long enough since we did syntax that I'm unwilling to try to tell you that. There's a big literature on trying to figure that out. And then the other story is, if you're going to do T to C, if you're going to make a question, you're going to take this. It's not going to show up here. It's going to show up up here. So you would have expected this to be "-s Susan think I like herself?" But "-s" is too small to be an English word or something like that. And so you put in this "do." There's a process that is called "do support." You add "do" there so that the "-s" will have a verb to be part of. It's the story that people tell. And so that's why the question version of that is "Does Susan think that I like her?" Can you see why I have been hiding all of this from you all semester? So I've been using auxiliaries instead of giving you verbs that have suffixes on them. This is why, because I didn't want to do this. But it's been fun anyway. But now, it's over, except it's not. Joseph? AUDIENCE: Just a brief question-- NORVIN RICHARDS: Yes. AUDIENCE: --about that movement, you said the extended projection principle requires that TP has a specifier. So that means that, because [INAUDIBLE] creating a yes or no question means moving a head out of TP, you can't move "Susan." Because then TP would lose its specifier, right? NORVIN RICHARDS: Yeah. Oh, I see. AUDIENCE: Yeah, why [INAUDIBLE]? NORVIN RICHARDS: No, no. Oh, you mean why are you moving the head? Why are you not moving a phrase? AUDIENCE: Why can't you just move the phrase around? NORVIN RICHARDS: There isn't a deep answer to that question. Forming questions involves taking whatever is in T and moving it into C. You'd like to know why we don't move other things, and that's a good question. This is-- AUDIENCE: [INAUDIBLE] to her. And I said, you can't do this because of the shortest path. NORVIN RICHARDS: Ah, yes. I see. Yes. So, yeah, you can't take the verb and move it up there. That's right because that's too long.
Why can't you move this? Because it's not a head. It's a phrase. Apparently, we think that only heads can move to attach to other heads. Yeah. And we might want to know why. There are lots of things we might want to know. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Go take syntax. Yeah. Yeah? OK. All right, all of you, you dragged me kicking and screaming into doing more syntax. Shame on you. OK. Right, principle A, principle B, yes? Are people reasonably happy with principle A and principle B? So they say anaphors need to be bound within TP. Pronouns have to be free within TP. Side note, since I just told you universal about anaphors, anaphors only ever care about c-command. They never seem to care about linear order. They don't care about what precedes them or follows them. I should say that there are points of variation between languages. There are languages that have anaphors that need to be bound, but they can be bound longer than this. So there are languages that have anaphors that can be bound outside of the TP that they're in called long-distance anaphors. English doesn't have them, but Mandarin does, for example. So Mandarin, [NON-ENGLISH] is a long-distance anaphor. Principle A is not a parameter that distinguishes languages. It's a parameter that distinguishes anaphors. So Mandarin actually has two anaphors. It has an anaphor that behaves like this. It's just pronounced [NON-ENGLISH].. And then it has another one that is long distance that can be bound from outside its TP. It's just [NON-ENGLISH] by itself. Those of you who speak Mandarin can be thinking about that. There are languages that have more anaphors than that. Korean, I think, holds the record. It has four different anaphors with slightly different behaviors. Yeah. AUDIENCE: Do languages that have long-distance anaphors also always have this type of anaphor, or does it vary? NORVIN RICHARDS: As far as I know, every language has this type of anaphor. 
And long-distance anaphors are an additional luxury that a language can allow itself or not. I don't think there are any languages that only have long-distance anaphors. Yeah, good question. Yup? OK. Where are we? OK, yes. So we have principle A. We have principle B. Anaphors need to be bound within TP. Pronouns have to be free within TP. And let's stick to English again. Now, let's consider these sentences, "She likes Susan" and "Her father likes Susan," with these subscripts and these grammaticality judgments, which I hope everybody agrees with. So "She likes Susan"-- fine sentence, but she and Susan can't be the same person. Does either of our principles handle these facts? That's a leading question if there ever was one. No, they do not. So first of all, principle A is obviously going to be useless here. There are no anaphors in these sentences. So forget principle A. Principle B, pronouns need to be free within TP. Well, is she free within TP in "She likes Susan"? Is there a noun phrase that c-commands "she" and corefers with it? Is "she" free? Yes. She is free. Shall I draw a tree? What's the sentence? "She likes Susan." Someday, I'm going to ask somebody to make me a stamp, a chalk stamp, you know, that just has this tree, TP, T bar, [INAUDIBLE].. Wham. And then there it would be. That would be great. "She likes Susan," yeah? Here's the pronoun. Pronoun needs to be free within TP. That is there had better not be anything c-commanding "she" that corefers with "she." Does anything c-command "she"? I guess maybe T bar. So T bar is dominated by all the same nodes as noun phrase. But T bar doesn't refer to "she." It doesn't refer to "Susan." So "she" is free, yeah. So our principles don't handle these facts. There are at least two things we could do. One would be to say, ah, principle B, we were wrong to confine principle B to pronouns. So it's really just a distinction between anaphors and everything else. So the problem with "She likes Susan" isn't "she." 
It's "Susan." So "Susan" also needs to be free within TP. And then the story would be "Her father likes Susan." "Her" is embedded inside "her father." And so it doesn't c-command "Susan," and so there. That version of principle B would cover this. But pronouns really are the opposite of anaphors. They need to be free within TP. So "Susan thinks I like her," where "Susan" c-commands "her" and corefers with "her," so "Susan" binds "her," is fine. Because although "her" is bound, it's bound by something that's far away. And that's all right. But in "She thinks I like Susan," we're seeing that names do not behave like pronouns. "Susan" requires absolute freedom, freedom everywhere. Pronouns just need to be free within a certain domain, within TP let's say. But a name like "Susan" needs to be free everywhere. The fact that "Susan" is bound by "she" in that second example, "She thinks I like Susan," means that the sentence is bad. It doesn't matter that they're far apart. So we make up a new principle-- it's called principle C-- and, also, a new term, R-expressions. R-expressions means things that are not anaphors or pronouns. It stands for Referring expressions, which is an unfortunate name because they don't necessarily refer. "Susan" does. It refers to Susan. But anyway, that's all it means. Things that are not anaphors or pronouns. So everything that isn't subject to principle A or principle B is subject to principle C. Every noun phrase that isn't subject to principle A or principle B is subject to principle C, which says R-expressions must be free everywhere. R-expressions must be free, and everywhere they're in chains. Yeah, OK? All right. And then this is the fact that I made a big deal about before.
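[All three principles can now be pulled together in a toy checker, under the simplification made so far that the binding domain is the smallest TP. The tree encoding, the helper names, and the way coreference is stipulated by passing in an intended antecedent are all assumptions of this sketch, not part of the lecture.]

```python
# Toy versions of principles A, B, and C. Trees are nested tuples
# (label, child, ...); NPs are ("NP", word) tuples. Coreference is
# stipulated: we ask whether a PARTICULAR antecedent can bind the NP.

def walk(node, ancestors=()):
    """Yield (subtree, tuple-of-ancestors) for every subtree."""
    yield node, ancestors
    if isinstance(node, tuple):
        for child in node[1:]:
            yield from walk(child, ancestors + (node,))

def ancestors_of(root, node):
    for n, anc in walk(root):
        if n is node:
            return anc
    return None  # node not in this (sub)tree

def c_commands(root, a, b):
    """a c-commands b iff a's parent dominates b but a itself does not."""
    anc_a, anc_b = ancestors_of(root, a), ancestors_of(root, b)
    if not anc_a or anc_b is None:
        return False
    parent = anc_a[-1]
    return (any(x is parent for x in anc_b)
            and not any(x is a for x in anc_b)
            and a is not b)

def smallest_tp(root, node):
    return [n for n in ancestors_of(root, node) if n[0] == "TP"][-1]

def principle_A(root, anaphor, antecedent):
    """Anaphors must be bound within the smallest TP containing them."""
    return c_commands(smallest_tp(root, anaphor), antecedent, anaphor)

def principle_B(root, pronoun, antecedent):
    """Pronouns must be free within the smallest TP containing them."""
    return not c_commands(smallest_tp(root, pronoun), antecedent, pronoun)

def principle_C(root, name, antecedent):
    """R-expressions must be free everywhere."""
    return not c_commands(root, antecedent, name)

# "Mary thinks Susan must like ___" ('-s' standing in for present T)
mary, susan, obj = ("NP", "Mary"), ("NP", "Susan"), ("NP", "object")
embedded = ("TP", susan, ("T'", "must", ("VP", "like", obj)))
root = ("TP", mary, ("T'", "-s", ("VP", "think", ("CP", "C", embedded))))

print(principle_A(root, obj, susan))  # True:  "herself" bound by Susan
print(principle_A(root, obj, mary))   # False: Mary is outside the TP
print(principle_B(root, obj, mary))   # True:  "her" = Mary is fine
print(principle_C(root, obj, mary))   # False: *"... Susan must like Mary" = Mary
```

[The four results reproduce the judgments from the slides: the anaphor reaches "Susan" but not "Mary," the pronoun can be "Mary" because she binds it only from outside the TP, and a bound name is out no matter the distance.]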
Even though which thing precedes which other thing is the kind of thing that every infant knows as soon as they can hear words and segment words, which infants are eerily good at, even though-- if you were going to design a computer, a language for computers to use-- I am not a computer person. You may have picked up on this. Even I think it would be easier for me to get a computer to figure out which thing linearly precedes which other thing than it would be to get a computer to figure out which thing c-commands which other things. That's a much more complicated task, it seems to me, not that I know anything about this stuff. And yet all of us go for the more complicated task. We're built that way. We can't help it. So language doesn't care which thing precedes which other thing. It cares which thing c-commands which other thing, always, in every language, over and over again. Yeah. AUDIENCE: How do we draw a tree when the subordinate phrase is [INAUDIBLE]? NORVIN RICHARDS: Oh. Well, we could do it different ways. Where will I put this? I'll put it up here. So maybe you would start by drawing the TP first. So "Susan read a book," right? Do that for "Susan read a book." And then we need somewhere to hang "while she was eating." And we need it to be some kind of adjunct. So it's not in the same kind of relation as anything else. What am I trying to say? It's not selected by anything. There's no particular reason for it to be anywhere. So there are various ways we could do that. Notice, for example, that if this were an embedded clause, so if it were "I think that"-- so here's a C. It's going to be a CP-- sorry, a CP. So if we were doing, "I think that Susan read a book," the embedded clause "that Susan read a book" would look like this, right? It would have a CP here where I can just connect these. We have a CP with a C that's got this TP as its sister.
And if we were going to add this embedded clause, this adjuncty clause to the beginning of the embedded clause, while she was eating, let's think about places to put it. If it's going to be before "Susan," we could ask, does it go here? Or does it go here? So "I think that, while she was eating, Susan read a book" is, I think, something that we can say. "I think, while she was eating, that Susan read a book." I think does not mean the same thing. There's something wrong with that. So this is at least one place that it can go between the C and the TP. And given that it can go between the C and the TP, maybe the easiest thing to do would be to just make it an adjunct on TP. We'll duplicate the TP node. And we'll have a CP here where I won't do the insides of it, but that's where you would put "while she was eating." Does that make sense? Yeah? OK. So that's binding theory. I think I flagged the fact there are many, many things to say about binding theory. That's part of it. Reasons to be happy about binding theory, I first showed you binding theory as a probe into c-command relations, but there's so much more. There are lots of things we can do with binding theory. Let me show you one. Here's a sentence. "Mary decided to leave." Yeah, fine sentence. How many TPs are in this sentence? Well, there's at least one. What's "to leave"? "To" isn't a preposition here. This isn't like "Mary went to Toledo," yeah? In fact, this "to" is in complementary distribution with a bunch of different things. You can't say, for example-- you can say "Mary decided to leave." You can't say "Mary decided will leave" or "should leave" or, for that matter, "Mary decided left." These are not sentences. So these are all bad, which makes people think that maybe "to" is the same kind of thing as "will" and "should." So here's a spot where you can only have one of these things. You can have "will," or you can have "should," or you can have "to." And so people think-- yeah.
AUDIENCE: Is "to" not joined with the verb itself, though, as part of the infinitive? NORVIN RICHARDS: I don't know. This gets into questions about whether you believed your English teacher when she or he told you that you cannot say things like "Mary decided to immediately leave, but we decided to boldly go where no one has gone before." There are languages in which the marker of the infinitive is a morpheme that's attached to the verb. Yeah, you're absolutely right. But in English, it's not so clear that it is. There's stuff like adverbs that you can put between the "to" and the verb, at least I can. Yeah? All right, Joseph? AUDIENCE: Is the "to" shorthand for "that she should," "would," or-- NORVIN RICHARDS: Well, so this sentence means more or less the same thing as "Mary decided that she would leave." So interesting fact, you can't have "that" if you're going to have a "to." Yeah. If you have a "that," then you cannot have a "to." If you have "that," then you can have all these other things that would be here if "to" were not here. You could have a "will," or a "should," or a past tense, or all of these other things. That's the relation that we talked about in terms of selection. We said C selects T. T is selected by C. So these are all reasons to think maybe that "to" is a T. There are two TPs in this sentence. There's "Mary decided to leave." That's one TP. And that "to leave" is another TP, OK? Are you buying this? You should. Yeah. OK, so maybe there are two TPs. But if there are two TPs in this sentence, we have this principle, the extended projection principle, that says every TP must have a specifier. But we only seem to have one subject, "Mary," who's doing the deciding. Now, there's a classic answer to this question, which says English is a language with many pronouns, and one of them is invisible. And the invisible one is the one that is the subject of "leave." It's the thing that's satisfying the EPP here. 
So it isn't that TP doesn't have a specifier in "to leave." That TP does have a specifier. It's just an invisible specifier. When this theory was first offered, people reacted the way you might expect them to, sort of the way some of you possibly mentally are: "I mean, come on, you're just trying to avoid having to say that the EPP is not right or parameterize the EPP to different kinds of positions." But actually, binding theory gives us some reasons to believe that there is such a thing as PRO. And let me show you some of the arguments. So in this case, that's a story that says, no, the EPP is being satisfied in the embedded clause by an invisible pronoun that refers, in this case, to Mary. Here are some data. "John promised Mary to defend himself." "John promised Mary to defend herself." "John told Mary to defend himself." "John told Mary to defend herself"-- sentences with grammaticality judgments. I'm going to copy these sentences onto the board because I want to do various graphic things with them. And while I'm copying them, I'll ask you to look at those sentences and ask yourself whether you get these judgments. So "John promised Mary to defend herself"-- "himself." Did I put those in the wrong order? Yes, I did. And then "John told Mary to defend himself"-- "herself," where the middle two-- so this is bad and this is bad. Do people get these judgments? Is this true? Am I just making this up? Nobody wants to protest? OK. So I told you before we're going to use binding theory to find things out about how far away things are, which things c-command which other things, now that we have this theory that we think, we hope, is more or less reliable. We know, when we look at an anaphor, it had better have an antecedent that's close enough to it. And we think that close enough is more or less within the same TP. Forget it, now, that I've done all that talking about anaphors. Forget about the anaphors for a second. 
If I say "John promised Mary to leave," who's going to leave? AUDIENCE: John. NORVIN RICHARDS: John. And if I say "John told Mary to leave," who's going to leave? AUDIENCE: Mary. NORVIN RICHARDS: Mary, yeah? So the understood subjects of these embedded clauses, the PRO if there's a PRO, is going to refer to John when the first verb is promised. But it's going to refer to Mary when-- I'm just going to give it a subscript, Mary-- the first verb is told, yeah? So let me do this the easy way first. If there is such a thing as PRO, we ask ourself, these anaphors, they need to get bound within the smallest TP that contains the anaphor. And the smallest TP that contains the anaphor is going to be this, yeah? And so when we ask ourself "Is the anaphor bound by a noun phrase that's close enough to it?" well, the answer is yes here. Because "himself" gets to be bound by that PRO, "John." That refers to John. And so "himself" can be John. It can't be "herself" because the PRO is "John." It's not "Mary," whereas, when the higher verb is "told," then the PRO is "Mary." And so now "himself" is bad because "himself" can't refer to Mary. And "herself" is good. It can get bound by "Mary." So we can capture all of these facts if we're willing to posit the PRO, which refers to who it should refer to, the person who's going to do the defending. That'll depend on what the higher verb is. Interesting fact, which people work on-- what determines who PRO refers to? Why is it "John" for "promise," but "Mary" for "tell"? It's what's called subject control and object control. But once we believe in PRO, we have a handle on why the anaphors refer to who they refer to. OK? Now, let's do this the hard way. What if we didn't believe in PRO? Well, if we don't believe in PRO, woo, what we want is for this position to be bindable by "John," but not by "Mary." And we want this position to be bindable by "Mary," but not by "John." And this is going to be hard, right?
The first one is going to be especially hard. So far, our conditions just say, anaphors have to get c-commanded by something that's close enough to them and corefers with them. But if "John" is close enough to that anaphor position, well, then surely "Mary" is. "Mary" is closer. There's no way to do that. I'll have to come up with something very complicated. And the second set of examples is also going to be kind of hard because "Mary" is the object of "told." So this anaphor will need to have a binding domain that gets up to the higher verb phrase, a domain in which the anaphor finds a binder. We have to say, OK, we were wrong about TPs. It's, like, the next verb phrase or something. The domain in which it finds a binder has to be this new special kind of domain that doesn't seem to be useful anywhere else. It's a mess, basically. And PRO gets us out of the mess if we're willing to put PRO in the places where we want to put it. So one reason to be happy about binding theory is that it gives us a reason to make a move that we might have wanted to make anyway. These infinitives are acting like they have subjects in two respects. They are satisfying EPP, first of all. And they are acting like they have a subject that corefers with somebody in the higher clause, either "John" or "Mary," for purposes of binding theory. Binding theory is simplest if there is a pronoun there. Now, minor problem, we can't see a pronoun there. But if we're willing to rely on binding theory a little bit, we can use binding theory to convince ourselves that the pronoun actually is there. It's just that we can't see it. We have a new detector for pronouns. We can find them this way. Does that make sense? Are all of you appropriately skeptical? Is anybody inappropriately skeptical? Do you have questions about any of this chain of reasoning? It's a reason to take seriously the idea that there are invisible pronouns sometimes.
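[The "promise"/"tell" paradigm can be summarized in a few lines. This is a toy sketch: the little lexicon below just encodes the subject-control/object-control distinction described above, and the gender tables are made up for the example; nothing here is a real parser.]

```python
# Toy sketch of subject vs. object control. With PRO in the embedded
# subject position, the reflexive must match the gender of whatever
# controls PRO -- the matrix subject for "promise", the object for "tell".

CONTROL = {"promised": "subject", "told": "object"}  # who PRO refers to
GENDER = {"John": "m", "Mary": "f"}                  # stipulated for the sketch
REFLEXIVE = {"himself": "m", "herself": "f"}

def judge(subject, verb, obj, reflexive):
    """PRO (the silent subject of 'to defend ...') is bound by the
    controller; principle A then demands the reflexive match PRO."""
    controller = subject if CONTROL[verb] == "subject" else obj
    ok = GENDER[controller] == REFLEXIVE[reflexive]
    return "ok" if ok else "*"

for s, v, o, r in [("John", "promised", "Mary", "himself"),
                   ("John", "promised", "Mary", "herself"),
                   ("John", "told", "Mary", "himself"),
                   ("John", "told", "Mary", "herself")]:
    print(judge(s, v, o, r), f"{s} {v} {o} to defend {r}")
```

[Running this reproduces the ok/*/*/ok pattern of the four sentences on the board: PRO is "John" under "promised" and "Mary" under "told," and the reflexive follows PRO.]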
We talked about invisible pronouns before, I think. I think I gave you examples like I can say "Defend yourself," yeah? I think somebody said that they were taught in school-- I can't remember who it was-- that the subject of an imperative is understood as "you," that this means "You defend yourself." And one reason to take that seriously is that, yeah, an anaphor here is acting like there is a subject around, or at least another noun phrase around, that refers to "you." Because "Defend yourself" is fine. But "Defend myself"-- no good, right? No matter how much you might want to say it, that's not the way to say it. You say "Defend me," no? It's acting like the subject is "you." OK, you can't hear it. You can't hear the "you," but there's a "you" there. You can use these anaphors to find it. Same thing here-- in "John promised Mary to defend himself," there's a pronoun in the subject of "defend." It's really "John promised Mary him to defend himself." And that "him," that PRO, is referring back to "John." In "John told Mary her to defend herself," that "her" is referring back to "Mary." Binding theory tells us so. Yeah, yeah, yeah-- slide after slide after slide. Yes. OK, here's another reason to be happy about binding theory, another thing that it teaches us, a new argument for something that I showed you an argument for before. And now, I will show you a new argument, possibly a simpler argument. Actually, I should start with this one. "Which picture of himself did John like best"? First-- setup for the argument. Observation, "John" sure doesn't look like it c-commands himself. But "John" used to c-command "himself." And apparently, principle A can be satisfied with nostalgia. So it thinks back to the happy days when "John" used to c-command "himself," back before you did wh- movement. And it's satisfied. Principle A, it's easy going that way. So this starts off with "John liked which picture of himself best?" 
"Which picture of himself" starts off as the object of "like." And it wh- moves. It becomes the specifier of CP. Shall I draw a tree for that, or is that clear? I'm sorry. Those were yes/no questions with opposite answers. Who would like me to draw a tree for that? Yes? OK, let me draw a tree for that. I'll hide these trees. So-- stamp. John-- yeah. Here's a noun phrase. I'm going to leave out "best" just because we don't need it. Here's your DP, "which," and N bar, "picture," prepositional phrase, "of," and the noun phrase, "himself." OK, so this is just "John liked which picture of himself." We could put a "best" in here. I don't know why I left out "best." We'll put it here. There, so "John liked which picture of himself best?" And then wh- movement will take this noun phrase and move it to make it the specifier of a CP, which is up here. So now, we have "which picture of himself" in the specifier of CP. And "did" ends up here because we've moved T to C and done do- support, which is something that Faith made me talk about. Yeah? So "Which picture of himself did John like best?" And "which picture of himself"-- which I'd spelled without the final H, just to give you an additional challenge-- "which picture of himself" starts off as the sister of "like." And wh- moves and becomes the specifier of CP. Yeah? That's the derivation for that. So fact, which we're going to have to get used to-- after we do the movement, "John" no longer c-commands "himself," right? What does "John" c-command? Well, we ask which nodes dominate "John." And it's these nodes. So "John" c-commands its sister and everything its sister dominates. So "John" c-commands "like which picture of himself best." But "which picture of himself" has moved out of that domain. It's up here. Apparently, the fact that "himself" used to be down here in a position that "John" could c-command is good enough. It's a phenomenon called reconstruction. 
It's as though, when you are evaluating things for binding theory, you get to rewind the tape a little bit and think, oh, yeah. This used to be over here. That's good enough. Various ways to talk about that, but that's one. So file that fact. Yes, I just said that. Thank you. This phenomenon is called reconstruction. You treat something which is moved as though it hadn't. OK. Now, let's think about longer, more complicated questions, like "Which picture did John think that Mary bought?" So "which picture" has wh- moved, right? But we talked about this question about whether it has moved once in one mighty leap, started off as the object of "bought" and moved through all of that intervening material and became the specifier of the higher CP, or moved in a series of hops. Notice that "John thinks that Mary bought a picture" has two CPs in it. There's "that Mary bought a picture." That's the embedded clause. And then there's the matrix clause. And we raised the question. And I gave you all of these Dinka facts that were meant to get you to take seriously the possibility that the second picture is the right one, that movement is what is called successive cyclic. That is, you move in short hops. You don't move in mighty leaps. So a wh- phrase, it's not just that it likes to move to the specifier of CP. It likes to land in every specifier of CP that it can find along the way. It moves in this series of short hops. And I gave you some Dinka facts that were meant to get you to take that possibility seriously. Here's another set of facts that might also get you to take that possibility seriously. Think about a sentence like, "Which picture of himself did John think that Mary bought?" First of all, it's fine. Do people agree? So "himself" can refer to "John." All right, now, let's think about why it's fine. "Which picture of himself" starts off as the sister of "bought," starts off in the embedded clause.
And it ends up, well, in the position where it's pronounced, in the matrix clause. Is "himself" bound by "John" in the position where it's pronounced? The specifier of the highest TP? Does "John" c-command that position? No, right? Shall I draw a tree for this? Would that be a good thing to do? Let's draw a tree. "Which picture of himself did John think that Mary bought?" Here's "John." "Think," there's the CP. "That," there's a TP. "Mary," here's T. And here's a VP and then "bought." And so we start off with "which picture," which I'm running out of room, so I'm going to use a triangle. Here's "which picture." I'm going to use a triangle, and I'm going to abbreviate picture as "pic." And the question we're asking-- and so here's the tree we start off with, yeah? Anybody wish to object to any aspects of this tree? Is this is OK? All right. And the question we're asking ourselves is, this noun phrase, does it move just once to here? So we have which picture of himself. [SNEEZE] Bless you. Does it move just once, or does it stop along the way here? That's the question we're asking ourselves. And now, we're going to use binding theory to come up with the answer. Does "John" c-command "himself" here? No. I just said "no" before I drew the tree. But now, that I've drawn the tree, you can see that the answer is, in fact, no. So "John" doesn't c-command this position. Does "John" c-command "himself" when himself is down here? Yeah, right? So what's the c-command domain for "John"? Well, it's its sister, that T bar, this T bar here, and everything this T bar dominates, so all of this. So "John" c-commands this position, "himself." Is this just reconstruction? Can "John" bind this? No, it's too far away, right? And so we can see that here. You can't say "John thought that Mary bought a picture of himself." So this position is too far away. And this position "John" doesn't c-command. If these are the only positions, then we are doomed. 
"Himself" can't be bound happily in either of those positions, doomed. But if "which picture of himself" spent some time here, well, then "John" c-commands "himself" when "which picture of himself" is here, right? This is inside this T bar, "John"'s sister, yeah? It's dominated by all the same nodes that dominate "John." And it's not contained in any TP that excludes "John." Remember, our conditions on how far away things can be and still bind say that anaphors need to have binders that are contained in all the TPs that contain them, that aren't excluded by any TPs. So if we do successive cyclic movement, which Dinka already told us that we should, if movement happens not in one big jump, but in two small jumps, then we have an account of this, the fact that the sentence is OK. The sentence is OK because "himself," yes, it's been here. Yes, it's up there, but it's also been here. And the fact that it spent time here is why the sentence is OK. Binding theory gives us a new reason to believe that, cool, so some of the cool things binding theory is for. Yeah. AUDIENCE: So it's like you go to multiple museums in one day, they give you a little stamp to get in the museum. NORVIN RICHARDS: Yes. AUDIENCE: And so at the end, you look a little bit different. You all have stamps to show that you've been to multiple museums. NORVIN RICHARDS: Yes, it's just like that. Yes. Yes. So "himself" has been here, and it's been up there. But it's also been here. And the fact that it was here meant that it got to go see the T-Rex skeleton while it was here. Yeah? OK, cool. So movement is successive-cyclic-- new reason to believe that. We have 15 minutes left, and I don't think I can do this next thing in 15 minutes. It involves guards. No, maybe I can. Let's start at least, and then this will give you something to think about until we get to Thursday. So consider this sentence. "Two guards seem to me to be standing in front of every building." Is that ambiguous? 
Yeah, it's ambiguous in a familiar way, right? It either means each building is guarded by two guards as far as I can tell, or there are two really weird guards who are guarding all the buildings at once. Yeah, it has those readings at least. How about this sentence? "Two guards seem to themselves"-- [LAUGHTER] Sorry. "Two guards seem to themselves to be standing in front of every building." This is not ambiguous. This means that there are two guards who need to be taken off duty. There are two guards who are having hallucinations of being very, very large. That's what this sentence means. It doesn't mean-- and there's something else it could mean. It could mean this building has two guards who think that they're standing in front of it. That building has two different guards who think that they're standing in front of it. That building has two other guards who are standing in front of it. In a sense, the reading that it actually has is the more reassuring one. It just means that there are two guards who really need a psych profile, whereas on the other reading there are however many buildings there are times two. There are that many guards who are in psychological trouble. So it doesn't mean all of the things that it could mean. And you might be able to guess the problem is themselves. That's what's going to prevent the other reading. But for reasons of time, let's let it prevent the other reading on Thursday. So come back on Thursday, and I'll finish this point, another thing that binding theory is for. We'll talk a little more about reconstruction, exactly what it is. Because it's fascinating. There's a lot of really cool work on trying to figure out what the heck it means. OK? Questions about anything else? All right, good. Go forth and think about linguistics. |
MIT_24900_Introduction_to_Linguistics_Spring_2022 | Lecture_21_Semantics_Part_5.txt | [SQUEAKING] [RUSTLING] [CLICKING] NORVIN RICHARDS: All right. So I think this is our last day for semantics. So we're going to do a grab bag of semantics-y topics today, and then we will pass on in to other topics starting next week. Last time, we ended on a cliffhanger. So let me remind you where we were. We said-- here we are with guards and buildings again. And we said that this sentence was ambiguous. So "Two guards seemed to me to be standing in front of every building" has the two familiar readings, the one where looks to me like every building has two guards, and the wacky reading where I believe there are two very large guards who are guarding all the buildings at once. You could have both of those readings. And then we had also convinced ourselves that that ambiguity goes away in this second sentence. So "Two guards seemed to themselves to be standing in front of every building" can only mean-- this belongs to somebody-- can only mean that there are two guards who have a hallucination in which they are very large. Or there's something else you could imagine it meaning. It could mean for each building, there are two guards who think that they are standing in front of it. That would, in a way, be more alarming. There would be more guards who had some kind of psychological problem. But they at least wouldn't be believing that they are impossibly large guards, just that they believe they're on duty. [INAUDIBLE] AUDIENCE: If we arrange the second sentence and say, "In front of every building, two guards seem themselves to be standing," is that the same thing? NORVIN RICHARDS: What do people think? If I say, "In front of every building, two guards seem to themselves to be standing," to the extent that you can say that, does it mean-- can it mean there are two-- for each building, there are twice as many guards as there are buildings? 
Twice as many people who believe themselves to be guards as there are buildings? Yes? AUDIENCE: I think that does stay closer to the two-- there are two guards for every building. But I could still interpret it as there are two very large guards or that the buildings all face each other. NORVIN RICHARDS: Yeah. I agree with you, actually, that it's-- I think it becomes ambiguous, I think, is what you're saying. So it gets the meaning that this doesn't have, but it still has this meaning. I think you're right about that. Yeah? AUDIENCE: I don't know. Somehow, it does feel to lean towards the other-- NORVIN RICHARDS: The other meaning. AUDIENCE: More so than the top one for me. NORVIN RICHARDS: Oh, I see what you mean. Yeah, the top one, it's easier to say, yeah, this is ambiguous. Yes. This is kind of reminiscent of when I was first showing you quantifier scope. We were talking about sentences like "Everyone in this room speaks two languages" and "Two languages are spoken by everyone in this room." And I think we convinced ourselves that whether you have the active or the passive has some kind of effect on which reading is easiest to get, and there's something similar going on here, I think, that there's certainly more to say than some sentences are ambiguous and others are not. So some sentences lean in the direction of one reading or another. There's a lot of work to do here, which we won't try to do. This is almost the end of our acquaintance with quantifiers. But yes, there's lots of interesting topics to work on here. Did you have a question? AUDIENCE: Well, I was just going to say, rearranging that sentence seems to add extra ambiguity because it now gives a sense of whether or not the guards are standing themselves. They could be sitting. They seem to themselves to be standing. NORVIN RICHARDS: Oh. Oh. Oh. I didn't even think about that. Right. Right. Right. I was-- yes. Oh, gosh.
I was thinking about the "seeming"-- "stand" as just a-- meaning be, basically. But you're right. There's another way that things could seem and not be true. They could think that they're standing in front of the buildings, but actually they're lying down. These are guards with a different kind of problem. Yeah, good point. Yeah? So far so good? All right. So ambiguity in the first example, and ambiguity going away in the second example. And Raquel thinks, and I think we decided to agree with her, that you can get the ambiguity back in that second kind of example by moving "every building." Let's put Raquel's point aside and just concentrate on these for a sec. Remember way back when, we had the idea that a sentence like "Two guards seemed to be standing in front of every building" is ambiguous. Remember, we decided this ambiguity comes from the fact that "every building" can, but doesn't have to, undergo this operation of QR, quantifier raising, that gets it past two guards. And we also convinced ourselves that this operation of quantifier raising has some restrictions on how far it can go. The evidence for that had to do with sentences like the last one on this slide. "Two guards think that I am standing in front of every building" isn't ambiguous. Doesn't mean-- so it can only mean there are two guards, two particular guards, who are having real problems who seem to see me in front of every building. It doesn't mean there are two guards who think I'm in front of building 32, and there are two other guards who think I'm in front of the Student Center, and there are two other guards who think that I'm in front of the library. It doesn't mean that. So it only has the reading where "every building" hasn't undergone QR past "two guards." It only has the meaning where you interpret "two guards" first. It's like, there are two guards who have this property. They think that I'm standing in front of every building. It doesn't mean it's true of every building. 
The set of buildings and the set of things that two guards think that I am standing in front of has the following intersection. Do people see that? So there's no ambiguity in that last example. And our story about why there's no ambiguity in that last example, this description of that fact was to say, OK, we're learning something about QR. QR can't-- when you have two guards that are standing in front of every building, "two guards" and "every building" are close enough that QR can reorder them, and you get ambiguity. But in this one, "Two guards think that I'm standing in front of every building," "every building" can't QR that far. And then that gets us into the question. So how far can QR go? How does it work? We got started on that question. Do people remember all this? This is all review. This might make sense. Did you have a question, Katrina? AUDIENCE: Yeah. What does it mean that "two guards" can reconstruct back into the embedded clause? NORVIN RICHARDS: Oh, OK. I'm sorry, I'm doing this slide out of order. So I'm talking about the last sentence. In the last sentence, there's no ambiguity, and we convinced ourselves that's because "every building" can't get out of that embedded clause. So question, what counts as a clause? How far can QR go? The fact that there's ambiguity in "Two guards seem to be standing in front of every building," we said, we could have handled that in two different ways. Here, let me write that on the board. For some reason, there are-- here we go. There are many erasers on the board, but there is no chalk. I think someone is trying to send me a message. So "Two guards seem to be standing in front of every building." So recall that for independent reasons, we thought that "two guards"-- in a sentence like this, "Two guards seem to be standing in front of every building," "two guards" is here, but it starts down here and moves.
We have this NP movement that took things from embedded clauses and moved them into the subject of things like "seem" in order to satisfy the EPP was the story. So we have this operation and movement. So now there's this open question. We want QR to be able to get "every building" past "two guards." But "two guards" is in two places. It's in the lower position. It's also in the higher position. So we could say QR can go from here all the way to the beginning. That's a way of saying that embedded clause does not count as a clause. QR can get out of that kind of clause, that infinitival kind of clause. But another kind of thing that we could say would be no, the reason this is ambiguous is that "every building" can get this far to the edge of the embedded clause. And so it can't, in fact, get this far. Can't get out of that embedded clause. Basically, this involves saying QR can't escape TP. So it can't escape any kind of TP. Not a finite clause like the one in the last example here, "Two guards think I'm standing in front of every building," and also not an infinitival clause. And we saw before some reasons to think that that's right, that the only reason this is ambiguous, it's not because "every building" can get out of an infinitive. "Every building" is stuck here. It can't get any higher than this. And the reason that this is ambiguous is that "two guards," yes, it's pronounced here, but it can be interpreted here via reconstruction. Which is just a name for this mysterious fact. You can interpret things lower if they moved from lower positions. We saw some evidence for that idea before. I'm about to show you another bit of evidence for that idea. But is this much clear? It's late in the semester. Probably almost nothing is clear anymore. It's spring outside. Yeah, it's very hard to make things clear. Do people want to ask questions about this? OK, all right. 
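The locality claim under discussion, that QR cannot move a quantifier out of its own TP, reduces to a one-line check. The clause indexing below (0 for the matrix TP, 1 for the embedded TP) is an illustrative assumption of mine, not notation from the lecture.

```python
# Toy check of QR locality: a quantifier may raise only within the TP
# it sits in, so it can take scope over the subject only if the
# subject's TP is that same TP or a deeper one (a larger index).

def inverse_scope_possible(subject_tp, object_q_tp):
    """Can the structurally lower quantifier (e.g. 'every building')
    QR over the subject (e.g. 'two guards')?"""
    return subject_tp >= object_q_tp

# "Two guards are standing in front of every building":
# both quantifiers in the one TP -> inverse scope available, ambiguous.
print(inverse_scope_possible(0, 0))

# "Two guards think that I'm standing in front of every building":
# 'every building' is trapped in the embedded finite TP -> surface scope only.
print(inverse_scope_possible(0, 1))
```

Reconstruction then enters the picture as a way of lowering `subject_tp`: if "two guards" can be interpreted in the embedded clause, the check succeeds from down there.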
So here's a new bit of ambiguity, a new bit of evidence that this is right, that this is the right way of handling the ambiguity for the first example in this slide, "Two guards seem to be standing in front of every building." So we'll say, the reason that's ambiguous is that QR, first of all, can't get out of TP. It can't get out of that embedded infinitival clause. The earliest it can get is here. So "to be standing in front of every building." And the reason you have ambiguity there is that "two guards" has the option of being interpreted down here, this mysterious option that we've been seeing for movement, that you have the option of interpreting things as though they were lower than they actually are, as though they are still where they used to be. Here's the bit of evidence. "Two guards seem to themselves to be standing in front of every building," we decided that's not ambiguous. "Every building" can't take scope above "two guards." Well, let's think about what would happen. I'm going to add "to themselves" here. We're fooling around with the possibility that the ambiguity in "Two guards seemed to be standing in front of every building" is there because "two guards" has the option of reconstructing into the lower clause, has the option of being interpreted as though it was still down here. But that's why that sentence is ambiguous. So why isn't this sentence ambiguous? Well, what we want is for something to prevent the "two guards" from reconstructing down to here. If the "two guards" had to be up here, if they couldn't be interpreted in this lower position, well, then QR wouldn't be able to get past them, because the hypothesis is that QR can only get this far. Is there something that will stop the guards from reconstructing? Why shouldn't "two guards"-- suppose we got rid of "two guards" up here. So we would have "Seemed to themselves, two guards, to be standing in front of every building." Would anything go wrong with the interpretation of that?
Recall that what makes the sentence unambiguous is putting in "to themselves." If "to themselves" is not here, if it's just "seem," well then "two guards" can reconstruct. But when we add "to themselves," it can't anymore. It has to stay up high. What's "themselves"? AUDIENCE: An anaphor? NORVIN RICHARDS: It's an anaphor, right? It's a reflexive. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Yeah, that's what you were about to say. It's a reflexive. Somebody just had a great idea. It's a reflexive. Reflexives are subject to condition A, right? What does condition A say? What do reflexives need in life? AUDIENCE: They have to be bound. NORVIN RICHARDS: They have to be bound, exactly. They need something to bind them. They need something that c-commands them and co-refers with them. That's why you can say, "The guards like themselves," but not "Themselves like the guards." "Themselves" needs to be c-commanded by "guards," or some other noun that refers to them. So what would go wrong if you reconstructed "the guards" down here if you have "themselves" here is that, yeah, "guards" would be able to be scope ambiguous with respect to the buildings. But "themselves" would not have a binder anymore. What we're learning-- so when I was first showing you some examples of reconstruction, I think I said something like, it's as though anaphors get to look back on the past and say, well, I'm not bound anymore, but I used to be. And that's good enough. Something like that. This is showing that that's not actually quite the right way to think about reconstruction. It's more like if you move something, it exists in various positions in the clause. But you have to pick one for interpretation. For things like quantifier scope and binding, you've got all of these places where it is simultaneously. But the waveform has to collapse in order to give it a scope and an interpretation for binding.
You have to pick one of the positions for it to be in. You run the experiment, you detect the electron or whatever it is, and then you know where the guards are. And once you know where they are, they have to be in a position where they behave as being just in that position both for anaphor binding and for scope. That's what these data seem to be telling us. So "Two guards seem to be standing in front of every building." "Two guards" is in two places simultaneously. And you get to choose, is it in the higher position or the lower position? "Two guards seem to themselves to be standing in front of every building" forces you to make a particular choice. You have to keep "the guards" up high so that "themselves" will have a binder. And by choosing that, you also make a choice about how quantifier scope is going to work. You make it so that the quantifiers can't be ambiguous anymore because the guards are too high. The buildings cannot get to them. QR doesn't go that far. That's the way quantifier scope seems to work. It's the way reconstruction seems to work. So the moral of all this is reconstruction involves picking a particular position to interpret a moved phrase in. I gave you this idea or this metaphor of you've got this thing, which is moving, and you get to rewind the tape and decide-- look at it in a different position, not the position where it's actually being pronounced. And you can think about it that way if you want. But the point is you have to pick one particular position as the position where it gets-- you don't get to say, OK, it'll be here for anaphor binding, but for quantifier scope, it will be here. That's not the way it works. You pick one spot for it to be for all of its interpretive properties. This way of-- this fact about reconstruction, just the existence of reconstruction and this fact about how it works, is sometimes taken as a bit of evidence for a particular way of talking about movement, what's called the copy theory of movement.
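The moral just drawn, pick one copy and let it settle both binding and scope, can be sketched as a two-stage pipeline: Condition A first throws out copies that would strand the anaphor, and whatever copies survive determine the available scope orders. The clause indexing (0 for matrix TP, 1 for embedded TP) is again an assumption of mine for illustration.

```python
# Toy "pick one copy" calculus for the guards-and-buildings examples.

def surviving_copies(copies, anaphor_tp=None):
    """Keep only the copies of the moved subject from which a dependent
    anaphor (if any) still has a binder: the copy must sit in the
    anaphor's clause or higher (a smaller index)."""
    if anaphor_tp is None:
        return list(copies)
    return [c for c in copies if c <= anaphor_tp]

def scope_readings(copies, qr_tp, anaphor_tp=None):
    """Scope orders for 'two guards' vs. 'every building', assuming
    'every building' can QR only within TP number qr_tp."""
    live = surviving_copies(copies, anaphor_tp)
    out = {"two > every"}                  # surface scope is always available
    if any(c >= qr_tp for c in live):      # some surviving copy is low enough
        out.add("every > two")             # for QR to cross it
    return out

# "Two guards seem to be standing in front of every building":
# copies in TP 0 and TP 1; no anaphor; both readings survive.
print(scope_readings([0, 1], qr_tp=1))

# "Two guards seem to THEMSELVES to be standing in front of every building":
# the anaphor in the matrix clause pins the subject to its highest copy,
# and the inverse-scope reading disappears.
print(scope_readings([0, 1], qr_tp=1, anaphor_tp=0))
```

The crucial design choice, mirroring the lecture's point, is that one and the same surviving copy feeds both checks: you don't get to use the high copy for binding and the low copy for scope.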
And so I just want to show you that and then we're going to leave this whole area behind for a little bit. The copy theory of movement. So when we started syntax, cast your mind back to when we started syntax, when you were younger and more carefree. When we started syntax, I started by saying something like, syntax is going to be a little bit like morphology. So cast your mind back to morphology. We said we're going to create these tree structures via repeated application of merge. So we're going to take things and we're going to put them together and make larger things. And Merge is going to be recursive in the sense that it gets to take things that it created and merge them with other things. So you can merge things that you take out of the lexicon, morphemes, or words, or whatever, and you can merge those things together to form larger things. But you can also merge the phrases you've created together with other phrases. So for example, suppose we want to construct the embedded clause of "I don't know what Mary will eat," where I'm using an embedded clause just so we won't have to think about what the auxiliary does. So the idea was you're going to do the embedded clause by merging "eat" with "what." Remember, we have the idea that forming this kind of embedded clause involves moving "what." And so we're going to start "what" as the object of "eat." So we'll take these two things from the lexicon, "eat" and "what," and we'll merge them together to make this bigger thing, this verb phrase. And then we'll take that thing, that verb phrase that we created, and we'll merge that together with a new thing that we'll get from the lexicon, "will." Yeah, so we'll take "will." We'll put that together with the verb phrase. Now we have an even larger thing and we have "will eat what." And then we take that thing that we've made, and we merge it together with a new thing that we get out of the lexicon, "Mary." So we take "Mary," and merge that together with T-bar. 
As a comparatively simple example, if it were, "I don't know what the students will eat," then I'd have a bit where it had to create "the students" by merging "the" with "students." I have a noun phrase. I merge my noun phrase with my T-bar, and now I've got my TP. So far so good? This is-- I'm just building a tree. One more step, I'll grab one more thing from the lexicon. We'll merge a C with what we've created so far, a C and a TP. And now it's time to do wh-movement. And the idea behind the copy theory of movement is look, this is just another case of Merge. So all of our cases of Merge, we've always had the option of merging either something we've created via Merge, we've merged the VP with something, merged the T-bar with something, merged the TP with something, or grabbing something new from the lexicon and putting that in. We've always had that option right along. And the idea is this thing that we're calling movement, it's just another example of Merge. We'll take this C-bar that we've created and we'll merge it with something that's already in the tree, with "what"? So we've got-- now "what" is in two places. So it's in the place where we merged it first and it's in this new place that's up at the top. And so we end up with "What Mary will eat what," which raises certain questions. Like why don't we say, "What Mary will eat what?" So we need a theory of what happens when you merge a single thing in two places. But the result is that "what" is now in more than one place in the tree. And that's all that Merge-- that's all that movement is. So I've been showing you trees where we created trees via Merge and then there were arrows. So you took things out of the places where they were, and you put them in new places, and they weren't in the old places anymore. And now all this talk about reconstruction makes it look like that was misleading. There's a sense in which thing-- when you move things, they're in all of the places where they ever were. 
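The derivation just walked through can be written down as code: Merge is a function that pairs two things, and movement is nothing but merging something that is already in the tree a second time. The list encoding and the `pronounce` helper are my own illustrative assumptions; the choice of which copy to silence is the subject of the next paragraphs.

```python
# Build "(I don't know) what Mary will eat" by repeated Merge, with
# wh-movement as re-Merge of "what".

def merge(a, b):
    """Merge is binary: it pairs exactly two things into a larger one."""
    return [a, b]

vp   = merge("eat", "what")     # [eat what]
tbar = merge("will", vp)        # [will [eat what]]
tp   = merge("Mary", tbar)      # [Mary [will [eat what]]]
cbar = merge("C", tp)           # [C [Mary [will [eat what]]]]
cp   = merge("what", cbar)      # re-Merge "what": now in two positions

def leaves(tree):
    """Flatten the tree into its terminal string."""
    if isinstance(tree, str):
        return [tree]
    return leaves(tree[0]) + leaves(tree[1])

print(leaves(cp))   # the literal structure: what C Mary will eat what

def pronounce(tree, target, which="higher"):
    """Linearize, pronouncing only one of the two copies of `target`:
    the first encountered (higher) or the second (lower)."""
    words = [w for w in leaves(tree) if w != "C"]        # C is silent here
    first = words.index(target)
    last = len(words) - 1 - words[::-1].index(target)
    drop = last if which == "higher" else first
    return " ".join(w for i, w in enumerate(words) if i != drop)

# English-style: pronounce the higher copy (overt wh-movement).
print(pronounce(cp, "what", "higher"))
# Mandarin-style: pronounce the lower copy (wh-in-situ).
print(pronounce(cp, "what", "lower"))
```

Note that nothing in `merge` distinguishes "movement" from ordinary Merge; the two copies of "what" only become an issue when the structure is handed off for pronunciation.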
For interpretation, it sure looks like that's the way to think about it. You can interpret them as though they were in any of the places that they've ever been. And the copy theory of movement is meant to be a way to handle that. It's as though when you move things, what you do, really, is take something that you've already merged in the tree and you merge it again. And then there have to be principles that explain where you pronounce it when you merge it more than once. Do you pronounce this "what Mary will eat?"-- "I don't know what Mary will eat." Do you pronounce it, "I don't know, Mary will eat what"? We've talked about the fact that some languages have wh-movement and some don't. So maybe there are languages that pronounce the higher copy of "what," that's English, and languages that pronounce the lower copy of "what," that's, like, Mandarin, say. And similarly for reconstruction, the way to talk about this is yeah, there are two copies of "what," and you have to pick one to be the one that you'll interpret. And the same deal with "two guards," or whatever all else. So "what" is now in two places. Yeah? AUDIENCE: Do you always pronounce the higher? NORVIN RICHARDS: So one theory about what's going on is in English, you pronounce the higher of two copies when you're doing wh- movements. But maybe the-- so we've said there are languages in which the way to ask "What did Mary eat?" is to say literally, "Mary ate what." So Mandarin is like that, for example, or Japanese is like that. There are lots of languages like that. And maybe a way to talk about those languages is as far as movement goes, they're doing all the same movement that English is. They're just pronouncing the lower copy. And English is pronouncing the higher copy, and those are languages that pronounce the lower copy. That's one way to talk about what's going on. 
So copy theory of movement is meant to be responsible for these data about reconstruction, which really seem to make it look like it's misleading to say when you move something-- we talk about it as movement because, well, we're pronouncing it over here and not over there. And that led us to think of it as now it's here and it's not there anymore. Reconstruction makes it look like no, there's some sense in which it's simultaneously in all the places it's ever been, and you get to choose which of the places to interpret it in. And maybe there's something similar going on with pronunciation, as I just said. I don't know if anybody remembers Finnish vowel harmony. We're doing nostalgia today. So I've asked you to remember syntax, I've asked you to remember morphology. Now remember this thing that we did very fast about phonology. So there's this observation about Finnish that properties like front or back for vowels in Finnish are just constant across the word, more or less. So there are words in-- Finnish has what's called vowel harmony. There are words in which all the vowels are back and words in which all the vowels are front. So there are words like [SPEAKING FINNISH], which is the Finnish word for "table." And if you want to put that in the case that expresses the meaning "on the table," there's a suffix, which, when you add it to [SPEAKING FINNISH], is pronounced "lä," with the front vowel, because [SPEAKING FINNISH] has front vowels. And so when you add "la" to it, you get "lä." If you added "la" to the word for "chair," which is [SPEAKING FINNISH], you'd get [SPEAKING FINNISH]. So the vowel would be back in the "la." And the idea that I floated, which is an old idea in phonology, is that we should think of properties like front or back in Finnish as being smeared across the whole word. So it's not like for every Finnish vowel in a Finnish word, you get to specify whether it's front or back. Front or back is just a property of the whole word. 
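That word-level picture of harmony can be sketched as a tiny allomorph chooser. A hedged toy, not real Finnish morphophonology: it ignores the neutral vowels i and e and ignores consonant gradation, and "kesä" 'summer' and "talo" 'house' are my examples, chosen because their real adessive forms happen to come out regular.

```python
# Toy sketch of Finnish-style vowel harmony as a whole-word property.
# Deliberately simplified: no neutral vowels (i, e), no consonant
# gradation. kesä 'summer' and talo 'house' are illustrative examples.

FRONT_VOWELS = set("äöy")

def is_front(word):
    """Treat frontness as a single feature of the whole word."""
    return any(ch in FRONT_VOWELS for ch in word)

def adessive(word):
    """Add the 'on/at' suffix, choosing the allomorph (-llä vs. -lla)
    that agrees with the word's harmony class."""
    return word + ("llä" if is_front(word) else "lla")

print(adessive("kesä"))  # kesällä, 'in the summer' (front harmony)
print(adessive("talo"))  # talolla, 'at the house' (back harmony)
```

The suffix itself carries no front/back specification of its own; it inherits whichever value the word already has, which is the "smeared across the whole word" idea.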
It's in all of these places at once, this property of being front or back, and it appears in more than one place at once. What I'm talking about now, copy theory of movement, is like that for syntax. There are things that are in more than one place at the same time because you merged them to the tree over and over again. And then it is the job of the phonology and the semantics to take that property of these syntactic representations and figure out what the heck to do with them. So syntax is handing phonology structures that literally look like "What Mary will eat what?" And phonology has to look at that and go, what? Wait, which of these? Where am I going to pronounce-- I have to pronounce "what" once, and I have to choose which of these places to pronounce it. And maybe semantics is doing something similar. Maybe that's what all this talk about reconstruction is doing. Semantics similarly looks at that representation and says, oh, I've got this thing that's in more than one place. I can only interpret it in one place, and I'll pick one. And then big literature figuring out how do we pick? What are the rules about where we pick? Lots of work on this. Copy theory of movement. So yeah, reconstruction involves picking a particular position to interpret a moved phrase, both for binding theory and for quantifiers. Topic shift. I think I can promise that I will never again talk about guards and buildings. So if anybody has any questions about guards and buildings, go ahead. Get them out of your system. And we're going to leave them behind. No more guards, no more buildings. Let's talk about ellipsis again. We talked about ellipsis before. So this is a property of sentences where you can leave out parts of the sentence. Languages vary with respect to how much of this they do. English has VP-ellipsis, which is comparatively exotic. There are other languages that have VP-ellipsis, but it's certainly not every language. 
So VP-ellipsis is this option that English and some other languages have where if you're about to repeat a verb phrase, you have the option of just leaving out the second verb phrase. So you can say things like, "Adam ate an apple and Eve did, too," where the missing verb phrase is "ate an apple." There's another phenomenon which is a lot more cross-linguistically common. It's called sluicing, where you get to leave out most of a question. So you can say things like, "Adam ate something, but I don't know what" where this means "I don't know what Adam ate." So what's been left out is the rest of the question, "I don't know what Adam ate." All you pronounce is the "what" part. This is called sluicing. There's an old name for it. It's called that because-- I think I've made this mistake sometimes on slides where something that's called-- we've been calling TP, which people these days usually do call TP. It used to be called IP. It's just a different name. So now we call it a tense phrase. It used to be called an inflection phrase. And we've had some slides where I accidentally left in an IP and I apologized, and had to go in and fix it before I posted the slides. I hope I managed to do that for all the slides I posted. Before it was called an IP, it was called an S. So if you-- S stood for sentence. So it was before our terminology got as sophisticated as it is today. So S stood for sentence. And sluicing was a cute name for this. The idea was when you say "I don't know what Adam ate," you're getting rid of "Adam ate." This is the S. So you are "S-loosing." You're getting rid of the S. Yeah, so that's how it got the name "sluicing." Yeah, it's a cute name. I'm sorry, that's the only excuse I have for telling you that this thing used to be called S. That, and that if you ever read ancient literature in syntax, you'll sometimes see it called S. So VP-ellipsis. Fairly common phenomenon. Sluicing, possibly a universal phenomenon. 
There's a lot of work on whether things that look like sluicing really are sluicing in various languages. And at least one way of understanding what's going on here is that these are-- so there are a couple of ways of understanding what's going on here, and we will talk about them. But here's one way you could say, yeah, it's possible sometimes to just refrain from pronouncing parts of a sentence. So if you have built a sentence with the structure "Adam ate something, but I don't know what Adam ate," well, you can say that. But you also have the option of refraining from repeating the "Adam ate" at the end there. You can just leave it out. That's one way to talk about it. There's another way to talk about it, of course, which is to say, no, look, if you say "Adam ate something, but I don't know what," well, you have said, "Adam ate something but I don't know what." We need a structure for that where there's no sentence after what. There's just nothing there. And yeah, it's the job of the semantics to interpret that as "I don't know what Adam ate." But the syntax is not building a structure, "I don't know what Adam ate," and then refraining from pronouncing part of it. There's none of this ellipsis stuff. You're just-- we have a pretty sophisticated way of interpreting things like "I don't know what." Do people see the difference between these theories? And one of them says the syntax always creates complete sentences, and then the phonology has the option of refraining from pronouncing parts of them. Yeah. The other says, no, the syntax has the option of creating things that are not complete sentences, and then the semantics just has to deal, figure out how to interpret those things. Those are two takes that people have had on this kind of phenomena. Yeah? Now let me give you one reason to take one of these takes seriously, more seriously than the other one. Though, they are both live options, and people argue for both of them. Different people. 
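The first of those two takes, build the whole sentence and then decline to pronounce part of it, can be sketched like this. The word-plus-flag representation is my own toy, not anyone's actual formalism: pronunciation skips material marked as elided, while interpretation still sees all of it.

```python
# Sketch of the deletion theory of sluicing: the syntax builds the
# full sentence, and an `elided` flag tells the phonology to skip a
# span. The (word, elided) pairs are a toy representation of my own.

sluice = [
    ("Adam", False), ("ate", False), ("something,", False),
    ("but", False), ("I", False), ("don't", False), ("know", False),
    ("what", False),
    ("Adam", True), ("ate", True),   # present in syntax, unpronounced
]

def pronounce(words):
    """Phonology: skip anything marked as elided."""
    return " ".join(w for w, elided in words if not elided)

def interpret(words):
    """Semantics: sees the whole structure, elided material included."""
    return " ".join(w for w, _ in words)

print(pronounce(sluice))
# Adam ate something, but I don't know what
print(interpret(sluice))
# Adam ate something, but I don't know what Adam ate
```

The rival theory would instead have the syntax build only what `pronounce` returns, and push all the work of recovering "Adam ate" into the semantics.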
Some people argue for one, other people argue for others. Most people do not argue for both. Here's one. We've talked about preposition stranding. So preposition stranding is this fact that in certain strange and exotic languages like the one I'm speaking, you have the option of asking two kinds of questions if you want to ask a question about the object of a preposition. You can ask questions like "Who was he talking with?" Or you can ask questions like "With whom was he talking?" That is, if you want to ask a question about the object of "with," you can just wh-move the object of "with," and leave the "with" behind: "Who was he talking with?" Or you can move the entire prepositional phrase: "With whom was he talking?" I, at least, would much rather say the first of those. But some people are able to say the second one. It depends on how savagely you were beaten by English teachers in high school. So your English teachers in high school maybe tried to convince you that the right way to speak English involves the sentence on the right. They did that-- I mean, they had the best of intentions. They did that because most languages only have that second option. So English is quite unusual in having that first option, and scholars of the English language sometime in the 1700s decided that English would be much cooler if it were like every other language. It's basically grammatical peer pressure. So they were like, wouldn't it be nice if English were more like Latin, and French, and other civilized languages where you can't leave prepositions behind? So there are some languages that have this option of what's called preposition stranding, leaving "with" behind and wh-moving the object of the preposition out, as in "Who is he talking with?" Most languages don't have that option. You only have the option on the right. This is all-- we've said all of this. So I say we should be proud of the fact that we can say, "Who was he talking with?" It should be on our currency. 
It should be part of the official motto of the US. "Who are you talking to?" That should be our motto. No? Now as I said, most languages are not like this. So here are some Russian examples. In Russian, this is Russian for "Peter was talking with somebody but I don't know whom." And similarly, let's concentrate first on the sentences at the end. In Russian, Russian is like most civilized languages in which you cannot say, "Who is he talking with?" You can't say, [SPEAKING RUSSIAN]. You have to say [SPEAKING RUSSIAN]. So "With whom was he talking?" So the preposition "with," which is [SPEAKING RUSSIAN] in Russian-- Russian has longer prepositions than that, but that's one of them, a preposition that's just a consonant-- you have to take that preposition along with the wh-word. And relatedly, possibly, Russian has sluicing. But in Russian-- so let's go back to English. In English, if you do sluicing, you have two options. You can either say, "Peter was talking with somebody, but I don't know who," leaving the preposition out. Or you can say, "Peter was talking with someone, but I don't know with whom." Yeah, you can say that, too. Russian only has the second option. You can only say the Russian version of "Peter was talking with someone, but I don't know with whom." You can't say in Russian, "Peter was talking with somebody, but I don't know who." And this is English and Russian, but this is quite widespread. It's generally true. There are some interesting counterexamples, which people get interested in and try to figure out what's going on. But it's generally true that if you are like English in allowing prepositions to be left behind, you can leave prepositions out in sluicing. And if you are like Russian in which your prepositions have to move along with your wh-phrase, then your prepositions have to be kept under sluicing. So let's think about what that means for these theories of sluicing. 
The two theories of sluicing again, one of them-- let's take the first one first. One of them said, when you say "Peter was talking with somebody, but I don't know who," well, what you're doing is you're-- the syntax is generating "Peter was talking with someone, but I don't know who he was talking with." And then you leave out everything in the lowest sentence except for the wh- word itself. So you start off with "Peter was talking with somebody, but I don't know who he was talking with," and then you leave out "he was talking with." So you end up with "I don't know who." Now that's a story where sluicing involves creating a complete sentence and then electing to not pronounce part of it. If we have that kind of theory of sluicing, then the facts in English and Russian boil down to a single difference between the languages. So the single difference between the languages has to do with whether you have to take prepositions along with you when you wh-move the object of a preposition, whether you have the option of saying "Who is he talking with?" So there's a relation between the fact that in English but not in Russian, you can say, "Who was he talking with?" And the fact that in English, but not in Russian, you can say, "Peter was talking to somebody, but I don't know who." Those are both the same fact. The fact is when the syntax is constructing the sluiced example, in English, you have the option of saying, "I don't know who he was talking with," and then leaving out "he was talking with." In Russian, you don't have that option. You have to say, "I don't know with whom he was talking." And that's why when you do sluicing, you have to say, "but I don't know with whom," and you can't say, "I don't know who." So these facts are related to each other. If you can strand prepositions, then you can leave them out in sluicing. And those facts are connected if sluicing involves, well, actual wh-movement, together with a decision to fail to pronounce part of a sentence. 
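On that story, one parameter does double duty. Here is a toy formalization of the prediction (the parameter name and the function are mine, purely illustrative): whether a language strands prepositions determines both which questions it allows and which sluice remnants it allows, because the sluice contains a real wh-movement step before the deletion.

```python
# Toy formalization (mine) of the single-parameter prediction: a
# language that lets wh-movement strand prepositions also lets
# sluicing leave them out, since sluicing is wh-movement plus
# non-pronunciation on the deletion theory.

def sluice_remnants(strands_prepositions):
    """Grammatical continuations of
    '... was talking with somebody, but I don't know ___'."""
    remnants = ["with whom"]          # pied-piping is always an option
    if strands_prepositions:
        remnants.append("who")        # stranding feeds the bare remnant
    return remnants

print(sluice_remnants(True))   # English-like: ['with whom', 'who']
print(sluice_remnants(False))  # Russian-like: ['with whom']
```

A theory with no wh-movement inside the sluice would have to state the two facts separately, which is exactly the argument being made here.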
Yep, that's one argument for that approach. On the other hand, if you have the kind of approach that's like, no, look, you don't always create complete sentences. When you say, "Peter was talking with somebody, but I don't know who," the syntax is just creating "I don't know who." And then it is the job of the semantics to figure out what the heck you mean when you say that, and to supply the rest of the sentence from context. Well then, it's less clear why these facts should track each other, the fact about whether you can strand prepositions and the fact about whether you can leave them out in a sluice, because the sluice doesn't, on that kind of theory, literally involve any wh-movement. Another similar kind of fact. German has verbs that assign entertaining cases to their objects. So English does not have a whole lot of case morphology, but way back when, we talked a little bit back in syntax about case morphology, this idea that nouns have morphology on them telling you roughly what they're doing in the sentence. So we talked about the fact that there are languages that have a marker that says, hey, I'm the subject, or hey, I'm the object. There are languages that have other markers, these case markers, we called them. German has more of this stuff than English does. And in particular, German has this phenomenon sometimes called quirky case, where there are verbs. One of the fun things about-- one of the many fun things about learning German is that if you want to learn a verb like "schmeicheln," which is the German verb to flatter, first of all, you must learn to pronounce it possibly better than I just did. And second, you must learn that its object is dative. Why? Well, it just is. So part of learning German is learning that if you flatter someone, you flatter them datively. 
To say that the object of "schmeicheln" is dative is to say that when you want to say he wants to flatter someone, the word for someone is "jemandem" and ends in an M, which is the mark that it's dative. If you wanted to say, for example, he wants to praise someone, well, praise, "loben," is like most transitive verbs in that its object is accusative. So now someone isn't "jemandem," it's "jemanden." A fun, quirky fact about German. If any of you were considering learning German, this is one of the many things you will get to learn. Yeah? Yeah. Now cool fact, if you sluice in German-- German's like English, it has sluicing. If you sluice in German, if you want to say, "He wants to flatter somebody, but they don't know who," you can say that in German. If you want to say, "He wants to praise somebody, but they don't know who," you can say that in German, too. But if you say, "He wants to flatter somebody, but they don't know who," well, the word for "who," it's like the word for "someone." It has different dative and accusative forms. So as you can see in the slide, "He wants to flatter someone, but they don't know who," the word for "who" has to be "wem". Has to be dative. Whereas "He wants to praise somebody, but they don't know who," the word for "who" has to be "wen," has to be accusative. So let's think about what that means on the two approaches to sluicing. On the approach to sluicing where sluicing involves creating a complete sentence and then forgetting to say part of it, just one theory of sluicing we talked about, well, this makes sense. Because that first sluicing example, you're saying, in German, "He wants to flatter somebody, but they don't know who he wants to flatter." And "who" starts off as the object of "schmeicheln," a version of "schmeicheln," that you're not actually going to say. And because it's the object of "schmeicheln," it is dative, and so it gets pronounced with dative. 
And then if you want to say, "He wants to praise somebody, but they don't know who," well, you start off by constructing, "He wants to praise somebody, but they don't know who he wants to praise." "Who" starts off as the object of "praise," and so it's accusative. And so we have an account of why these words for "who" are dative or accusative, depending on the properties of a verb which you cannot hear. The idea is yeah, you can't hear it, but it's there. And it does its thing. It makes "who" either dative or accusative, and then you move "who" out of there, and then you forget to pronounce the verb. Yeah, that's how sluicing works. On the alternative approach to sluicing that says, no, it's just semanticists are smarter than you think they are, you can give them, "He wants to flatter someone, but they don't know who," and it will be their job-- that will just have all the structure that you can hear. There's no complete sentence down there. The end of the sentence is just "who." Well, the semanticists will have to be so smart that they can know that the verb "to flatter" in German assigns dative case to its object, even though "who" is, in some sense, not the object of "flatter" on that story. It's never been anywhere near the verb "flatter." So there are some complications for that approach in these kinds of facts. So those are two reasons to take seriously the idea that ellipsis, this has been concentrating on sluicing, is a process in which you create a complete sentence, a syntactically present structure. When you say "John is eating something, but I don't know what," you really are saying "John is eating something, but I don't know what John is eating." And then you are leaving out "John is eating" from the end of the sentence. So there's this process of ellipsis that gets rid of stuff that's syntactically present so that it's not pronounced. But it is interpreted. And it can have effects on things like case, as we saw in German. 
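The German pattern falls out mechanically on the deletion story. A toy sketch (the lexicon and form tables are tiny stand-ins of mine; the German facts are the ones from the lecture): the wh-word gets its case from the verb while the verb is still in the structure, and only then does the verb go unpronounced.

```python
# Toy sketch (mine) of how quirky case survives sluicing on the
# deletion theory: the silent verb still assigns its case, so the
# surviving wh-word surfaces as dative or accusative accordingly.

CASE_ASSIGNED = {
    "schmeicheln": "dative",      # 'to flatter': quirky dative object
    "loben": "accusative",        # 'to praise': ordinary accusative
}

WH_FORM = {"dative": "wem", "accusative": "wen"}   # forms of 'who(m)'

def sluiced_wh(elided_verb):
    """The remnant wh-word bears the case the (now unpronounced) verb
    assigned to it before ellipsis."""
    return WH_FORM[CASE_ASSIGNED[elided_verb]]

print(sluiced_wh("schmeicheln"))  # wem: '...but they don't know whom-DAT'
print(sluiced_wh("loben"))        # wen: '...but they don't know whom-ACC'
```

On the no-structure theory, the semantics would need access to this same lexical table to get the form right, even though no verb ever combines with the wh-word syntactically.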
Now let me show you one bit of evidence for the alternative approach, actually, just so that you can see why people take the other theory seriously. Here are some more sluicing examples. The classic examples are from the literature. "She bought a big car, but I don't know how big." Fine. "A biography of one of the Marx brothers is going to be published this year. Guess which?" Also fine. Do people agree that these are fine? I think they're fine. All right, let's think about what would be in the part that's not pronounced. The way we've been talking, these should mean, "She bought a big car, but I don't know how big she bought a car," or "A biography of one of the Marx brothers is going to be published this year. Guess which a biography of is going to be published." And the problem with that is that these are not grammatical sentences. Do people agree with that? If you were to pronounce the whole thing, you would die. Well, maybe not die. It would depend on your previous medical condition. Yeah? AUDIENCE: [INAUDIBLE] published. NORVIN RICHARDS: I'm sorry, say it again. AUDIENCE: I meant-- NORVIN RICHARDS: You're just-- AUDIENCE: You said you would what? NORVIN RICHARDS: Oh, you're just testing my claim that if you said this aloud, you would die? Oh, gosh. So-- AUDIENCE: "A biography of one of the Marx brothers is going to be published this year and guess what the biography of (blank)." Whatever goes in the blank is going to be published. OK. NORVIN RICHARDS: How do people feel about this? I didn't say that people were going to kill you. I said you were going to die. So you shouldn't feel like you have to harm him in any way. Raquel, did-- AUDIENCE: I think the way that I think of filling it is-- "I don't know how big the car is," or "I don't know which biography is in which--" thinking [INAUDIBLE] it feels kind of weird. It feels like there's weird movement going on there. NORVIN RICHARDS: Yeah. Yeah, yeah, yeah. So you seized on the move. 
So I said, here's a bit of evidence for the opposing camp. So the opposing camp, the people who think, no, in a sluice, there's no wh-movement going on. When you say, "I'm reading a book--" "I'm reading a book, but I can't remember what." "John is reading a book, but I don't know what." "John is reading something, but I don't know what." That sentence just ends with "what." It isn't "but I don't know what he's reading." So you don't have the complete sentence, and you're leaving part of it out. These people want to say, no, there's no wh-movement going on at all. And these kinds of examples are the kinds of examples they point at. They say, look, we know how wh-movement works. You can't do it out of certain kinds of things. We talked about in this class, that there are what are called islands, domains that you can't move out of. These are not among the islands that we talked about. I think I warned you when I showed you some islands. There are so many islands out there. It's like the Pacific out there. There are lots and lots of islands. Lots of work on charting them and figuring out how deep the water is in different places. And so here are some more islands. We don't have to worry about what they are, or why they're islands. It's clear that if you were to say this whole sentence, it would be bad. Bad things would happen to you. And the people who want to say, there's no wh-movement in a sluice, they point at stuff like this. The people who say, yes there is, too, wh-movement in a sluice, what they say is what Raquel just said. They said, no, but look. How do you know that you're not saying "She bought a big car, but I don't know how big it was." Or "A biography of one of the Marx brothers is going to be published this year. Guess which one it is"-- "...Guess which Marx brother it will be." Yeah, another possibility. Kateryna, did you have a question before? Or I'm sorry, I didn't mean to put you on track-- spot. 
So that's an argument that goes on between these two halves of the field. Yeah? AUDIENCE: Isn't this the essence of one of the questions that [INAUDIBLE], and also that there might be wh-movement or there might not be, and that there are certain things that you can't move things out of? NORVIN RICHARDS: Yeah. Yeah, yeah, yeah. Yeah, no. Absolutely. So this is-- one of the things I love about linguistics is that it's very-- here we are in 24.900 and it's not all that hard. I mean, we're late in the semester. I've had to teach you a bunch of things in order to get you here. But where I've got now is standing right on the edge of the abyss looking out. So it's not clear where the bridge is that will get us across the abyss. These are two ways that people deal with this problem. One is to say, no, there's always wh-movement. And what's going on here is that you get to rephrase these things. Notice that you have to be careful about this to avoid losing the results that we got from the previous slides. So why can't a Russian speaker say, "He wants to talk with somebody, but I don't know who it is," leaving out the "with," which they can't do? So we have to understand what's going on in those kinds of examples. There are other things people say, but this is the place where the field currently is, trying to figure out what to do about this kind of clash between data. Cool. Questions about any of that? OK. So moral of all this, and then we will switch topics again, what we're seeing, maybe we knew this before, is that our best take on how-- so here we are, almost at the end of semantics. And what we're learning is if you are trying to interpret a sentence, what you are interpreting is not necessarily exactly what you hear. So there are phenomena like reconstruction where it sure looks like you get to take something that used to be in one place and is now in another place and interpret it as though it was still where it used to be. So "Which picture of himself did he buy?" 
is grammatical, because you have the option of reconstructing "Which picture of himself" to a position where "himself" is still bound by "he." So we know that "himself" needs to be c-commanded by "he," and it isn't in the moved-to position. Or we think that there is such a thing as QR, quantifier raising, which creates ambiguities like the one in "Someone loves everyone" by taking "everyone" and moving it to a position above "someone." Yeah, we saw some evidence for that. In English, you can't see that; at least, you don't have to say "Everyone someone loves" in order for the sentence to be ambiguous. You can say, "Someone loves everyone" and get an ambiguous sentence. We saw that there are languages, like Hungarian, where you do get to see the movement, but there are plenty of languages where you don't, like English. Or ellipsis, where we've seen some arguments, some of them pretty compelling, that when you say "She bought something, but I don't know what," that it's really "She bought something, but I don't know what she bought," and you are interpreting something that's larger than what you can hear. There's this option of leaving out some of the stuff that's syntactically there. So interpretation is complicated for all kinds of reasons. There's all kinds of interesting stuff semanticists talk about. But one of the things that they have to do is figure out what exactly is being interpreted. And it's not just a matter of figuring out what you heard. There's stuff that you didn't hear that's going on. Part of what makes semantics an interesting field. Complete shift in topic. No more talk about reconstruction, very little talk about syntax. Talk about something completely different. So if anybody would like to continue to discuss sluicing, or guards, or buildings, or ellipsis, or any of this stuff, this is the time. So other kinds of things semanticists talk about. We'll see how much of this we get through before we have to stop. 
Consider a sentence like "I will only give Mary three cookies." That's a very ambiguous sentence. It can mean-- here's one thing it can mean. No, actually, let me say it a different way. You can say, "I will only give Mary three cookies." You can say, "I will only give Mary three cookies," which means something different from "I will only give Mary three cookies," which means something different from "I will only give Mary three cookies." Yeah. I think you can also say, "I will only give Mary three cookies." Yeah. I will not-- yeah. Yes? AUDIENCE: Another example I've seen of this is "Congratulations on your baby!" NORVIN RICHARDS: Yes? AUDIENCE: Or if you-- NORVIN RICHARDS: Thank you. AUDIENCE: If you put a focus-- every single word in the sentence except for "on," if you put your focus on that word, it means something completely different. NORVIN RICHARDS: Oh, I see. Yes. AUDIENCE: You can't say that. NORVIN RICHARDS: "Congratulations on your baby, not under your baby." Yes. Yeah, yeah, yeah. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: "Congratulations on your baby," yeah. So let's go back to the cookies, though. The phenomenon is called association with focus. And the idea is if I say something like "I will only give Mary three cookies," what I'm doing is inviting you to consider all of the sentences of the form, "I will give x three cookies." And I'm asserting all of these are false, except for the one where x is Mary. That's what that means. So "I will only give Mary three cookies" means I will give Mary three cookies, and I will not give three cookies to Bill, or John, or Susan, or Fred, or anybody else that's relevant. Similarly, if I say "I will only give Mary three cookies," it means I will give her three cookies, but I will not give her three hamburgers, or three hot dogs, or three cakes. Only three cookies. 
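The recipe just described, consider all the focus alternatives and assert that every one but the spoken sentence is false, can be sketched as a toy model. Everything in it is my own illustration: alternatives are simple word substitutions, and the set of true "facts" is stipulated for the example.

```python
# Toy model (mine) of association with focus: alternatives are
# substitutions for the focused word, and "only" asserts the sentence
# while denying every alternative. The "facts" are stipulated.

def alternatives(sentence, focus, substitutes):
    """All variants of the sentence with the focused word swapped out."""
    return [sentence.replace(focus, s) for s in substitutes if s != focus]

def only(sentence, focus, substitutes, facts):
    """True iff the sentence is a fact and no focus alternative is."""
    return (sentence in facts and
            not any(alt in facts
                    for alt in alternatives(sentence, focus, substitutes)))

facts = {"I will give Mary three cookies"}
people = ["Mary", "Bill", "Susan"]
foods = ["cookies", "hamburgers", "cakes"]

# "I will only give MARY three cookies": nobody else gets cookies
print(only("I will give Mary three cookies", "Mary", people, facts))    # True
# "I will only give Mary three COOKIES": Mary gets no other treats
print(only("I will give Mary three cookies", "cookies", foods, facts))  # True
# If Bill were also getting cookies, the Mary-focused claim turns false
print(only("I will give Mary three cookies", "Mary", people,
           facts | {"I will give Bill three cookies"}))                 # False
```

Notice the decomposition the lecture describes: `alternatives` is the contribution of focus itself, and `only` is one of several operators ("even," "too") that do different things with that same alternative set.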
So association with focus, it's this interesting phenomenon that shows up where, depending on which of these words I'm saying loudest, what I'm inviting you to do is consider other sentences where that word has been substituted for something else. And in this particular case, what I'm asserting is that all the other sentences are false, all the ones except the one that I've said. People generally decompose this into two parts. There's focus, which is the putting special emphasis on a particular word thing, which invites you to consider all of the alternatives to that word. And what you're supposed to do with the other alternatives to that word depends on what goes on in the rest of the sentence. So "only" associates with focus to say, all the other alternatives are false, give you false sentences. So "I will only give Mary three cookies" means I will give Mary three cookies, but I won't give three cookies to anyone else. If I say, "I will even give Mary three cookies," that means something like I will give Mary three cookies, and she is the least likely person for me to give three cookies. We all know that I hate Mary. But I'm going to give three cookies to everybody, even Mary, who I hate, and also she's diabetic. But I'm going to give her three cookies anyway. Yeah? Or "I will even give Mary three cookies" means I will give her three cookies, not just other numbers, like two or one. That's what that means. "I will give Mary three cookies, too," actually says, I will give Mary three cookies, and there's at least one alternative, which is also true, I guess. So "I will give Mary three cookies" means-- "I will give Mary three cookies, too," means I'll give Mary three cookies in addition to the other person that I'm going to give three cookies to, which maybe you already know about. Yeah? AUDIENCE: In the middle one, if you emphasize "cookies," it would be like giving her three of other things, but you're also throwing in three cookies. NORVIN RICHARDS: Yeah. 
So "I will give Mary three cookies, too." So this is another interesting aspect of association with focus. Yes, so it can mean what you just said. I'm going to write that down. "I will give Mary three cookies, too." It can mean, I will give her three cookies and I will also give her three of something else. You're absolutely right. I feel as though it can mean something else as well. "I will give Mary three cookies, too." Can people get that to mean something else? I feel as though it can also mean, not only will I give her three cookies, but I will give her a bicycle. That is, the other thing that I will give her, it doesn't have to be three. So I will give Mary a cake, I will give Mary a pie, and I will give her three cookies, too. Yeah. So this is an interesting aspect of all this that people work on a lot. So I've just been talking about putting stress on a word and saying you get to consider all the alternatives to that word. And that is true: if you put stress on a word, you get to consider all the alternatives to that word. There's this phenomenon that's sometimes called focus projection. And one way to think about it is, we know how to focus a word, you say the word louder. How do you focus a phrase? And maybe what we're learning is a way to focus the whole phrase, "three cookies," is the same way you would focus "cookies." That is, you put stress on "cookies." You might have thought that the way to focus a phrase would be to say the whole phrase louder. So "I will give Mary a cake, I will give her a pie, and I will give her three cookies, too." You might have thought that you would have to do that. You don't. You can, maybe, but you don't have to do that. One of your options is to just emphasize part of the phrase, and now the literature is off and running: which part of the phrase do you get to emphasize in order to focus the whole phrase? Because notice there are constraints. If I say, "I will give Mary three cookies, too," well. 
That's grammatical, but the only alternatives are other numbers. So "I will give Mary three cookies, too," means not only will I give her one cookie and two cookies, but I will also give her three cookies. It can mean that. But "I will give her a cake, and I will give her a pie, and I will also give her three cookies"-- that's peculiar. If I want to avoid being peculiar, I have a long way to go. But I can start. The first step on the long, long journey is to avoid saying things like that. Instead, I need to say-- sorry, I'm cracking myself up. What I need to say instead is something like "I will give her a cake, and I will give her a pie, and I will give her three cookies, too"-- to put the stress on "cookies." That's the normal way to focus the whole phrase "three cookies," is to put the stress on "cookies." So how come it's on the "cookies" and not on "three"? Maybe because the whole phrase is a noun phrase and "cookies" is the noun. Or maybe it's more complicated than that. There's all kinds of work now to figure out what's going on. So there's a phenomenon in focus projection which people get very excited about trying to figure out. What is it that lets you do this? Because it goes further than this-- So actually, let me destroy the theory that I just offered. Consider a sentence like "I will only talk about bats." Now imagine that the main thing that I emphasize is "bats." So that has one meaning, that's the easy meaning. "I will only talk about bats" means I will not talk about anything else. I will only talk about bats. But I think there are some other things it can mean. One of them, for example, is I can say "I will only talk about bats." I will not draw any disturbing drawings, or sing any weird songs. All I will do is talk about bats. I think it can mean that. So I can be contrasting the bats with other things, but I can also be contrasting the whole verb phrase, "talk about bats," with other verb phrases that I could be doing. Yeah? 
AUDIENCE: I feel like if you get that meaning, you would have to emphasize talk. "I will only talk about them. I won't sing about them, I won't draw them." NORVIN RICHARDS: Ah, that's a nice example. Let's do that. "I will only talk about bats." I think Faith is absolutely right. It's possible to say "I will only talk about bats." And I think the alternatives are the ones that you just outlined. It means I won't sing about bats, I won't compose poetry about bats, I won't dress up as a bat. All I will do is talk about bats. Yeah, I think that's right. But notice-- I think this one-- imagine that what I want to say is "I'm obsessed with bats, I'm sorry. And in fact, I-- but I'm going to do my best to behave normally at the party. So I'm not going to try to swim in the punch bowl, and I'm not going to scare the host's pets, and I'm not going to eat all the food. I'm only going to talk about bats, which is a little weird. But it's not as weird as some of the things that I could do." So if we're considering a bunch of verb phrases that have nothing to do with each other, I think one way to do it is by putting the emphasis on "bats," to say "I will only talk about bats." ("I won't try to eat the host's dog. I'll just talk about bats. Is that so bad? I like bats.") Yeah? I think this is true. Has everybody seen the video about the bats on the swing? Oh, OK. I'll try to find it and put it on the website. It has nothing to do with linguistics, but it's an awesome video. So it looks as though-- I think Faith is exactly right that if you put the stress on "talk," the alternatives are talk-- are to the verb. So I'll only talk about bats. I won't sing about bats, I won't do an interpretive dance about bats. But if I put stress on "bats," then we can consider alternatives to bats, but we can also consider alternatives to the whole verb phrase. 
And so this is the kind of thing, as I said, people who work on focus projection try to figure out what the heck are the rules for focus projection, because they're complicated. Yeah? AUDIENCE: Oh, it's just a question because I feel like my ability to interpret meanings has gone out the window. NORVIN RICHARDS: I understand. [LAUGHTER] AUDIENCE: What happens if you put the emphasis on "only?" NORVIN RICHARDS: On "only?" I will only talk about bats. I don't know, what does that mean? AUDIENCE: I won't do anything else. NORVIN RICHARDS: I won't do anything else. I will only talk about bats. AUDIENCE: I feel like it could call into question everything in the little brace. So you say "I will only talk about bats," and that could mean that you're saying as opposed to anything else about bats, anything else about-- or talk about any other-- NORVIN RICHARDS: Yeah. Yeah. Yeah. I think that might be true. I think that might be right. Yeah. More mysteries. Any other questions about bats, or "only," or focus projection? It's just a phenomenon that's out there. Yeah? AUDIENCE: Have we gotten across a sentence where stressing a particular word doesn't actually mean anything different? Because I feel you could stress any of the words in "I will only talk about bats." I will only talk about bats. NORVIN RICHARDS: That's true. So we've only talked about things that come after the "only." So I will only talk about bats. AUDIENCE: And not you or-- NORVIN RICHARDS: OK, first, let's forget about "only" for a second. If I say, I will talk about that, remember focus. So there's focus, which says let's consider all the alternatives to the thing that's focused. And now since we've talked about focus projection, we're bearing in mind the fact that what's focused is not necessarily what's loud. There's what's loud, which is contained in what's focused, and it might just be what's focused. But the thing that's loud could just be one part of the whole phrase that's focused. 
So forget about "only" for a second. If I say "I will talk about bats," there's focus on "I," and I think I'm inviting-- out of the blue, I think I'm inviting you to entertain the possibility that all the alternatives are false. So if I say "I will talk about bats," I mean "I will talk about bats and for some reason, the idea that somebody else might talk about bats is salient and it will not happen." So for example, if you ask me "Is Faith going to talk about bats at the party?" I can say, "No, I will talk about bats," and that means I will talk about bats and the salient alternative gives you a false sentence. That's what that means. So I think when we say "I will only talk about bats," that's similarly something you can use in that kind of situation. So if you ask me, "Will Enrico only talk about bats at the party?" I say, "No, no. I will only talk about bats." And that means I am the person such that the only thing I'll talk about is bats. Enrico is more normal than me, like most people. Yeah, Kateryna? AUDIENCE: What if the more natural way to say that sentence is "only I?" NORVIN RICHARDS: So what's complicating this is that it's possible to get a meaning that's sort of like "only," without an "only." That's why I started with "I will talk about bats," which means I will talk about bats and the alternatives to me won't. Maybe if we do one of these other ones. So "I will even talk about bats" means several things. It can mean "I will talk about bats in addition to the many other things I will talk about." It can also mean "You wanted somebody to entertain everybody at the party. I will juggle, I will dance, I will even talk about bats. So I will do all the many talented things that I'm capable of doing. People will be agog. You'll be glad you invited me." It can mean that. So putting emphasis on "bats" like that can either invite you to consider the alternatives to bats, or the alternatives to talk about bats. Yeah.
"I will even talk about bats," I think, can also mean that, right? That kind of thing. "I will even talk about bats" can mean "I will talk about bats, and so will all of the linguistics TAs. Various people will talk about bats, and I'm one." I will even talk about bats. John will talk about bats, Mary will talk about bats, I will even talk about bats. I think it can mean that. Faith? AUDIENCE: Aren't there even further divisions of variations of tone where, like, if you were to say "I will talk about bats." You can't say that if someone were like, oh yeah, you're going to talk about bats at this party. It just doesn't sound like an appropriate [INAUDIBLE]. NORVIN RICHARDS: "I will talk about bats." Yeah, so we actually-- did we touch on something related to this? I think we did. Yes, we did, because we were talking about pragmatics, and I was trying to get you to stop. Not you specifically, but the whole class en masse. You were all trying to force me to talk about pragmatics, and I was trying to stop you, and I don't know why I'm bringing this up now because we're about to do it again. And I was giving examples like if you ask me "How did people do on the test?" I can say, "Well, Mary passed." And then you get-- that means-- all I've told you is that Mary passed. And if we were all robots, you might have expected that all you would conclude from that is that Mary passed. So you start that conversation not knowing how people did on the test. That's why you asked me, "How did people do on the test?" I told you Mary passed, and now you know that Mary passed and you don't know anything else. You might have imagined that it would work that way, but it doesn't. So if you ask me "How did people do on the test?" and I say, "Well, Mary passed," there are lots of possible interpretations, but one is Mary passed and no one else did. So you do draw conclusions about everybody else partly from the fact that I failed to answer your question.
That's one of the kinds of things pragmatists talk about. But it's possible that the particular tune that I sing as I say "Mary passed," invites you to contrastively focus Mary. That you're saying-- I'm saying to you, in a sense, Mary passed, and all the alternatives to Mary, notice that I haven't told you anything about them, draw your own conclusions. That's what I'm doing. Sometimes called a contrastive topic. And I think that's connected with your example as well, maybe, or something similar going on. No? As you might tell, there's a huge can of worms right here, this work on-- a really fascinating can. I shouldn't call it a can of worms because that makes it sound disgusting. It's fascinating, really, really interesting. There's all this interesting work on intonation, the kind of thing Faith is asking about, and the semantics of these kinds of expressions. So how do you connect the games you get to play with the pitch of your voice and your loudness with other sentences that I want you to think about, sentences that I am not, in fact, saying, but about which I'm communicating something to you? This is all fascinating, difficult stuff that people work on. Cool. All right, we're back from bats to cookies. Any questions about cookies? Bats? "Only"? "Even"? OK. Yes, association with focus. And then this is reminding ourselves that there are also sentences where you just focus something. "I will give Mary three cookies" used when alternatives to Mary are salient, like "Who will you give three cookies to?" I'll give Mary three cookies. Or some of you just said, "Are you going to give John three cookies?" And I say, "No, I will give Mary three cookies." Or when you're making lists, like "I'll give Susan three cookies, I'll give John three cookies, and I'll give Mary three cookies." So I'm inviting you to consider various alternatives to Mary. That's the point of putting this extra focus on these words. And then languages vary a lot, actually, with respect to how they realize focus.
So there are languages like English that do it with gymnastics of the vocal tract. Your voice gets louder and softer, and higher and lower in pitch. There are other languages that tend to move the words around. So apparently, at least in some dialects of Spanish, a standard response to "Who bought the newspaper yesterday?" is literally "Yesterday bought the newspaper Juan." That is, you put the answer to the wh-question at the end. That's a normal way to answer these kinds of questions, although that's not the default Spanish word order. Or similarly, in Tagalog, if I want to say, "I'll only eat balut," you can't literally say that. So the last sentence on the slide is the attempt to translate word-for-word "I will only eat balut," [NON-ENGLISH]. I was doing some work with Tagalog speakers not too long ago trying to get them to accept this last sentence, and they were like, no, no, you can't say that. You have to say the first sentence, which is something literally like, "Balut is the only thing I will eat." So Tagalog speakers can't just say balut really loud and have that be the thing that associates with focus. They have to put it somewhere else at the beginning of the sentence in order to get it to mean things. Balut, if you remember, is this Filipino delicacy that involves a fertilized duck's egg, which is hard boiled before it hatches. Good source of protein, I'm told, though I've never had one. And then there are also languages-- this is Chickasaw, which is a Muskogean language spoken in Oklahoma, I think; it used to be spoken around Alabama, but I think the speakers were moved-- in which there are morphemes that you put on nouns that say this is the focus. We are just about out of time. There's one other topic on here, and I think we don't have time for the other topics. So I think we should stop here. Are there any questions about any of this, association with focus?
So please ask me any questions you have left about semantics because I refuse to teach you anything about semantics from now on. We will never talk about it again. That's probably a lie. I mean, I was telling you about syntax, and phonology, and morphology today, and those are things we thought we had left behind, too. It all kind of blends together. Any last-minute questions about semantics? All right, go out and enjoy the day, and we'll see you guys on Tuesday. |
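The association-with-focus pattern from the lecture can be sketched as a toy computation in the spirit of alternative semantics: focus on a word invites a set of alternative sentences, and "only" asserts that every non-uttered alternative is false, while "too" asserts the sentence and requires that some other alternative also be true. This is only an illustrative sketch, not a real semantic theory; the `facts` set, the alternative sets, and all function names are invented for the example.

```python
# Toy sketch of association with focus. Each focused word invites a set of
# alternative sentences, built by substituting other names for that word.

def only(prejacent, focused, alt_set, is_true):
    """'only' asserts the prejacent and denies every other alternative."""
    return is_true(prejacent) and all(
        not is_true(prejacent.replace(focused, a))
        for a in alt_set if a != focused)

def too(prejacent, focused, alt_set, is_true):
    """'too' asserts the prejacent and requires some other true alternative."""
    return is_true(prejacent) and any(
        is_true(prejacent.replace(focused, a))
        for a in alt_set if a != focused)

# Invented facts: I give cookies to Mary and to John, but not to Sue.
facts = {"I will give Mary three cookies",
         "I will give John three cookies"}
is_true = lambda s: s in facts

s = "I will give Mary three cookies"
print(only(s, "Mary", {"Mary", "John", "Sue"}, is_true))  # False: John also gets cookies
print(too(s, "Mary", {"Mary", "John", "Sue"}, is_true))   # True: John is the other recipient
```

Focus projection would complicate this: stress on "cookies" can signal alternatives to the whole phrase "three cookies," not just the word, so a fuller sketch would substitute phrases rather than single words.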
MIT_24900_Introduction_to_Linguistics_Spring_2022 | Lecture_14_Syntax_Part_4.txt | [SQUEAK] [RUSTLING] [CLICKING] NORVIN RICHARDS: Welcome back. Hope everybody had a restful break, spent lots of time doing syntax in their spare time. Let's do a little syntax now. I don't have any announcements. Are there any questions about the things that I'm not announcing, things that people are, like, wondering about, where we start? OK. So I just wanted to start with some review, just to remind everybody where we were. We were drawing trees like this one. So here's a tree for "I will tickle the child with the feather." It's a tree that has various kinds of information in it about, well, various things, including substrings that we think are treated as constituents for various kinds of syntactic phenomena. So this tree is meant to reflect the fact that this sentence, first of all, has a meaning in which I'm going to use a feather to tickle a child. That's the meaning that's been diagrammed here. And that if you perform certain operations that make it clear that this is the tree that you've got, like, topicalizing "the child," moving "the child" to the beginning of the sentence, "The child I will tickle with the feather," that's the only-- in fact, the only meaning that you can have is the meaning that goes with this tree. So does this all sound vaguely familiar? Are there questions about this tree? Is anybody looking at this tree and thinking, wait, which class am I in again? What is going on? Why? I'm going to talk about this tree in a little bit of detail today. But I just want to make sure there's nothing here that's shocking or distressing anybody.
So we have this idea that there's a special kind of relation that can hold between heads, so the smallest things in the tree, the things that just have words under them, things like the verb, a relation that can hold between those heads and other kinds of phrases in the sentence. So we've said that the verb "tickle" selects the object noun phrase, "the child." And what we mean by that is that the relation between the verb "tickle" and the object, "the child," is special in that not every verb can be followed by an object. So you can tickle a child. You can devour a child. You shouldn't. But it's grammatical. You can write a child. But there are other verbs for which you just can't do this. You can't "thrive the child." That doesn't make any sense. So there are some verbs-- classic way to say this is there are some verbs that are transitive and others that are intransitive. So in order to know whether you can have an object or not, whether there can be a noun phrase in that position or not, the sister of the verb, you've got to know what the verb is. The verb tells you whether it needs to have an object or not. So there's this relation of selection that holds between the verb and the object. The idea is when we look up words like "tickle" and "devour" and "write" in our mental lexicon, we'll see various things about them, including how they're pronounced, but also whether they can have objects or not, whether they select for noun phrase sisters or not. So when we ask, going back to the tree, why is the sister of the verb, the noun phrase "the child," one way of interpreting that question, the answer is well, it's because the verb "tickle" selects for a noun phrase sister. And so you give it one. Now, that's why the tree has that bit of structure. And we can tell that kind of story about a lot of parts of this tree. So similarly, we're going to want to say that the preposition "with" selects for a sister that's a noun phrase, "the feather." 
And for that matter, we'll probably want to say that the T, "will," is selecting for a verb phrase. That's why the T has a verb phrase as its sister. So there are various places in this tree where a head has a particular sister. And it's because the head is selecting for that as its sister. We also said that this relation of selection is something that heads can do-- they can select for properties of their sister-- and that sometimes, at least, they select more specifically than that. They select for sisters where the sister's head has a particular property. So I think the example I gave you last time was the verb "depend"-- where the verb "depend" needs a prepositional phrase sister, but it specifically needs a prepositional phrase sister in which the preposition is "on." So you have to depend on things. You can't depend from them or depend at them. You can't just have any preposition there. It has to be "on." So there are verbs like "depend"-- and there are many verbs like this-- that select for prepositional phrase sisters, but in particular, prepositional phrase sisters that have a particular preposition. Sometimes it's less particular than that. So I think in class I said-- (he said, vamping until he could get some chalk out)-- that the verb "put" seems to select for a noun phrase sister and prepositional phrase sister. So you put the book "on the table" or "in the refrigerator" or "under the car" or whatever, wherever it is you want to put the book. So the "put" has to have a noun phrase object, doesn't have to be the book. It can be any noun phrase. And then there can be a prepositional phrase. "Put" is not like "depend." There are a variety of prepositions that can be here. But you can't just have any preposition here. So you can't say "Put the book during the party." That's not a possible sentence. And we might hope to derive that from properties of the meaning of "put," right?
So the prepositional phrase that goes with "put" needs to specify a location, right? And "During the party" doesn't specify a location unless we're thinking of time as space, which as long as syntax is not Doctor Who, I think we're OK. Yeah? AUDIENCE: So for clarity, because Merge only is binary-- NORVIN RICHARDS: Oh, dear. [LAUGHS] AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Yeah. So Merge is only binary. And that means if we're going to have a tree for these kinds of sentences, we're going to need to put the verb "put." And since we want the noun phrase to be next-- that's what this seems to tell us-- we'll have a noun phrase here. And then we'll have a prepositional phrase. That's one possible answer to your question. If Merge is going to be binary-- if Merge is going to be binary, we're going to want to say, yeah, "put" is something that needs two things. We're going to have to give them to it one at a time. We can't give them both to it at once. Otherwise, if we wanted to do that, we would need to use ternary Merge, right? We'd need to merge three things at once. And so we'll merge them one at a time. We'll first give it the noun phrase. And then we'll give it the prepositional phrase. Why do we first give it the noun phrase and then the prepositional phrase? Why don't we say, "Put on the table the book"? We might actually get to that. But we're not going to get to it today. So this is one possible answer to your question. Another possible answer to your question, of course, would be to say, ah, we've discovered that we were wrong to say that there's binary Merge, right? That here's a place where we need there to be ternary Merge. This is going to interact with things we're going to want to say about how selection works. So I have now a couple of times said a thing gets to select for the properties of its sister, right? And here it's selecting for properties of its sister. But this is not its sister, right? This is something else. It's-- never mind.
I can't figure out the family relation between these two points on the tree. It's-- what is it? It's its aunt, yeah. The prepositional phrase is the aunt of the verb. That's not how actual syntacticians talk about this kind of thing. So they're not sisters. So two kinds of things we can say. Oh, what we're discovering is that we need ternary Merge. We need these to both be sisters. Another would be to say, oh, we're discovering that selection isn't necessarily for properties of your sister. It's something more like you select things. The first things that you merge must be things that you select. We're going to circle around and talk about that in a second. So to say something more elaborate about how you select. So that's a really good question. What we really need to do is develop tests that will tell us whether this is the right tree for something like this, or whether we would rather have something like this, where there's a verb and a noun phrase and prepositional phrase, and Merge doesn't have to be binary. Just to cheat a little bit, what we would find out when we develop these kinds of tests is that this is not the kind of tree that we want. We want trees that are like this. But we're going to have to do some work to develop those tests. Raquel? AUDIENCE: I have a scary thought. NORVIN RICHARDS: Oh, no. AUDIENCE: The thought is like-- so about selected-- NORVIN RICHARDS: Yeah. AUDIENCE: I feel like there are some verbs, like, you say "board." Like you board a ship. You kind of have, like, a secret selection inside of it because you could think of "board" as like get on the ship. NORVIN RICHARDS: Oh, I see. AUDIENCE: And so you'd be like, ah, "get" is selecting for on the ship, a certain type of prepositional phrase. NORVIN RICHARDS: Mm-hmm. AUDIENCE: Or you could have just "board," which kind of includes it inside of it. 
Like, also in Spanish, if you have words like "buscar," which is, like, "to look for," and so it's, like, could you think of "put" as a verb that maybe-- like if English had been different, you could have a verb like "You put the refrigerator." Everyone knows that you mean "put in the refrigerator." NORVIN RICHARDS: Yeah. I see what you mean. So you're pointing out something important, which is that-- here, let's do "look for." But in English, we're going to want to have a verb "look" that selects for a prepositional phrase, where the preposition is "for." But that there are other languages like Spanish in which there's a single verb that means "look for." For that matter, in English, there's a verb "seek." So it's kind of dramatic. But if I go into a bookstore and say, "I'm looking for a book," that's the normal thing for me to say. I could also go into a bookstore and say "I am seeking a book." They would probably direct me to the fantasy, science fiction section if I did that. But those are both two ways to say the same thing. So maybe the way to think about this is, yeah, Spanish doesn't have a verb that means "look for." It has a verb that means "seek." And-- there. Yeah. Or maybe another way to say this-- and this is going to be important as you guys are doing your work with the languages that you're working on-- is that you shouldn't necessarily expect the selectional properties of a verb to be the same from one language to the next. That if you are inclined to think of Spanish-- is it "buscar"?-- as the equivalent of "look for," that we want to think of that as-- as-- that English has a verb "look" that selects for a prepositional phrase. Spanish has a verb "buscar" that's like "seek." It selects for a noun phrase complement. Yeah. So you want to be careful as you are going from language to language. You can't trust selectional restrictions. And indeed, so I've used "put" as the example of a verb that has two things that it seems to select. 
But there are probably examples of verbs that select two things, where the two things are, say, both noun phrases. So take a verb like "give the children books." So here we want there to be two noun phrases coming together with "give." And "give" selects those. It's a property of "give" that it can have two noun phrases after it. And not every verb can do that. "Put," for example, can't. So even verbs that are fairly similar in their meaning, like "donate"-- so you can't "donate the children books." You have to donate books to the children. So yes, as so often in this class, I'm showing you the easiest examples here. And as so often in this class, you guys are immediately saying, wait, wait, what about the complicated examples? So, yes, there are more complicated examples. We'll try to get to them, cases where you're selecting more than one thing. We'll come back to that. We'll talk about it more now. Yes? AUDIENCE: So for give the children books, how would you continue that? [INAUDIBLE]? NORVIN RICHARDS: So if we use the technology that we have right now, we're kind of compelled to have a tree like this one. I wrote this as capital N because I knew I was about to write a noun phrase. "Give the children books." So where "give" needs to-- to combine with two noun phrases. And Merge is binary. And so we'll do binary merge twice. That's a kind of tree that we could draw with the techniques that we have right now. I hope I hedged enough in there to make you feel emotionally grounded enough that if it turns out later that that's not the right tree, that you won't be too disappointed because we're going to talk more about this stuff. And our understanding will get more sophisticated. But given everything I'm telling you right now, that's the tree we expect. Yes? AUDIENCE: Do these trees have to have the words in sentence? NORVIN RICHARDS: I'm sorry. I missed the middle of that. AUDIENCE: Do these trees have to have exactly each and every word in each sentence? 
Or could it fill in some words? For example, I learned in elementary school the subject of every command is "you." NORVIN RICHARDS: Oh, mm-hmm. AUDIENCE: Could you have "you" in parentheses in one of these trees? NORVIN RICHARDS: Oh, I see. AUDIENCE: "Give (to) the children books." NORVIN RICHARDS: Yeah. Oh, I see. Oh. That's a really interesting and difficult question. The short answer is yes. There absolutely are places where we're going to want to say I'm showing you trees in which I'm not showing you trees anymore. I'm showing you a nebula. Give me just a second. I'm showing you trees in which every terminal node, every head has one word under it. And you're wondering, could there be trees where there are nodes that don't have anything under them or don't have anything pronounced under them? And indeed the short answer is yes, there surely are going to be such trees. Your example is a good one. Do I want to go off and talk about that? Yeah, let's talk about that for just a second. We talked a little bit before-- we're going to come back to this later. But we talked a little bit before about-- yeah, we can do this fast-- constraints on things that pronouns and names can refer to. So for example, we talked about the fact that if you say, "She likes Sally," that's a perfectly fine sentence. But it can't mean that she and Sally are the same person, right? So "she" has to refer to somebody other than Sally. And we developed a rule that has that consequence. And I promised you that we would talk about that rule in more depth later on. Here's another kind of case. If you say something like "Sally defended herself," there's this special expression "herself." And it has to refer to Sally, right? So "Sally defended herself" means Sally defended Sally. So "herself" refers back to whoever the subject is. If this had been "Mary defended herself," well, it would have to be Mary defending Mary. 
If it had been, "I defended myself," well, then you have to use this special form of these words, these "-self" words. They're called reflexives. These reflexives have to refer back to something else in the sentence. In these examples, they always refer to the subject. And they have to do that, right? So you can't say things like, "I defended herself" or "himself," right? These are out, yeah? So the reflexive that's at the end of these sentences has to refer back to the subject in these cases. And the form of the reflexive tells you something about what the subject is. So you get, it's got-- so-- hmm. You get "herself" when the subject is feminine and third person. And you get "myself" when the subject is first person. And if it were "you defended," it would be "defended yourself," yeah? So this reflexive is telling you that it's referring back to "you," yeah? This means you defended you, yeah. I'm telling you all this because your example from elementary school is a really good one. If I tell you, defend yourself, it's a command. Well, everything that we've now learned about these reflexives leads us to hope that there's a "you" in that sentence because, well, that's what reflexives seem to do. They refer back to the subject. And they have a form that tells you something about what the subject is. In this case, they tell you that the subject is "you," which is what you were taught in elementary school, yeah. So you say "Defend yourself!" And that's fine. And you cannot say "Defend himself!" or "Defend myself!"-- right? Just as you cannot say you defended-- you can say you defended yourself, but not "You defended myself" or "himself" or "herself," yeah? So indeed, there are some good reasons to think that whoever told you that there's a "you" in this sentence was right. You can detect the "you" by using these reflexives, yeah. So there are, indeed, words that are not pronounced.
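The reflexive-agreement facts just walked through can be sketched as a tiny feature-matching check: a reflexive must match its subject's person and gender, and an imperative has an unpronounced "you" subject that the reflexive can still detect. The feature table and function names here are invented for illustration, not a real grammar.

```python
# Minimal sketch of reflexive agreement, following the lecture's examples.
# Each pronoun or name maps to (person/number, gender) features.

FEATURES = {
    "I": ("1sg", None), "you": ("2", None),
    "Sally": ("3sg", "fem"), "Mary": ("3sg", "fem"), "John": ("3sg", "masc"),
}
REFLEXIVES = {
    "myself": ("1sg", None), "yourself": ("2", None),
    "herself": ("3sg", "fem"), "himself": ("3sg", "masc"),
}

def reflexive_ok(subject, reflexive):
    """A reflexive must match its subject's features.

    subject=None models an imperative like "Defend yourself!", whose
    unpronounced subject is "you"."""
    subj = subject if subject is not None else "you"
    return FEATURES[subj] == REFLEXIVES[reflexive]

print(reflexive_ok("Sally", "herself"))  # True:  Sally defended herself
print(reflexive_ok("I", "herself"))      # False: *I defended herself
print(reflexive_ok(None, "yourself"))    # True:  (you) Defend yourself!
print(reflexive_ok(None, "himself"))     # False: *Defend himself!
```

The imperative cases are the point: the checker only gets "Defend yourself!" right and "Defend himself!" wrong if the silent subject really is "you."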
And if we were going to draw a tree for defend yourself, we'd probably want to say, yeah, there's a "you" in subject position. And there would have to be a rule saying don't pronounce that particular "you." So the short answer to your question is, yes, there are parts of trees that are not pronounced. And we may get a chance to talk more about it. All right. That was a tangent. Any other questions? Take us back to this main line here. Where are we? Yeah, so selection. Heads select for properties of their sisters, although several of you want to know what happens when they seem to have more than one thing that they select. And I'm trying to-- well, I'm not exactly ruthlessly suppressing that question. But I'm telling you what we currently seem to have to say about it and maybe flagging the possibility that we'll have to say different things later. So far so good? So, yeah, right. And when selection is for something specific-- so this was the point I was trying to make with this slide. When a verb selects for something specific, a verb can, for example, say I need a prepositional phrase. And I need the preposition to be "on." Or I need a prepositional phrase-- this is like "put." I need a prepositional phrase. And I need the preposition to be something locative. It needs to describe a location, yeah. So "depend" selects for a prepositional phrase with the head "on." We're never going to find, for example, a verb that selects for a prepositional phrase-- and I don't care what the preposition is, but the object must be "tomatoes." We won't find anything like that. So verbs select for their sisters-- sisters in all the cases I'm going to show you. They select for their sisters. But what they select specifically is sisters with a particular head. They don't say things like, I select for a sister whose complement must be this or which must contain a tomato or which must be modified by an adverb. Heads don't do things like that.
They select for properties of their sisters, specifically the heads of their sisters. And once we know that-- this is what I said in class last time-- we can use that fact about selection to detect other heads, so other kinds of things that seem to be in this selection relation with other words in the sentence, other heads in the sentence. So when we see other cases where there's a particular word in the sentence whose value seems to be determined by another word in the sentence, we get to think, oh, OK, maybe that's a selection relation. This was the example I gave you last time. Verbs seem to be able to select for properties of the clause that follows them. So there are verbs like "think" that can be followed by a clause. Not every verb can be followed by a clause, right? So "I devoured the pizza." "Devour" can be followed by a noun phrase. But "I devoured that I have won the lottery" doesn't make sense. So this looks like a selection relation. And specifically, it's a selection relation that's picky about these words that introduce the clause that come at the beginning of the clause, words like "that" and "whether." So "think" can be followed by a clause where the word that introduces the clause is "that." You can say things like, "I think that I have won the lottery." "Wonder" can be followed by a clause where the word that introduces the following clause is "whether," as in, "I wonder whether I have won the lottery." And these verbs can't be switched. So you can say "I think that I have won the lottery." You cannot say "I think whether I have won the lottery." You can say "I wonder whether I have won the lottery." You cannot say "I wonder that I have won the lottery." And I was saying this looks kind of like "depend" and "on". "Think" needs a clause that starts with "that." "Wonder" needs a clause that starts with "whether." "Depend" needs a prepositional phrase that has "on" in it.
And just like with "depend" and "on," we said, yeah, "depend" is selecting for a prepositional phrase with a particular head. We get to say, oh, OK, apparently "that" and "whether" are the heads of the clause that's getting selected by the verb. So "think" and "wonder" are selecting for clauses that are headed by words like "that" and "whether." We have a word for words like "that" and "whether"-- we call them "complementizers." I think I apologized for that word last time. I'll just apologize for it again because it's probably not the word you learned for those things in elementary school or high school. You might have heard them called conjunctions, so "subordinating conjunctions," lots of things people call them. But in linguistics, we call them complementizers. So I'll have to ask you to join me in calling them that. So the abbreviation for complementizer is C. And the phrase that is the sister of verbs like "think" and "wonder" is a complementizer phrase, a CP. It's a phrase that's headed by "that" or "whether," these complementizers. So I'll review. Hopefully it sounds at least vaguely familiar. So now here we are back at the original tree. "I will tickle the child with the feather." We've talked a lot about selection relations. We've said that selection relations are relations between heads and phrases, in many cases, the sister of the head. And in particular, when selection relations are picky, the pickiness is about the head of the phrase that's getting selected. So we've seen examples of that now with prepositional phrases and now with Complementizer Phrases, CPs, not clauses. But-- and this is near the end of what we did last time-- there is more to life than selection, not much more, but a little more. So we don't want to say that "tickle" or "child" is selecting "with a feather," right?
So when I told you that tickle is selecting the child, one of the ways I tried to convince you of that was to say, yeah, not every verb can be followed by an object. There are transitive verbs and intransitive verbs. That's the kind of thing that makes us think we're looking at a selection relation. But any verb can be followed by "with the feather." You can do anything with a feather. You can tickle a child with a feather. Or you can devour a child with a feather. You can write a novel with a feather. Or you can thrive with a feather. Feathers, they're just very, very flexible tools. You can do anything with them. Again, some of these things make more sense than others. So it's not clear what you mean when you say that you will thrive with a feather. Or for that matter-- well, it's actually fairly clear what you mean when you say you'll devour a child with a feather. It just doesn't bear thinking about too closely. But basically you can do anything with a feather. There's nothing like verbs are transitive and intransitive. So this particular prepositional phrase, this sort of instrument, can go together with anything. And so we don't seem to want to say that there is a selection relation between this prepositional phrase and anything at all. So there's a distinction that we draw between what are called arguments and what are called adjuncts. So arguments, like "the child" in "I will tickle the child with a feather," are things that are selected by something, in this case, the verb. And the adjuncts are just phrases that wandered in. They're not selected by anything. They're just there because they want to be. They're not selected by anything. You get to put them in because you feel like it. They're often modifiers sort of like this one. So there are arguments. And then there are adjuncts. How do you tell whether something is an argument or an adjunct? I think I warned you last time this is something that people often get confused by. 
And so I want to be clear about it. Again, when we decided that "the child" is selected by "tickle," what we were doing was saying not every verb can be followed by an object. So if you're going to have a child right after the verb, you have to know what verb it is. There are transitive verbs and intransitive verbs. And some verbs are OK with a following object, and others are not. That's the signature of a selection relation. So the arguments are picky about which heads they can combine with, and adjuncts are not. What people sometimes get confused about is thinking that it's the opposite, that the pickiness goes in the opposite direction. So people think here's a verb that needs to have an object. And I've probably said things that led you down this garden path. So there are indeed verbs that are obligatorily transitive. I think the example I gave before was "devour." So "The dragon devoured pizza"-- because we've devoured enough children for one class, possibly too many-- "The dragon devoured the pizza." "Devour" actually has to combine with an object. Not only is this a selection relation because not just any verb can have an object, but actually, "devour" has to combine with an object. It's ungrammatical without the pizza. So if I take out the pizza, it becomes star, yeah? So "The dragon devoured," that's not a sentence. So that's an argument, which is actually obligatory. "Devour" has to be transitive. But there are plenty of verbs that-- in fact, it's more common for verbs to be optionally transitive. So "devour" needs to have an object. But "eat," which means almost the same thing as "devour," the transitivity is optional. You can say "I ate an apple." But you can also say "I ate." Obligatorily transitive verbs are actually kind of rare. And the best ones are kind of violent. They are things like "eviscerate" and "devour" and "mutilate." Those are the really clear transitive verbs. They're-- very handy in syntax classes. Yes?
AUDIENCE: So are we suggesting the fact that since [INAUDIBLE] property of a feather, or is that any adjunct can modify? NORVIN RICHARDS: We're talking as though any adjunct can modify anything. And so this is another one of those places where we have to carefully distinguish between syntax and semantics. There are surely things that it's very difficult to imagine anybody doing with a feather, right? But those kinds of cases-- so what's an example? "She proved Fermat's last theorem with a feather," right? I don't know how she would do that, right? But that's a math problem. That's not a syntax problem. So the sentence is grammatical. We just can't figure out-- I wouldn't be able to draw a picture of it. Does that make sense? So, yeah, what we're saying is that that prepositional phrase can combine with anything in principle. Though there may be kinds of combinations that will give us meanings that are kind of hard to understand. Yeah, Joseph? AUDIENCE: [INAUDIBLE] stylistically common that violent words-- NORVIN RICHARDS: --are the most transitive ones? I think it is, yeah. Yeah, there's something about violence and transitivity, yeah. Something to think about as you're working on your languages, if you want to. So arguments are picky about what heads they combine with. But there are-- to look at things from the other perspective, if you're asking for a particular head, if you see a head and it's followed by a phrase, the question you ask yourself is not "Is that phrase optional?" People often ask that question. And it's a question that gets you in the wrong direction. There are plenty of examples of things that are, in fact, arguments, but they're optional. And then I think this is the last thing we did last time. I was trying to illustrate arguments and adjuncts to you. And we were in the middle of deciding on the boat. So let's decide on the boat again, fairly quickly.
So what we decided was that the sentence "I decided on the boat" can mean at least two things. It can mean either I had some decision to make, and I made it while I was on the boat, right? Or it could mean I chose the boat, right? I was trying to choose between the boat and something else. And I chose the boat. And what we said was, if we ask, "The prepositional phrase 'on the boat,' is that an argument or an adjunct?" the answer to that question is yes. It can be an adjunct. "I made my decision" when it has the-- when the sentence means "I made my decision while I was on the boat," then "on the boat" is an adjunct. You can kind of see why. Just like you can do anything with a feather, you can do anything on a boat, right? So any phrase you can modify it with "on the boat." This particular prepositional phrase, then, is an adjunct in that reading, yeah? Have to be careful because, of course, if I say things like "It depends on the boat," or for that-- well, let's use that one, it depends on the boat. This is a case where there is a selection relation between "depend" and "on," right? We've been talking about that before, that "depend" selects for a prepositional phrase for which the head is "on," yeah? So you can't just look at a prepositional-- if someone asks you out of the blue, is "on the boat" an argument or an adjunct-- in a way, that's the point of this slide too-- the answer is, wait, what's the rest of the sentence, right? Because you have to find out what the relationship is between that prepositional phrase and the other words that are around it. So in the reading where "I decided on the boat" means "I made my decision while I was on the boat," "on the boat" is an adjunct. It's just describing the location where the rest of the sentence took place. And any sentence can take place on the boat, maybe.
As opposed to the reading where it means "I chose the boat," there it's an argument because, again, just the fact that "decide on" means choose is kind of an idiosyncratic property of the verb "decide" and the preposition "on." It's like if you look up "decide" in the lexicon, it's going to have there, we're going to have to list the information, that it can optionally select an argument that starts with "on" and that the result of that combination means "choose." If you think about other things that mean something pretty similar to "decide," like "make up your mind," so "I made up my mind on the boat" only means the first thing, I think, right? "I made up my mind on the boat" means "I made my decision while I was on the boat." It doesn't mean "I chose the boat," right? I made a choice between the boat and something else. And I chose the boat, right? So it's not a property of expressions that mean "decide" that they always combine with "on" to mean this. It's an idiosyncratic property of "decide." It's kind of like we have to list for particular verbs that they're transitive or that they're not. We have to list for the word "decide" that it combines with "on" to give you a meaning with something like choose. I'm sorry. I'm talking over you. AUDIENCE: Is this the same effect as "look," where you "look up"? NORVIN RICHARDS: Yes. Yeah, that's another good example. So you can "look up a reference." So you can "look up." And that "up" is probably an adjunct. You can do anything up. But to "look up a reference," the fact that that means "look through some stuff and find it," that's an idiosyncratic fact about looking up. That's another good example of an argument. Yeah, nice example. Yeah? OK. And then-- I think this is where we stopped-- I asked you to think about sentences like "I decided on the boat on the plane." And I asked you to suppress your natural urge to make things more complicated and to think of alternatives.
So consider situations in which there's only one boat and only one plane and neither of them is on the other. There are two things that you could imagine this meaning. It could mean I chose the boat while I was on the plane. Or it could mean, in principle, I chose the plane while I was on the boat, if we've decided that "decide" and "on" can combine in these ways. But I think the fact generally was that it only has one of those readings. Is that the reading that people have? I'll write the readings down. One is chose the boat while on the plane. And the other was chose the plane while I was on the boat. Yeah? How many of you can get it to mean this first thing? How many of you can get it to mean the second thing? A few of you. The few of you who can get it to mean the second thing, please meet with your classmates after class and get them to convince you, or maybe you can convince them that they're wrong. I think we talked about this a little bit in class, possibly, that, for me, at least-- and people should stop me if this is not true-- I can get it to mean the second thing, but I need an opera score. I have to say it in a particular way. I have to say something like "I decided, on the boat, on the plane." There have to be commas. And I have to kind of squash "on the boat" a little bit in order to get it to have this second meaning. Those of you who were considering raising your hands the second time, is that true of you? Or are you people who can just say, "I decided on the boat on the plane," and mean this? Some of you are raising thumbs and nodding and things like that. Is there anybody who-- that thing that I just said when I pronounced it in that particular way, is there anybody who did not think, oh, yeah, that's the way I was pronouncing it in my head when I said that I could have it be this way? Let me say it again. Is there anybody who, if you say, "I decided on the boat on the plane"-- saying it that way-- can it mean this, the second thing? Yes? 
AUDIENCE: I feel like that way, the same sentence is more neutral than suggesting. I still have to say it in a different way to get the second meaning, but when I say like that, I don't immediately get the part [INAUDIBLE].. NORVIN RICHARDS: Oh. Oh, I see. How do you have to say it in order to get the second reading? AUDIENCE: I absolutely didn't think about that. NORVIN RICHARDS: Oh, OK. Fair enough. Fair enough. This is one of many places where I think I'm going to be content with the fact that all of you raised your hands when I asked whether it could mean this and only a few of you raised your hands when I asked whether it could mean this. I think maybe the ones of you who can get it to mean this are doing it by pronouncing it in an especially interesting way, which is itself an interesting fact. There is-- I think I have alluded to this a couple of times-- a topic of study that people work on, trying to figure out what the rules are for how the pitch of your voice rises and falls as you speak and where you put in pauses and so on. It's the study of what's sometimes called prosody. It's very complicated and hard, but it's really interesting. So maybe the short thing for me to say is, on the prosody where there's nothing special, where we're not doing any pausing or downgrading of anything, where we're just reading this straight through, I think everybody prefers this to this. Let me say that for the time being. So "I decided on the boat on the plane." When you've got these two "on" phrases after "decide," the sort of tendency anyway, unless you do fancy prosodic things, is to have the first one be the argument and then the second one be the adjunct. So what we're finding out, then, is that if you've got a head and it has both an argument and an adjunct, the argument is closer to the head. We're going to come back to that.
But a way to think about it-- and we'll come back to it and say this more formally in a second-- it's as though-- I think I have a tree that I can make use of here. It's as though, if you're going to have a verb like "decide" that's going to have two prepositional phrases after it-- and forget what these are for now, and also we'll have "on the boat" and "on the plane." So you've got "decide." And it's going to have two prepositional phrases after it. And "decide" optionally selects for a prepositional phrase with "on" as the head. And what we're seeing is it's as though-- if you pick the version of "decide" that's like-- if you decide to have "decide" select for something, it's as though, when you're merging these two prepositional phrases to projections of the verb, you need to merge the one that's selected first. There's some kind of urgency about this selection relation. You don't just get to freely decide what order to merge these two prepositional phrases in, the argument and the adjunct. You need to hurry up and give it its selection requirements. Something like that. And we'll come back and talk more about that. Incidentally-- well, who cares? Incidentally, that way of talking about things-- no, sorry. Let me wait until we get more formal about it to make the next point. The next point is that this may help us some with the kinds of cases that got me to write that tree in the first place, the kinds of cases where a head seems to select for more than one thing. So we'll come back to that. So that's where we were last time, I think. [LAUGHS] Does anybody have any questions about any of that? This has been an attempt to review because I know you've had a week during which you might not have spent every waking moment thinking about syntax. I don't know why, but I guess young people today, they have their priorities. Are there questions about this? Are there things that aren't clear? OK. Yeah? 
AUDIENCE: I guess it seems to me that "decide" and "decide on" are almost like two different verbs. NORVIN RICHARDS: Yeah. AUDIENCE: So why couldn't you merge "decide" with "on" and have that be a verb phrase and the argument of that is [INAUDIBLE]? NORVIN RICHARDS: Yeah. Yeah. Yeah. So you're wondering for "decide on the boat," why not do "decide on" and then combine that whole thing with a noun phrase, "the boat," something like that? AUDIENCE: Yes. NORVIN RICHARDS: Yeah. Notice that these are making different claims about whether "on the boat" is a constituent or not. And so, for example, one of the constituency tests that we fooled around with was the one where I'm amazed by something that you've said, and I repeat part of what you've said in amazement. So if you say, "I decided on the boat," one thing I can surely say is, "The boat?" But I think I can also say, "On the boat?" AUDIENCE: Does that switch "up the reference"? NORVIN RICHARDS: So I don't think so, right? "Up the reference"-- that's a good example. "Look up the reference." If I'm amazed when you say, "I looked up the reference," if I say, "Up the reference?" that's weird. I have a lifetime's experience being weird. And I think that's worse than "I decided on the boat," "On the boat?" I think. But, yeah, that's the claim. Notice there's another difference between "look up the reference" and "decide on the boat." You can also say, "look the reference up." You cannot say "decide the boat on." So we probably don't want these to be the same. It's another reason to think that they're different. If you're familiar with German, German has a zillion particles like this, where you're putting things between the verb and the separable prefix, they're called. Yeah? OK. So now, hopefully I've got you willing to entertain, at least temporarily, the possibility that there are things that we can describe as arguments and adjuncts. Arguments are the things that are selected by other heads. 
Adjuncts are phrases that just wandered in by mistake. Nothing is selecting them. They're just here because they want to be. They're not here because they're satisfying any needs of anything at all. They're parasites, basically, is what they are. They end up in here inside the clause. And I've shown you one kind of test for them, which is that if you have a head that is combining both with an argument and with an adjunct, it looks like there's at least a tendency for the argument to want to be closer to the head than the adjunct. So "decide on the boat on the plane," the argument-y reading, the choose reading, goes with the first one. That means chose the boat on the plane. I want to show you another test for arguments versus adjuncts. Here's another sentence. "I decided on the boat, and Mary decided on the plane." That can mean at least two things, I think-- possibly four, but at least two. One thing that it can mean is I chose the boat, and Mary chose the plane. Another thing it can mean is Mary and I both had decisions to make. We had to decide whether to major in linguistics or not. And I made my decision while I was in the boat, and she made hers while she was on the plane. So it could mean that, too. I guess it could also mean I chose the boat, and Mary decided to become a linguistics major while she was on the plane? Maybe? I'm not so sure about that. But anyway, it can mean at least those two things. Now consider this sentence. "I decided on the boat, and Mary did so on the plane." Does that mean two things? I don't think so. What does that one mean? Joseph? AUDIENCE: That's the example. I decided to become a linguistics major while I was on the boat. And Mary chose not to, but she [INAUDIBLE].. NORVIN RICHARDS: Yeah, exactly. So I made the right decision on the boat, and Mary made the wrong decision on the plane. Yeah, it can mean that. Yes. So there's a phenomenon called VP-pronominalization. 
That is you're taking the VP "decide," and you're replacing it with "do so," which is kind of like a pronoun. It's like a pronoun in that it's an expression that can stand in for lots of different kinds of verb phrases. So just like a pronoun like "she" can stand in for any female person, "do so" can stand in for lots of kinds of verb phrases. And what we're learning is that if you do that, the phrase that's outside of "do so"-- "on the plane"-- can only be an adjunct. It can't be an argument. And there are various ways we could think about this, but here's one. We could say, look, this is the argument. This is the adjunct. We said there's a special relation of selection between "decide" and the argument that doesn't hold between "decide" and the adjunct. The adjunct is just here because it wants to be. So there's a selection relation between these two things. And then the adjunct is just something you merge because, well, you've got it, and you can merge it. That's what we've been saying. And maybe what we're learning here is that if you look up "decide" in the lexicon, you're going to see that it has the option of selecting for a prepositional phrase headed by "on." But if you look up "do so" in the lexicon, well, it doesn't select anything. There aren't any selectional relations with "do so." So "do so" is kind of blank. It just means there's a verb phrase, and look around at the rest of the sentence to figure out what the verb phrase is. But there's a verb phrase. So it's not doing any selecting. So if you see a prepositional phrase after "do so," it's not an argument of anything. It had better be an adjunct. I guess that's what we're learning here. As I told you, this is another test for arguments and adjuncts. If you do VP-pronominalization-- if you replace the verb phrase with "do so" and you have a phrase left over, then that thing had better not be an argument. It had better be an adjunct. 
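[The selection facts behind the "do so" test can be sketched computationally. Below is a minimal Python sketch of a toy lexicon in which each head lists the heads it selects for; "do so" lists nothing, which is exactly why any phrase following it must be an adjunct. All entries and function names here are invented for illustration, not part of any real grammar or parser.]

```python
# Toy lexicon: each head records the (category, head) pairs it can select.
# Illustrative entries only -- not a serious grammar fragment.
LEXICON = {
    "depend": {"selects": [("P", "on")]},       # PP headed by "on"
    "think":  {"selects": [("C", "that")]},     # CP headed by "that"
    "wonder": {"selects": [("C", "whether")]},  # CP headed by "whether"
    "decide": {"selects": [("P", "on")]},       # optionally: "decide on"
    "do so":  {"selects": []},                  # a pro-VP: selects nothing
}

def can_be_argument_of(head, phrase_category, phrase_head):
    """A phrase can be an argument of `head` only if the head's lexical
    entry selects for a phrase with that category and that head."""
    entry = LEXICON.get(head, {"selects": []})
    return (phrase_category, phrase_head) in entry["selects"]

# "I decided on the boat": the PP can be an argument of "decide"...
print(can_be_argument_of("decide", "P", "on"))  # True
# ...but a PP left over after "do so" can never be an argument,
# because "do so" selects nothing -- it had better be an adjunct.
print(can_be_argument_of("do so", "P", "on"))   # False
```

[On this toy model, "I wonder that I have won the lottery" fails for the same reason: `("C", "that")` is not in the entry for "wonder."]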
The action with arguments and adjuncts so far has been with prepositional phrases. We've talked as though direct objects, for example, are always arguments. And so what we expect is that a direct object will never be able to be left over like this. That is, you won't be able to say, "I chose the boat, and Mary did so the plane," because "do so" doesn't select for anything. Direct objects are always selected for. You get them just if the verb is a transitive verb. And "do so" isn't a transitive verb. It's almost not a verb at all. Does that accord with people's intuitions, that you can't say that? Yeah. OK. In fact, I'm going to star it before anybody gets the wrong idea. Bad sentence. Bad sentence. No biscuit! Yeah? OK. All right, so-- yeah? AUDIENCE: You could say, "I chose the boat and Mary the plane." NORVIN RICHARDS: Yes, good example. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Yeah. So, "I chose a boat and Mary a plane," or "I decided on a boat and Mary on a plane." Can you say that? "I live in Somerville and Mary in Allston." You can have prepositional phrases in this construction, but maybe not selected ones. Ugh. Because "I decided on the boat and Mary on the plane," that feels kind of adjunct-y to me. I don't know about you guys. AUDIENCE: It works, but it's still vague. NORVIN RICHARDS: Works but it's still vague? Is that what you said? AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: It could mean either one? AUDIENCE: Either one. NORVIN RICHARDS: OK, good. I was sort of hoping that was true because I don't know any reason why it wouldn't be able to mean either one. This is a really interesting phenomenon. There's a lot of work on it. It's called gapping. And dissertations have been written. [LAUGHS] It's interesting stuff. Yeah? AUDIENCE: When you first read the sentence, I actually interpreted it as "I chose the boat and Mary did so on the plane." NORVIN RICHARDS: Oh, no. What did Mary do on the plane? AUDIENCE: She chose the boat.
NORVIN RICHARDS: Oh, she chose the boat on the plane. Oh, yes. Well, that would be OK. [LAUGHS] Phew. I thought you meant that she chose the plane. Yes, no, that's-- no, that's absolutely right. OK, phew. Yeah, that's a nice example. "I decided on the boat." Oh, here-- we can use this tree. "I decided on the boat and Mary did so on the plane." I was taking that to mean "decide" in the first clause is taking an adjunct, "on the boat." And in the second clause, well, "do so" is just replacing "decide." And "on the plane" is another adjunct. You're saying, yeah, "on the plane" is an adjunct all right. But "do so" is replacing the entire verb phrase "decide on the boat," where "on the boat" is an argument. And yes, that's another reading that it ought to be able to have. I was just ignoring that reading. Thank you for pulling that reading out and making me pay attention to it. You're absolutely right. Yeah? Cool. All right. So arguments versus adjuncts-- "decide," "on the boat," "on the plane." "On the boat" is an argument. This means I chose the boat on the plane, at least on the most natural prosody. And we've now seen two ways of distinguishing arguments from adjuncts. One is if you have anything left over after VP-pronominalization-- anything that's outside, "do so." Anything that's still there had better be an adjunct and not an argument because "do so" doesn't select for things. And if you have both an argument and an adjunct, the argument tends to be closer to the verb than the adjunct. Those are the two tests that I've given you so far. And then here's a mini-constituency test that's meant to get you to take seriously the possibility that there is a constituent, "decide on the boat." Look, you can coordinate that constituent with another constituent of the same kind. You can say, "Mary will decide on the boat and read a novel on the plane." That's fine. So, terminology break-- the lower thing is the complement. It's the sister of the head. 
So "complement" is a name that we use for sisters of heads which, as we've seen, is a position that's kind of privileged for arguments, for things that are selected. So if there's only going to be one thing that can be-- if there's a competition between an argument and an adjunct to be the complement, well, it's the argument that's the complement. That's the thing that gets to be the sister. That's what we're seeing. So here's the principle that I introduced informally a second ago and I'll now say a little more formally. If you've got a head that's selecting for an argument, what we're seeing is you need to take care of the selection requirements of the head first before you do anything else. You must first eat your broccoli before you can have your dessert. You have to take your core classes before you can switch to taking all linguistics classes. That's the way life is. So if you're going to read books quickly, "read" is a transitive verb. It can combine with a noun phrase. And so you're going to want to give it its noun phrase first. That's the first thing you'll merge with "read." "Quickly" is an adjunct. You can do anything quickly. It's not getting selected by "read." We won't find verbs that can combine with "quickly" and verbs that can't. And so, yeah, you need to satisfy the needs of "read" first before you go merging "quickly," which is just kind of an optional thing. And that's why the story is going to be-- we can say, "He read the book quickly," but not, "He read quickly the book," in English. Yeah? AUDIENCE: In languages that have flexible word order-- NORVIN RICHARDS: Yes. [LAUGHS] Let me sit down. AUDIENCE: How do these sentences-- the sentence trees work out if you can rearrange? NORVIN RICHARDS: So you're pointing to the fact that there are many languages out there in which word order is a lot more flexible than it is in English. And you know what? Even in English, it sometimes gets more complicated than this. 
So let me show you the ways in which it gets more complicated in English. And when we get to that, we will have a tool that we can use to describe what's going on in the languages you have in mind. So let me just get you to hold off on your question for now. Are there any other questions about this so far? OK. All right. In fact, I think we might be about to do it right now. Yeah. So where are we? Trees are constructed by binary Merge. Merge is constrained by selection via this thing I just called the Projection Principle, which says, if a head selects for something, then that should be the first thing you merge. Before, some of you were tormenting me with questions about heads that seem to be selecting more than one thing. And you can kind of interpret the Projection Principle in a way that's sympathetic to that. You get to say, yeah, if you have a head that selects for more than one thing, "give the children books" or "put the book on the table," it has two needs. And you had better satisfy those needs as fast as you can, modulo binary merge. So give it one of its arguments. And then it still has another argument it needs, so you better merge that one next. The tools that we have right now make that a possible answer to those kinds of questions. Questions still floating around about whether those are the right trees, but those are the trees we expect right now. Yep. OK. All right. Yeah? Yes? AUDIENCE: So for a verb like "give," and you are trying to make a construction like "give the children books," would it make sense to instead say that "books" as an argument is an argument for what you get after you merge "gives" with that first argument? It's kind of like-- NORVIN RICHARDS: Yeah, I see what you mean. Sorry. Go ahead. AUDIENCE: Kind of like cutting in lambda calculus. NORVIN RICHARDS: Yes. Yes, it is like that. That's a nice way to talk about it. So I've been talking as though-- let's see if I can say it coherently. 
I've been talking as though when we look up "give," "give" is going to say, well, I need two noun phrases. Or at least there is a version of "give" that takes two noun phrases. You're asking me, wait, couldn't-- first of all, don't we want to say something more complicated than that? And we saw this with "put" as well. It doesn't just want two noun phrases. It wants two noun phrases in a particular order. So you want to "give the children books." You don't want to "give the books children," meaning give children books. So the first noun phrase in this list is the noun phrase that gets the books, gets the second noun phrase. It's not the other way around. And in "put the book on the table," "put" maybe selects for a noun phrase and a prepositional phrase, but we need to say something more structured than that because it has to be the noun phrase first and then the prepositional phrase. It can't be the other way around. And you're wondering, can we say something like, "give" is a function that takes a noun phrase and gets converted into another function that also takes a noun phrase, something like that. That's sort of the equivalent of currying, or Schönfinkelization, or whatever. If you ever study semantics in my department, I have several German semantics colleagues, colleagues whose native language is German and they are semanticists. And there were two people who invented the kind of functions that he's talking about, where you have a function and it changes the function into another function that takes another kind of thing. One of them had the surname Curry, and the other had the surname Schönfinkel. This guy was, I think, British. And this person was German. So most English speakers refer to this as currying. But if you are German, you refer to it as Schönfinkelization. [LAUGHTER] And apparently Schönfinkel discovered it before Curry, so the Germans are annoyed by the fact that we call it currying.
I'm not German, so I'm not annoyed by you calling it currying. Indeed, we clearly want something more structured than what I'm giving us. And I'm just going to leave your point right there. We need to do something more sophisticated than what I'm doing here. I keep hinting at the fact that we're going to want to investigate these trees, the trees that I'm showing you, with other techniques to find out whether these are the right constituency structures. And when we do that, I think we'll have another reason to revise what I'm telling you now. So there are several reasons to be uneasy about what I'm telling you now. I've shown you a theory which works fine as long as heads only have one argument. As soon as they have two arguments, these kinds of unsettling questions arise. And we're going to come back to them. OK? Good question. Other questions about this? This is great. I feel like I'm in a graduate seminar. OK? All right. So this is where we are then. Trees are constructed by binary Merge. We have the Projection Principle, which says, if there's a selection relation, you need to do that Merge relation first. And that's going to get us contrasts like "Mary wrote the novel on a typewriter" versus "Mary wrote on a typewriter the novel," where the second one is no good because "write" has the option of being a transitive verb. And when it's a transitive verb, it selects for a direct object. And that's the novel. "On a typewriter" is like "with a feather." You can do anything on a typewriter. You can combine with anything. So it's an adjunct. And so there's no hurry to merge it. So we've worked our way to the conclusion that "write," when it's transitive, absolutely, absolutely has to select an object. And the object has to be the first thing you merge together with "write." So it has to be right after "write" in English. That's why you can say, "Mary wrote the novel on a typewriter." You cannot say "Mary wrote on a typewriter the novel."
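The curried treatment of "give" that came up in the question can be sketched like this (a toy, with strings standing in for noun phrases): instead of taking two noun phrases at once, "give" takes one and returns a function that takes the other.

```python
# Curried 'give': a function of one NP that returns a function of another NP.
# Applying the arguments one at a time mirrors merging them one at a time.
def give(recipient):
    def with_theme(theme):
        return ("give", recipient, theme)   # the saturated verb phrase
    return with_theme

# "give the children books": first Merge satisfies one need,
# second Merge satisfies the remaining one.
vp = give("the children")("books")
print(vp)  # ('give', 'the children', 'books')
```

The intermediate value `give("the children")` is itself a function still waiting for one more argument, which is exactly the "function that gets converted into another function" picture from the discussion.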
The novel, the thing that's getting written, it absolutely has to be right next to "write." Darn! That last example looks problematic, right? If I ask you, "What did Mary write on a typewriter," we've just worked our way to the conclusion that "write," when it takes a direct object, the direct object has to be the first thing you Merge with "write." It has to be right next to "write." There can't be anything intervening between "write" and its complement, the thing that's getting written. What's the thing that's getting written in that example? What did Mary write on the typewriter? Joseph? AUDIENCE: "What." NORVIN RICHARDS: "What." Yeah. This sounds like an Abbott and Costello routine. "What's getting written." Yes. [LAUGHTER] But that's not the sister of "write." That's nowhere near "write." It's way the heck at the beginning of the sentence. What the heck is going on? Are you all emotionally distraught? That's what I'm trying to do. Yeah? OK, good. So two possible responses-- oh, well, we didn't have time to get too attached to the Projection Principle anyway. It was just a couple of slides ago. So much for that idea. That's one possibility. So apparently heads can select for things, and they don't have to be anywhere near it. So, well. But, look. Here's another possibility. We could say, no, the Projection Principle is right-- is correct. I should have picked a different verb. The Projection Principle is correct. It is true that a verb like "write" that selects for an object absolutely has to merge with the object first. And so, really what you're doing when you start to ask a question like, "What did Mary write on a typewriter?" is you are merging "write" with "what." "What" is starting off as the sister of "write." So you start off with something more like "Mary wrote what on a typewriter." And then, apparently, there is some operation that takes "what" and puts it somewhere else, at the beginning of the sentence.
So-- this was a while ago-- Kateryna asked me, what about languages where word order is freer? And I said, things are about to get more complicated. Things have now gotten more complicated. So, yeah, there's selection. Things select for things. And when x selects for y, then y needs to merge with x right away. And there are adjuncts that can merge wherever they want. And there are things like this-- movement operations. I can spend some time trying to convince you that this is true. But right now let me just assert that it's true-- cases where, indeed, things do merge just the way we've convinced ourselves that they do. They merge where they should. They merge as sisters of verbs. But then something else happens, and they end up somewhere else. So there's this operation of movement where you take the word "what," and you move it to the beginning of the sentence in order to form questions like this one. Notice that in English, and indeed in many languages-- most languages. Not all languages, but in many languages, if I'm amazed by something you've said-- if you say, "Mary wrote a proof of Fermat's Last Theorem on a typewriter," I can say, "Mary wrote what on the typewriter?" So it actually is possible to say these words in this order. I just have to be in a particular emotional state. I have to be astonished. This is actually cross-linguistically extremely common for-- they're called echo questions, where, if I can't believe what you just said or if I couldn't quite hear what you just said-- if you're telling me on the T, and you say, "Mary wrote [GARGLING NOISE] on a typewriter," and there's lots of noise in the background, and I missed the crucial thing, I can say, "Mary wrote what on a typewriter?" These are called echo questions, where I'm repeating most of what you just said and replacing part of it with a wh-word. 
And it's extremely common cross-linguistically for it to be possible to leave these question words like "what" in the positions where you would expect them to be in those kinds of questions, in the positions where they would be selected. It's not universal, actually. There are languages that don't let you do that. If you're working on a language, it could be interesting to do some finding out about how it forms these kinds of questions. OK, so lots of questions about what's going on in this last example. There are several reasons to think that "what" is ending up in CP. Remember CP? It was the thing that had heads like "whether" and "that." And we said verbs like "know" and "think"-- and "wonder" I think was the example I used before-- they select for clauses. And in particular, they select for properties of the head of the clause. So "think" can select for a clause that's headed by "that." "That" is the complementizer. That's our word for it in syntax. "That" is the complementizer that introduces that embedded clause. "I don't know whether he ate the ants." "Whether" is a complementizer that can be selected by "know." You can't "think whether he ate the ants." And actually there are verbs like "know" that can have either "whether" or "that." With a verb like "know" that can have "whether," it can also have "what" immediately following it. So you can say, "I don't know what he ate." And notice that if it has "what," it cannot have either "that" or "whether." So you can say, "I know that he ate the ants." You can say, "I know whether he ate the ants." And you can say, "I know what he ate." But you cannot say, "I know what that he ate," or "what whether he ate." So "that" and "whether" and "what"-- they all seem to be sort of in the same slot there. They're in the CP somehow. We'll see they're not quite in the same slot. But the idea that "what" ends up somewhere in CP, that seems to be part of what's going on here. 
And then, as I said, I promised you there will be various reasons to think that "what" starts off as the sister of "write" and moves into CP. But one of them is the one I just went through. Under certain circumstances, communicative circumstances, I can say, "Mary wrote what on a typewriter?" like if I couldn't quite hear you or if I don't believe what you just said. So here's a tree that we're going to draw for "What will Mary write?" It's going to have "what" starting off as the sister of "write," right where the Projection Principle wants it to be. "Write" selects for a direct object, and it gets one. That's the sister of "write." And then it moves to the edge of the CP. And so that's the kind of tree that we're going to draw. This phenomenon-- there's a name for it. It's called wh-movement. It's called wh-movement because these question words-- these words like "what" and "where" and "why" and "when"-- tend to start with the letters W and H in English. "Who" and "what" and "where" and "when" and "why"-- and "how," which doesn't start with the letters W and H, but it does contain the letters W and H, just not in the right order. So these words are called wh-words. And they're called that. It's just a technical term in syntax. We call these words wh-words, regardless of the language we're working in. It's a sort of embarrassingly Anglocentric term. So they're called wh-words because in English they start with W and H. The language that you're working on, they surely will not start with W and H. But we're still going to want to call them wh-words. And this phenomenon is called wh-movement. And it's a cross-linguistically very common phenomenon. There's an English example of it there again. "What did you put on the table?" Here's the Tagalog for "What did you put on the table?" You can see the Tagalog word for "what" doesn't begin with W and H. Tagalog does have W and H, but words can't start with W and H. And there's the Finnish for "where did I put my clothes." 
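As a rough sketch (word lists standing in for trees, and the set of wh-words listed by hand), English-style wh-movement can be modeled as building the echo-question order first, with "what" in its selected position, and then fronting it:

```python
WH_WORDS = {"what", "who", "where", "when", "why", "how"}

def wh_move(words):
    """Front the first wh-word, leaving everything else in place."""
    for i, w in enumerate(words):
        if w in WH_WORDS:
            return [w] + words[:i] + words[i + 1:]
    return words  # no wh-word: nothing moves

# Echo-question order, with 'what' as the sister of 'wrote':
base = ["Mary", "wrote", "what", "on", "a", "typewriter"]
print(wh_move(base))
# ['what', 'Mary', 'wrote', 'on', 'a', 'typewriter']
```

This sketch ignores the auxiliary inversion that also happens in the real English question ("What did Mary write on a typewriter?"); the point is only that the wh-word starts where selection puts it and ends up at the left edge.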
I should really learn how to say "what did you put on the table" in Finnish. I don't know why that's my Finnish example. So various unrelated languages that have wh-movement. It's a cross-linguistically very common phenomenon. It's not universal. Some of you, in fact, are native speakers of languages in which wh-words don't move to the left edge of the clause. They just stay where they are. So in Chinese, for example, if you want to say "What did Zhangsan buy," you don't say, "What did Zhangsan buy," or at least you don't have to. The standard thing to say is, literally, "Zhangsan bought what," which, as I said, you can say in English, but only under special circumstances. In Chinese, that's just the natural way to ask that question, is to leave [INAUDIBLE] right where it would be as the object of the verb. And here are a couple more examples-- Bafut, which is a language of Cameroon, and Hopi, which is a language of the American Southwest. So there's cross-linguistic variation in how you form wh-questions. There are languages, like English and Finnish and Tagalog and many other languages, where you take your wh-words and you move them to the left edge of CP. And then there are languages, like Chinese and Bafut and Hopi, where the wh-words just don't move. They just stay where they would normally go, wherever they would normally go in the sentence. There's no movement going on. So in Chinese, if you wanted to say, "Zhangsan bought a book," instead of "what," you would just put the word for "book." Chinese normal word order is subject, verb, object, kind of like English. Same deal for Bafut. Hopi has the verb at the end of the sentence. So these wh-words, these words that mean things like "what" and "who," they're just going where the object would normally go in these wh-in-situ languages. And those are almost the only two options. Take your wh-word and move it to the left edge of CP, or leave it where it would normally go. You can imagine others, right? 
So, for example, it's not clear that there are any languages in which you take the wh-word and you move it to the end of the clause. You could imagine a language like that, but it's not clear that there are any languages in which you say, "You put on the table what?" I think my next slides make that observation more complicated. And I think I want to skip them, actually. So to make this fact more interesting, let me show you one other kind of cross-linguistic variation. I'm not going to do that. So let me do that fast. We'll come back to this because we're running low on time. And I want to show you one other quick thing. In the remaining five minutes, I'm going to set up another thing. And then we'll come back, and we'll start here next time. I've been concentrating on wh-questions where you're only asking about one thing. In many languages-- not all languages, but in many languages, it's possible to ask about more than one thing at the same time. So you can ask questions like, "What did you give to whom," where, if I ask you a question like that, what I want is a list of people and things such that person x got thing x and person y got thing y. That's what I'm asking you if I ask you that kind of question. Again, there's some variation. So English, as I said, is a language with wh-movement. But it's specifically a language with wh-movement of one WH. So if you want to ask a multiple wh-question, you move one of your wh-words, but you leave the other ones where they are. So you say, "What did you give to whom?" or "What did you give to whom when, where, why, how?" So all your other wh-words just stay where they would normally go. There are other languages in which all of the wh-words move in this big, moving herd to the beginning of the clause. So in Bulgarian, for example, you don't ask literally, "What did you give to whom?" You ask literally, "What to whom did he give?" So "what" and "to whom" are both at the beginning of the sentence. 
There are a bunch of languages like this. Mohawk is another one. Mohawk's a language spoken not too far from here, in parts of America and Canada. These are both languages in which the wh-words all have to move in a multiple wh-question. And again, there's some variation between languages. There are languages that move one wh-word, like English. There are languages that move no wh-words, like Chinese. There are languages that move all the wh-words, like Bulgarian or Mohawk. And again, it's not too hard to imagine other options. So for example, you could imagine a language that only moves two wh-words. That would be a language where you would say, "Who what gave to whom?" to mean who gave what to whom. There aren't any languages like that. The only kinds of languages there are are the English kind-- so first, the Mandarin kind, the Chinese kind, where you don't move any of them; the English kind, where you move one, where you say, "Who gave what to whom?"; and the Bulgarian/Mohawk kind, where you move them all. You say, "Who what to whom gave?" There aren't any languages like this that just move two or move up to two. You can imagine a language like that, but there aren't any. And let me just leave you with this last point, and we'll probably take it up here again next time. Logical problem of language acquisition-- here's a game we can play. Here's a function. It's a function that if you give it 1, the answer is 1. If you give it 2, the answer is 2. If you give it 3, the answer is 3. If you give it 4, the answer is 4. What do you think the answer is if you give it 5? AUDIENCE: 5. NORVIN RICHARDS: 5? Can anybody think of any other options? Ha! You've fallen into my trap. The function that I actually had in mind was this one. [LAUGHTER] Such that the answer is 29. In fact, the answer could be anything at all. So this is the kind of question you sometimes get asked on standardized tests. And it's always unfair because the answer could be anything at all.
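The three attested options (Chinese-style in-situ, English-style move-one, Bulgarian-style move-all) can be captured with a single parameter; the point of the typology is that no language needs a "move exactly two" setting. A toy sketch (treating "to-whom" as a single token and listing the wh-words by hand, both my own simplifications):

```python
WH_WORDS = {"what", "who", "whom", "to-whom", "when", "where"}

def form_question(words, setting):
    """setting: 'none' (Chinese), 'one' (English), or 'all' (Bulgarian)."""
    if setting == "none":
        return list(words)                  # wh-words stay in situ
    whs = [w for w in words if w in WH_WORDS]
    rest = [w for w in words if w not in WH_WORDS]
    if setting == "one":
        # front only the first wh-word; the others stay put
        remainder = list(words)
        remainder.remove(whs[0])
        return [whs[0]] + remainder
    if setting == "all":
        return whs + rest                   # the whole herd moves
    raise ValueError("unattested setting: " + repr(setting))

base = ["he", "gave", "what", "to-whom"]
print(form_question(base, "one"))   # ['what', 'he', 'gave', 'to-whom']
print(form_question(base, "all"))   # ['what', 'to-whom', 'he', 'gave']
```

A "move exactly two" setting would be easy to add to this function, which is precisely the puzzle: the space of imaginable grammars is much bigger than the space of attested ones.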
That first thing which I carefully devised to be 0 as long as the thing I was giving the function was 1, 2, 3, or 4, you could multiply that by anything. And so 5 could be any number. Life is like that a lot. There are lots of places where you go through life, and you get a certain number of finite observations of how life works. And then you have to make a decision about what the basic rule is. So you're doing science, let's say. And you go out there. You see many white swans. At some point, you have to decide, have I seen enough white swans that I get to decide that swans are white? Have I seen enough black crows that I get to say that crows are black? You never know. Maybe the next one is going to be purple or green. It's just hard to know. And this function is sort of a dramatic example of that. Imagine, just in the remaining seconds, that you are a Bulgarian child. Were any of you Bulgarian children? OK, so just imagine counterfactually that you were a Bulgarian child. Here you are, growing up, hearing your parents ask wh-questions. And there's got to be some number, n, that is the largest number of wh-words you ever heard in a question. Maybe you heard your parents say two wh-words in a question. Maybe you heard them say three. It's kind of unlikely that you ever heard them say four or five. I mean, it depends, I guess, on what your parents' line of work was. But there's some number that's the largest number. And it's probably different from Bulgarian to Bulgarian. So what we might expect-- so Bulgarians are sort of in the position you guys were in with that function. If you just hear this, let's say, "What to whom did he give?" you've got to ask yourself, is the rule move all the wh-phrases? Is it move two of the wh-phrases? Is it move maximally three, maximally four, whatever? There are literally infinitely many rules that are compatible with this data point. 
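One function with exactly this behavior (my reconstruction of the one on the slide, with the multiplier on the product term set to 1) adds a term that is 0 for inputs 1 through 4:

```python
def f(x):
    # The product term is 0 whenever x is 1, 2, 3, or 4,
    # so those inputs just return themselves; at x = 5 it contributes 4*3*2*1 = 24.
    return x + (x - 1) * (x - 2) * (x - 3) * (x - 4)

print([f(n) for n in range(1, 6)])  # [1, 2, 3, 4, 29]
```

Multiplying the product term by any other constant still matches the first four data points but changes the answer at 5, which is why "the answer could be anything at all."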
And so you might have expected that Bulgarians, depending on how weird their parents were, would grow up with slightly different grammars, that when you took Bulgarians and you put them into a hyperbaric chamber and you gradually added wh-words to the questions, that they would diverge as the questions got more and more complicated, that there would be the two-wh Bulgarians and the three-wh Bulgarians and the four and the five, that you would find that out. That's not what we find out. The experiment has been done. What you find out is the Bulgarians are all the same. They all move all their wh-phrases. What we think is going on really is that being a human being means having the kind of mind that can put language together in some ways but not others. There was never any danger of the Bulgarian children entertaining the possibility that the grammar was move two wh-phrases. They knew that there aren't languages like that. Human languages don't work that way. This is what we mean when we talk about universal grammar. We'll review this next time because we're out of time. But we'll talk more about this next time.
MIT_24900_Introduction_to_Linguistics_Spring_2022, Lecture_9_Phonology_Part_2.txt

[SQUEAKING] [RUSTLING] [CLICKING] NORVIN RICHARDS: OK. So last time, we ended on one of these cliffhangers. And so I rushed you through a bunch of Japanese data. And then we got to this new bit of linguistic technology-- [BUZZING] that's going to be annoying-- this new bit of linguistic technology and went through it pretty fast. So what I want to do is do the run up to that again a little more carefully and show you what it is that we're talking about. What we were talking about at the end was the position of accent in Japanese compounds. So here are a bunch of compound nouns in Japanese. And the observation was that when you have a compound consisting of two parts, the result always has one accent. It has accent on one particular syllable. The accented syllable doesn't have to be the accented syllable in either of the original parts of the compound. So "raw egg," which is the first example in this slide-- "raw" and "egg"-- they have accents where they have accents. "Raw" has an accent on the first syllable. "Egg" has an accent on the second syllable. But "raw egg" has an accent on the syllable which is not normally accented in "egg." What we said was there is a set of rules for where accent goes in these compounds. And the rules go something like this. There's only going to be one accent. It needs to be next to the boundary between the words. And if possible, it should be on the second word. That is, it should be after the boundary between the words. But you don't do that if that would result in the accent being on the last syllable. So the general pattern is the one you're getting in "French fry" up there-- "furaido POteto"-- where you get accent at the beginning of the second half of the compound, "potato." Yeah. That's what you get in most of the examples on the preceding slide. If we go back to the preceding slide, you can see that in "raw egg," right?
It's at the beginning of "egg." And you can see it in "field mouse." It's at the beginning of "mouse." And you're also getting it in "French fry." But there's this coherent class of exceptions. If the second half of the compound is only one syllable long, then you don't put accent on the second half. You put it at the end of the first half. So Kagawa prefecture-- "kagawa" is the name of a place. And it's got accent on its first syllable. But the expression for "Kagawa prefecture" has accent at almost at the boundary. It's right before the boundary between them, not on the second half of the compound, unusually, because the second half of the compound is only one syllable long. So we get "kagaWAken." Yeah. I'm telling you all this not just to educate you in what a cool and complicated language Japanese is, but to show you that there are cases of cool and complicated phenomena that can be fruitfully thought of as a series of things that you're trying to do which are in conflict with each other sometimes. So Japanese wants the accent to be on the second word and next to the boundary between the words. But it also wants it not to be final. And if it can't have all of those things at once, like in "Kagawa prefecture," it has to decide, do I care more about it not being final, or do I care more about it being on the second word? And Japanese cares more about it not being final, right? So it retracts it from the second word to the end of the first word. Does anybody still read-- Isaac Asimov had these stories, the robot stories. He had a book, I, Robot. It was a series of stories. Some of you have heard of them, at least? Yeah. So they feature robots that have these laws governing how they work. And the laws act like these kinds of laws. So there's the first law. And then there's the second law. And the rule for the second law is you must obey the second law unless it comes in conflict with the first law. Then you disobey the second law and do the first law, right? 
And there's a third law, which you should do unless it conflicts with the first or the second law. So this is like that, right? So Japanese wants the accent to be on the second word, oh, unless that's going to make the accent final. Then you don't do that. Does that make sense? That's a way-- or to put that another way, don't put the accent on the last syllable is more important than put the accent on the second word. That's the picture that we end up with. So you get "kagaWAken" with the accent at the end of "kagawa" and not "kagawaKEN." You don't put accent at the end. There's a standard way of representing these kinds of facts, these bodies of facts, which involves this table that I've got here at the bottom of the slide. This was an approach to phonology that was invented by people who-- one of them in particular, Alan Prince, is a brilliant, brilliant man. But he enjoys intimidating people. And so instead of calling this a table, he decided to call it a tableau. So he gives it a French name in order to be as intimidating as possible. So I'll adopt that because that's what phonologists, linguists standardly call these objects. This is a tableau. And what the tableau represents is this interplay between the different laws that are governing where the accent goes. The idea is the most important laws are toward the left. And the least important laws are toward the right. And what you do is imagine all of the places where you could put the accent. And you ask yourself, how does that placement of the accent square with the various different laws? The little asterisks mark places where one of the candidates, one of the options you're considering, violates one of the laws. So for example, the first one, which kept both of the accents in the parts of the compound, just kept them both, violates the most important law, which is you may only have one accent. So it gets a star right there. 
And that eliminates that one from competition because it's violated the most important law, the one that's furthest on the left. And then what I've done is consider examples where the accent is-- it's a four-syllable word, so we're considering putting the single accent on any of the four syllables. The first two, which have the accent on the first or the second syllable, violate the second law, which says put the accent near the boundary between the two halves of the compound. So those are ones in which the accent is not on a syllable that's next to the boundary. And then the next to last one, which is the one that wins-- that's the one that obeys all of the conditions except for the least important one, which is it's best to put your accent on the last word, right? You notice, there is no perfect candidate. There isn't anything that doesn't have any stars. What the tableau does is allow us to think in a formal way about how you satisfy conditions that are not compatible with each other. And I just realized I get to take this off. I keep forgetting that. Yeah. So that's what the tableau is meant to represent. Are there are questions about the tableau? There's a standard way to represent the fact that the winner is that next to last one there, where the stress is on the next to last syllable, the accent is on the next to last syllable, which is to put a pointing finger next to it like that. This is the one that wins. Optimality theory-- this approach to phonology is called optimality theory. And I don't know. I guess they just liked the font that had the pointing finger in it. So they decided to use it. OK? This is an approach. Yeah? STUDENT: Question about-- there is an accent on-- how do we know there's an accent on "ken" if it just never shows up because every time [INAUDIBLE]?? NORVIN RICHARDS: Oh, you mean why am I putting an accent on "ken" over there? STUDENT: Because every time "ken" shows up, one syllable is at the end. 
So there's no-- so it's always the final accent. So how do we know there's an accent? NORVIN RICHARDS: Yeah, that's a very good point. It is possible to pronounce "ken" by itself, not as part of a compound. So if you want to talk about "the prefecture," it's a word. It means "prefecture," which is like a state. It's an administrative subdivision of Japan. And it's possible to say that word by itself. STUDENT: So unbounded. NORVIN RICHARDS: Sorry? STUDENT: So it's unbounded? NORVIN RICHARDS: It's? STUDENT: It's unbounded? NORVIN RICHARDS: "ken" is unbounded? I don't think I know what you mean. Oh, it's not a bound morpheme. That's right. It can be a free morpheme. Yes, yes. That's right. So you can say in Japanese, "The prefecture is large" or whatever. And then you can show that it's accented, yeah. Good question. There are other questions people have about this? OK. So this is just meant to illustrate the workings of optimality theory. Why am I showing you this one? I'm not sure. OK. Oh, I see. This is a similar tableau for "French fry." So this has the same constraints, these things that I put in all caps up at the top. In this case, you do end up with accent at the beginning of potato on the second word. That actually is a perfect candidate. So this is an example in which none of the laws are violated by the best candidate. The laws aren't in conflict with each other. So you can put accent at the beginning of the last word, "potato." And it won't be final. Yeah, yeah? STUDENT: So for example, if this perfect candidate didn't exist, and you were having to choose between the first and the third-- NORVIN RICHARDS: Yeah. STUDENT: --you would want to choose the third because it violates a rule that's less important. NORVIN RICHARDS: Yeah, that's absolutely right. So if, for some reason, that fourth candidate were impossible-- say, there was another very high ranking constraint-- don't put accent on "potato," then yes. The third would win. That's right.
And actually, we can see that by looking at the behavior of "kagaWAken." So if we eliminate the option of putting accent on the last word, that's where the accent goes. It goes at the end of the first word. So it's the best you can do, right? Still near the boundary, still only one accent, just didn't manage to accent the last word, which is what you'd like to do. But it's not high on your list of constraints. Yep. Does that make sense? So this is an approach to thinking about phonology. Optimality theory is the name for it. It says-- in a way, it's a comforting theory-- It says, "Do the best you can." You have all of these things. Probably, all of you have encountered things like this in other domains at MIT if you've ever dealt with engineering problems. There are probably cases where you have various things that you want to achieve. And you can't optimize all of them. You have to decide which ones are the most important. And the idea here is that language is like that. There are cases where you have to decide among your various desiderata which ones you care about the most. The hope in optimality theory is that we'll end up with a list of these constraints, these laws, these rules that will be universal. And the differences between languages are just a matter of deciding what order the different constraints are in. So there could be languages in which the one accent rule is very low ranked and the first candidate maybe wins, right? You could imagine a language like that. Tokyo Japanese doesn't work that way. So differences between languages are all a matter of reranking the constraints. That's the mission statement of optimality theory, is to try to figure out how the world could work that way. Yeah? OK. Optimality theory was partly inspired by the discovery of these kinds of conspiracies that we started talking about when I was showing you some stuff about Yawelmani. And now we've seen another example from Japanese. 
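The tableau computation described above is just a lexicographic comparison: each candidate gets a tuple of violation counts, ordered from highest-ranked constraint to lowest, and the candidate with the smallest tuple wins. Here is a sketch for the four single-accent candidates of "kagawa" + "ken" (the constraint names and the exact violation bookkeeping are my own simplification of the tableau):

```python
# 'kagawa' + 'ken': 4 syllables total, word boundary after syllable 3.
LENGTH, BOUNDARY = 4, 3

def violations(accent_pos):
    """Violation tuple for one candidate, highest-ranked constraint first."""
    return (
        0,                                                   # one accent only: all candidates here obey it
        0 if accent_pos in (BOUNDARY, BOUNDARY + 1) else 1,  # accent next to the boundary
        1 if accent_pos == LENGTH else 0,                    # accent not on the last syllable
        0 if accent_pos > BOUNDARY else 1,                   # accent on the second word
    )

# Python compares tuples lexicographically, which is exactly the
# ranked-constraint logic of an OT tableau: a violation of a higher
# constraint outweighs any number of lower-constraint violations.
winner = min(range(1, LENGTH + 1), key=violations)
print(winner)  # 3, i.e. 'kagaWAken': accent on the syllable just before the boundary
```

Candidate 4 ("kagawaKEN") satisfies the accent-on-second-word constraint but loses on the higher-ranked non-finality constraint, so candidate 3 wins even though it is not perfect; that is the pointing-finger row of the tableau.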
For Yawelmani, the conspiracy was, I don't want three consonants in a row. And I'll do various things to achieve that. We said you could state rules governing the various things. And maybe in a sense, you should. But you-- the idea is, let's try to keep sight of the fact that all of these rules are trying to achieve the same thing. We want to also capture that kind of a fact. And there's something similar going on here in Japanese. One of the rules, the highest ranking rule, was "Don't have more than one accent in a word." And what we saw last time was that there are various things that Japanese does to achieve this depending on the situation. So there are words like "gurai," which I think I translated last time as even, which are accented. And if you attach them to another word that's also accented, you delete the accent on the first word, in the word you're attaching "gurai" to. And there are other words, like "made," which means "to," or "up to," "as far as," which are accented but only if they are attached to unaccented words. And then we've done all this talking about compounds. These are all different kinds of examples in which different things happen. There are different strategies, right? So for "gurai," it's "Delete the accent on the word you've attached 'gurai' to." For "made," it's "Delete the accent on 'made.'" For the compounds, we've now talked about this computation that you go through to try to figure out where the single accent should go. But what all of these things are doing is trying to arrange for there to just be a single accent. And one of the virtues of optimality theory is that it allows us to make that part of our analysis. If we just stated these rules, we'd be losing sight of the fact that these rules all seem to be trying to achieve the same thing. That's one of the things people who work on optimality theory are trying to get. And then of course, the goal is to try to understand why we have the constraints that we have and not others.
That's another thing people who work on this try to do. Yes. I just said that and that. So remember, Yawelmani, which was "Avoid strings of three consonants either by inserting vowels or by deleting 'h' if you have an 'h'." So I've been saying we could just state these rules. But we'd be missing a generalization. Yawelmani is trying to avoid three consonants in a row. Japanese is trying to avoid two accents in a single word. Maybe it's obvious. We don't want to miss generalizations like that. We want to find them and understand them. So similar kind of thing that we could think about here-- we've talked now a couple of times about Polish. Polish is one of a number of languages in which, if you have certain kinds of consonants at the ends of words, consonants like "b," and "d," and "g," and "z," we convinced ourselves that they devoice. They become voiceless at the ends of words. That allows us to capture various kinds of facts about what happens to Polish words when they become plural-- namely, that consonants that sound voiceless when they're at the ends of words become voiced sometimes when they're no longer at the end of words, we decided. That's because they're underlyingly voiced. And they devoice when they're at the ends of words. So this is all familiar. You all remember Polish? It's not just Polish. There are lots of languages that do this. And so what I'm inviting you to do at the end of this slide is to imagine two kinds of stories about Polish-- one that would say, yeah, final consonants devoice in Polish, and another that would say, well, there are four rules, actually. There's a rule that says "b" devoices when it's final, and a rule that says "d" devoices when it's final, and a rule that says "g" devoices when it's-- yeah? Both of those would capture all of the Polish facts. One of them is clearly better. So we want to understand why we get the particular phenomena that we do in Polish.
Is it clear to everybody that it would be better to do things that way? And what we're doing here is similar. Why would it be better? What would be wrong-- why is it better to be able to say final consonants devoice than to say, well, we have these four rules. Yeah, Faith? STUDENT: I think you're addressing the more general phenomenon. It's very difficult to catalog every single letter and every vowel sound like that, because you eventually have a list that's very long. NORVIN RICHARDS: Yeah. So it might make it easier to make cross-linguistic comparisons. Yeah, that's a very good point. Yeah, yeah. Yeah? We also maybe-- I don't know if this is a good way to say it. Maybe it is. We want to understand why the sounds that become voiceless at the ends of words in Polish are "b," "d," "g," and "z," and not "b," "d," "a," and "o," right? If we're just making four rules for those four letters, then it's not clear. There's no reason why one of those rules is more natural than the other. But as I keep saying-- what's happening here in Polish is extremely common. So you get it in Turkish. You get it in Russian. You get it in a bunch of different languages-- German. And so we want to understand why it's common. And part of understanding why it's common is being able to say that that's a common rule, a standard rule that we see all over the place. And then we can get to work on understanding it. If we just have these four rules, then it's unclear why those four rules are so common, whereas "b," "d," "a," and "o" devoice at the ends of words is not at all common. I don't know of any languages that do that. Does that make sense? That's another sense in which making generalizations is good. This is becoming a philosophy of science question, yeah. It's like, why is science good? Why is it good to make generalizations? Because it helps us to understand the world better, for Faith's reason, as well as the reason I just offered. Yeah?
STUDENT: Is the vowel still a vowel when you devoice it? NORVIN RICHARDS: Yeah, yes, arguably. So there are languages that devoice vowels. Mokilese is one. Oh, Japanese is another. So Japanese has a general rule devoicing its high vowels when they are between voiceless consonants, I guess. So here's the Japanese word for "like." And it's pronounced "suki," "suki." So the oo isn't pronounced. You don't say "soo-kee" unless you're being very slow and careful. And it's still a vowel in the sense that your tongue and the rest of your vocal tract are still in the configuration that they would be in if your vocal cords were vibrating. But they aren't. So yeah, there arguably are voiceless vowels. Yes, Joseph? STUDENT: What if you were putting an affix after that, which is "i" is high, too? So what would happen? NORVIN RICHARDS: Oh. So in this particular example, there aren't any affixes you can add to "suki" to devoice the vowel. But let me think of an example. Yeah. So "tatsu"-- so this is not high. So it doesn't devoice. This is the verb "to stand." The more polite version of this-- no, no, no, no. That's not what I want to do. Let me try to think of an example. Let me try to think of an example that does what you want. But there aren't any examples that will work with "suki." Yeah. OK, so when I think of an example, at 3:00 in the morning, I'll give you a call. Leave your phone number and think of that. There's got to be a straightforward way to do this. Hmm, this is going to bother me now. Yeah. Other questions about this? Try to think of questions while I try to think of examples that do what Joseph wants. Because the past tense suffix in Japanese-- oh, oh, oh. [JAPANESE] oh, yeah, OK. Here's one. Adjectives in Japanese. So this is the word for new, "atarashii." And the negative of it, "atarashiku nai" ends-- so there's a "ku" that you add to make the negative form. And it ends up putting this vowel between voiceless sounds. And then yes, it devoices.
So it goes from "atarashii" to "atarashiku nai," "atarashiku nai," or the past tense, actually. If you want to say something was new, it's "atarashii" [JAPANESE]. So again, the "i" that's after the "sh" devoices because there's a voiceless sound after. Good. Phew! Now I won't call you at 2:00 in the morning. You have any questions about this? OK. OK. So let's go back now to the English plural suffix and review something that we have talked about. But in the light of all of the things that we've talked about since then, I hope it'll become a little clearer. So again, we're talking about the regular English plural suffix. Stop thinking about "oxen" and "fish" and "sheep." "Cats" and "dogs" and "bushes"-- we decided the underlying form of the plural suffix is a "z," right? And it undergoes various kinds of sound changes depending on what's before it. So you add "z" to all three of these underlyingly. And then there are generalizations like English doesn't like words that end in a "z" that's preceded by a voiceless sound. And words don't get to end in two strident consonants, where stridents are things like the "sh" and the "z" that's at the end of "bushes," yeah? So those are generalizations. But we said one of the attractive things about this way of thinking about the English plural is that these are generalizations that just seem to hold about English generally. English doesn't have any words that end in a "z" preceded by a voiceless sound. And we can rely on English phonology to demand that you do something if you add "z" to cat. And again, we've said this before. I just want to say it again a little more clearly. There are these conditions. And then there are also these procedures for repairing these violations. So what do you do about the fact that you add a "z" to cat? Well, you devoice the "z," right? If the condition just says you can't have a "z" preceded by something voiceless, there are various things you could have done to fix that problem. 
You could have voiced the "t," right? So the plural of "cat" could be "cads," right? That would be a good English word. Or you could add a vowel like you do in "bushes," right? The plural of "cat" could be "cat-es." There are various things that would fix that problem if that's the problem. But this is the thing that you do. Yeah, it's like when we were talking about Yawelmani. We said you can't have three consonants in a row. What do you do about it? Well, it depends on the situation, right? So if one of them is an "h," you get rid of the "h." If not, then you insert a vowel in this position and not that position. It's a similar thing. So this is all the way of saying it's worthwhile, maybe, to distinguish between the phonotactic considerations or the well-formedness considerations that tell you this language doesn't like three consonants in a row, or two accented syllables in a word, or words ending in a "z" preceded by a voiceless sound. There's that condition. And then there's how do you repair it? What kind of a repair do you use? So there's the well-formedness condition. And then there's the repair. So in this particular case, the procedures are devoice and insert schwa. There's a particularly striking bit of evidence for this way of dividing things up, this distinction that I'm drawing between well-formedness conditions on words, on the one hand, and types of possible repair on the other. We need to think about both of those things if we're going to describe what the language is doing. This is a classic test from 1957. I mentioned it in class before, but I wanted to talk about it a little more now that we've drawn this distinction. So Jean Berko, later Jean Berko Gleason, did this work in 1957 with children. She was demonstrating that they know these generalizations about plurals even with words they've never heard before. So she would show them pictures like this one. This is one of her pictures. She said "This is a wug."
And then she would show them two pictures of the same thing. And she would say, "Now there are two of them. There are two"-- and children were supposed to say "wugs" even though they'd never heard anybody say "wug" before, right? So Jean Berko Gleason, Jean Berko made up a bunch of words. She worked hard to make sure they were not real words. And then she drew many cute pictures and showed children singulars and plurals and said, OK, here's the singular. What's the plural? And the kids responded. And she wrote down their responses. And their responses, interestingly, were accurate in one way, and sometimes inaccurate in another, which is interesting to think about. So she was working with children who were five years old and children who were seven years old. What she discovered was that by the time they were five years old, they are, in a sense, perfect, in the sense that all of the words that they came up with were phonotactically well formed English words. So here are three of Berko's words-- "hif," "fas," and "muz," yeah? And I think everybody did "wug" perfectly. Everybody said the plural of "wug" is "wugs." But there were other cases where children would say, sometimes, things like the plural of "hif" is "hifes," or the plural of "fas" is "fas," or the plural of "muz" is "muz." These are all words where-- and then there were other kinds of mistakes that children didn't make. So nobody said that the plural "wug" was "wuks," or that the plural of "wook" is "wookz," right? Or that the plural of fas is "fas-z," yeah? What were children doing? Well, children know if you show them something that's underlyingly-- you give them "hif," which for some reason I'm spelling backwards-- you give them "hif" and you ask them the plural, they know that you're supposed to add a "z." And they know that English doesn't allow words that end in a "z" preceded by a voiceless sound. But some of these five and seven-year-olds were a little fuzzy on the possible repairs. 
None of them said, "Oh, yeah, It's hifz," right? But I just said this. If the only problem is, well, this word ends in a voiceless sound followed by a "z," in fact, what you're supposed to do is devoice the "z." That's what an adult would do. So the plural of "hif" is "hifs," yeah? But if you were instead to insert a schwa, well, that would fix it, too, right? So now the "z" is preceded by a voiced sound. The plural of "hif" is "hif-ez," yeah? And there were some kids who were going through a stage who were entertaining that possibility, that that was the correct repair for that. So what Berko found was that children universally knew the phonotactics, right? They knew that English doesn't allow words that end in a "z" preceded by a voiceless sound. But they made intriguing mistakes of various kinds-- not very often. Most of the kids just did everything perfectly. But every so often, she would get kids who made some mistakes. And the mistakes were of this class. That is, mistakes where they knew that there needed to be a repair but got the repair wrong. Similarly for "muz," this was a kid who-- the singular is "muz," and the kid knows that you need to add a "z." And the kid knows that you can't have two stridents at the end of a word. The correct answer, the adult's answer, is to insert a schwa so the plural of "muz" will be "muzes." But deleting the "z," one of the "z"s-- hard to know which one-- also fixes that problem. And so that was a kid who thought, yeah, I'll delete a "z," yeah? Yeah, Joseph? STUDENT: Is the deletion of a consonant viewed as [INAUDIBLE]? NORVIN RICHARDS: As something that English does? Yeah, no. So English-- there aren't, I don't think, any English nouns that form their plural by adding a "z" and deleting something before the "z." I don't believe that ever happens. This is me slowing down while I try to think of English nouns, of which I know several, but I can't think of any that do that. Yeah. Yeah. OK. Yeah, so is this clear? 
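One way to make that condition/repair split concrete is a sketch like the following. The segment classes and the "@" for schwa are simplifications I'm assuming for illustration; the point is just that the well-formedness conditions and the repairs are separate pieces of knowledge, so a child can know the former while picking a non-adult repair.

```python
# English-like plural: underlyingly add /z/, then repair any violation
# of the well-formedness conditions. Words are broad phonemic strings;
# "@" stands in for schwa. The segment classes below are a tiny
# illustrative fragment, not a full inventory.
STRIDENT = {"s", "z", "sh", "ch", "zh", "j"}
VOICELESS = {"p", "t", "k", "f", "s", "sh", "ch", "th"}

def plural(noun, final):
    """final is the noun's last sound, passed in separately so that
    digraphs like 'sh' don't need parsing."""
    if final in STRIDENT:      # condition: no two final stridents
        return noun + "@z"     # adult repair: insert schwa
    if final in VOICELESS:     # condition: no /z/ after voiceless
        return noun + "s"      # adult repair: devoice the /z/
    return noun + "z"          # no condition violated

print(plural("kat", "t"))      # kats
print(plural("dog", "g"))      # dogz
print(plural("bush", "sh"))    # bush@z
print(plural("hif", "f"))      # adult form: hifs
```

A Berko-style child who knows the conditions but not the adult repair might instead return "hif@z" for the last case: schwa insertion also satisfies "no /z/ after a voiceless sound," which is exactly the kind of mistake the kids made.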
Another reason to draw this distinction between, on the one hand, what I was calling phonotactics, these rules for which kinds of sounds can be where, and on the other hand, the repairs-- so the kids that Berko was working with knew the phonotactics, but were sometimes fuzzy on what the correct repairs were. Yeah. OK. So, moral-- yet again, need to distinguish between, on the one hand, the phonotactics, and on the other hand, the particular sound changes, the repairs that enforce the phonotactics. It's apparently possible to know one of those but not the other. OK. That ends the chapter. Now we're going to talk about something else. So are there any questions about that? Is that clear? OK. We've already done some of this, but I want to do more. We've been talking about how there are some kinds of sound changes that are natural because they apply to natural classes of sounds-- which can be almost circular. So I quickly introduced you to the word "strident" because the generalization about English plurals is that two stridents can't end a word in English. And that's why you introduce a schwa just in case you're in danger of having two stridents at the end of your word, yeah. The word "strident" has an acoustic definition. They are fricatives that have lots of high frequency noise. But it's of interest that there are phenomena like the allomorphy in English plurals that make reference to that acoustic fact. That's apparently something that a language can care about-- how much high frequency noise you have at once. It's also something that varies from language to language. There are languages that are perfectly happy to have long strings of stridents. English just isn't one of them. Yeah. So we've already done, as I said, some talking about this. So when I was first introducing you to consonants or introducing you to vowels, you'd already met consonants at some point in your life. But we were talking about them in a particular systematic way.
We talked about various ways of classifying them, saying there are some consonants that are fricatives and some consonants that are stops, or saying there are some consonants that are voiced and some consonants that are voiceless. So what I want to do today is to introduce you to some other kinds of classification. And I'll, as much as possible, try to show you places where languages care about a particular class of sounds. Again, this is all in the service of making it easier for us to understand why certain kinds of sound changes are natural and others are not. OK. So here's one that we've already seen. There are sounds that are nasal. This is a contrast that I believe I have introduced you to before, but let me introduce you to it again. There's a distinction-- I think I brought this up last time, actually. There's a distinction, when we talk about consonants, between what are called sonorants and what are called obstruents. So these are opposites of each other. I hope I have the word "obstruent" on a later slide. But just in case I don't, there are these two kinds of sounds-- obstruents and sonorants. And the distinction between them has to do with whether you are building up pressure in the oral cavity. So the sonorants are the nasals and the liquids and the glides, the consonants in which there's no pressure building up in the oral cavity, whereas the obstruents-- those are the stops and the fricatives, the oral stops and fricatives. Those are the sounds in which there is pressure building up in the oral cavity. For a stop, it's because, well, you've stopped the flow of air at the mouth. And so pressure is building up. And for a fricative, well, you haven't stopped the flow of air, but you're constricting it. That's how you make a fricative. And so pressure is building up in the oral cavity. 
And when I say pressure is building up in the oral cavity, what I mean is that phoneticians have put instruments inside people's mouths to measure air pressure to find out what happens. And they have discovered that this is true, yeah. So I try not to lie to you in this class any more than I can help it. But in this particular case, what I'm saying is absolutely true. OK, so there are sonorants and there are obstruents. We have already talked about the classification of sounds into voiced and voiceless sounds. So the red sounds on this slide are voiced. And as you can see, in this cross-cut-- some of the other classifications. So you can be an obstruent, like a stop or a fricative, and you can be either voiced or voiceless. So English stops can be voiced or voiceless. English fricatives can be voiced or voiceless. English sonorants, interestingly, can only be voiced. They cannot be voiceless. We have sonorants like "l," like "l," but we don't have a voiceless version of "l." We talked about this when we were first going through sounds. There are languages out there that have a voiceless version of "l," that have a "lh" sound. Tibetan has one, for example. So if you go to Lhasa, the capital of Tibet, you must learn to make voiceless "l"s. Welsh has a voiceless "l." They spell it with two "l"s. So if you look at Welsh words, you'll see pairs of "l"s. That's their way of spelling a voiceless "l." So there's a traditional Welsh name spelled like this and pronounced "Lhoyd," which has been borrowed into English twice as the name Lloyd and the name Floyd. These are both attempts to pronounce "Lhoyd," which we can't pronounce because we don't have that sound. Yeah. So English has voiced and voiceless obstruents, stops and fricatives. But it only has voiced sonorants. That's a very common state of affairs. If you're going to have a voicing distinction in only one of these places, it's in the obstruents.
Languages only having voiced sonorants-- that's very common, though not universal, as we've said. Quick interlude to use one of these features. Here are a bunch of verbs that you can make out of adjectives. So you can take an adjective like "black." And you can make a verb, "blacken," which can be either a transitive verb or an intransitive verb, I guess. So you can blacken the paper or blacken the fish, I guess. It means to make it black. Similarly, something can whiten or lessen or freshen or darken. You can offer to freshen a drink, say. It means to make it fresh, right? So there's this "-en" that you can add to adjectives to make verbs. Does that sound right? On the other hand, there are these other adjectives which can't have "-en," yeah? So the screen of a computer can darken, but it cannot "yellowen," yeah? A piece of paper that gets old-- what you say is that it yellows. It yellowed with age. You don't say that it "yellowened," yeah? Similarly, you can't "dimmen" or "grayen" or "clearen" or "brownen." All of those things-- if you want to make them into verbs, you just [SNAPS]. They're verbs, yeah? Yeah? Does that sound right? What's the difference between "black," "white," "less," "fresh," and "dark" on the one hand, and "yellow," "dim," "gray," "clear," and "brown" on the other? STUDENT: They end with voiceless stops in the first line? NORVIN RICHARDS: In the first line, "black," "white," and "dark" end with voiceless stops. And "less" and "fresh" end with voiceless fricatives, yeah? Yeah, yeah. Yeah, nice. And "yellow," "dim," "gray," "clear," and "brown" end in voiced sounds. What's the cover term for stops and fricatives? STUDENT: Obstruents. NORVIN RICHARDS: Obstruents. So "-en" can be added to words ending in obstruents. And it can't be added to adjectives ending in sonorants, yeah? Another place where it's useful to distinguish sonorants from obstruents-- Polish plurals-- I think this is the last time I will say anything about Polish plurals.
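The "blacken"/"yellowen" pattern just described can be sketched as a condition on the adjective's final sound. The segment classes here are a small fragment I'm assuming for illustration:

```python
# "-en" attaches to adjectives ending in obstruents (stops and
# fricatives), not to adjectives ending in sonorants or vowels.
OBSTRUENTS = {"p", "t", "k", "b", "d", "g",
              "f", "v", "s", "z", "sh", "ch", "th"}

def can_take_en(final_sound):
    """final_sound: the adjective's last sound, e.g. 'k' for 'black',
    'sh' for 'fresh', 'o' for 'yellow', 'n' for 'brown'."""
    return final_sound in OBSTRUENTS

print(can_take_en("k"))   # True  -- blacken, darken
print(can_take_en("sh"))  # True  -- freshen
print(can_take_en("o"))   # False -- *yellowen
print(can_take_en("n"))   # False -- *brownen
```

Note that the check is over a natural class, not a list of particular adjectives, which is the point of the sonorant/obstruent distinction here.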
I can't promise because it's a very useful example. But if you're getting tired of Polish plurals, I apologize. I think we're just about to the end. On the other hand, if you've become fond of Polish plurals, please gaze at this slide for a while. This is your last big chance. What we said about Polish plurals was that it's useful to say that there are words that end underlyingly in sounds like "g" and "b" and "z." And those final sounds, "g" and "b" and "z," devoice when they're at the ends of words. So underlying "wug," the word for "lye," becomes "wuk" if you don't add anything after it. And same with the word for "club" and the word for "rubble." And I have been incautiously saying things like, "Polish devoices its final consonants," which was a lie. I think I told you just now that I try not to lie to you any more than I have to. It doesn't devoice its final consonants, right? Because Polish has words like "dom," which is the word for house. And that ends in a voiced sound, right? "Dom" doesn't become "dom" [pronounced with voiceless "m"], yeah? Right? So it's not the case that final consonants become voiceless in Polish. What are the things that become voiceless in Polish? Obstruents, yeah? Stops and fricatives. Those are the things that become voiceless in Polish. Relatedly, Polish is like English in having voiced and voiceless obstruents, but not having voiced and voiceless sonorants. So Polish-- the "m," in a sense, doesn't have anywhere to go. Polish doesn't have that sound. Common situation. So it's really final obstruents that are becoming voiceless. OK, all right. So that's a reason to want to care about the distinction between sonorants and obstruents, not just because you want to understand why it's "whiten" and "darken" and "lessen," but not "yellowen." It seems to be a distinction that cares about sonorants and obstruents. Here's another one.
Polish, which as I've said now several times, and I think this might be the last time-- Polish is doing something that is very common in languages of the world. It's devoicing final obstruents. That's a thing that happens a lot. All right. So phonology so far-- we've been doing phonology for a couple of days now. It's useful to think of sounds as undergoing changes sometimes driven by sounds coming in contact with each other. And we can describe these changes in terms of rules. There's the formalism for rules that I've been showing you. So we'll say sound A changes into sound B. The slash introduces you to the environment. This is a description of what's happening in Polish, and Turkish, and German, and Russian, and lots of other languages. Final obstruents-- so if you are an obstruent, that is, something which is not a sonorant, you become something which is not voiced if you are at the end of a word. That environment description is the way to write "at the end of a word." That hash symbol is the symbol for a boundary between words. And the underscore is where the change takes place. So this rule says this change happens when the sound is before a word boundary, or to put it another way, when it is at the end of a word. Does that make sense? This is a way of writing rules. These rules are part of our knowledge of a language. So we've talked about this. As adults, anybody who is a native speaker of English knows how to make the plural of "wug," or how to make the plural of "Bach," even. So how to make plurals of words you've never heard before and even words that end in sounds that your language doesn't have, right?
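The rule just written, [-sonorant] becoming [-voice] before a word boundary, can be sketched as code. The segment tables below are a small fragment I'm assuming for illustration; the point is that one rule over the obstruent class covers "b," "d," "g," and "z" at once while leaving sonorants like "m" alone.

```python
# Final obstruent devoicing: [-sonorant] -> [-voice] / _#
# Only voiced obstruents have a voiceless counterpart to map to;
# sonorants are outside the rule's natural class entirely.
DEVOICE = {"b": "p", "d": "t", "g": "k", "z": "s", "v": "f"}
SONORANTS = {"m", "n", "l", "r", "w", "j"}

def final_devoicing(word):
    last = word[-1]
    if last in SONORANTS:   # [-sonorant] fails: the rule can't apply
        return word
    return word[:-1] + DEVOICE.get(last, last)

print(final_devoicing("klub"))  # klup -- final obstruent devoices
print(final_devoicing("dom"))   # dom  -- sonorant-final, unchanged
```

Stating the change over the class, rather than listing four segment-by-segment rules, is what captures the generalization discussed earlier.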
So if I insist on calling the composer "Bach"-- if I insist on the original Inupiaq pronunciation of the word "qaiaq," which is kayak, a single-person boat, which in Inupiaq begins and ends with a uvular stop-- so if I'm trying to impress you with my ability to make uvular stops and I insist on calling a kayak a "qaiaq," then my plural for it is going to be "qaiaqs," right? So I'm going to add an "s" sound. I'm going to devoice it because that uvular stop, although it's a sound we don't have in English-- it's voiceless. So we're going to devoice after it. So these rules are part of our knowledge of a language. And we can apply them to unfamiliar problems, unfamiliar words, and even unfamiliar sounds. And I've now tried to convince you the best way to state these rules, sometimes at least, is in terms of features. That is, we don't just want to list the sounds that undergo a change. We want to have these features which are ways of capturing sets of sounds. We think that there are natural classes of sounds that undergo particular changes. So there are sound changes that apply, say, to obstruents, to sonorants, or to vowels, to voiced sounds, or to voiceless sounds. We're not just going to list the sounds that undergo sound changes. We're going to state these in terms of features, OK? And then the other thing that I've introduced you to is the idea that at least sometimes, it's fruitful to model these kinds of sound changes as involving ranked constraints. These were the optimality theory tableaux-- that is, to imagine that languages have a list of things that they are trying to achieve. And it's a list that's ranked in order of priority. And you get to observe the interplay between these different priorities in certain kinds of complicated cases. The Japanese accent case was one of the ones that we looked at. OK? And I think that's it for this review slide. Was there something else? Oh, OK. Is everyone happy with all of this?
Is there any of this that people are looking at and going, no, wait. How did-- how did we see that? Or what does that mean, or anything like that? Just an attempt to make sure that we're all in the same ballpark here. Yeah? STUDENT: Could you clarify what "feature" means? NORVIN RICHARDS: Oh. All I mean by "feature" are things like voiced, or sonorant, or those kinds of things. Yeah, good question. And a standard way to write rules involves putting features in brackets like that. So the fancy way to say something is voiceless is to say that its value for the feature voice is minus. So it is not voiced, yeah. That's all that means. Yeah? OK. When we ask you to do phonology, we will tell you whether we want you-- to what extent we want you to be formal about how you write your rules. So I think the problem set that I just sent out asks you to look around for possible rules in-- what is it? It's Greenlandic data, I think. You don't have to sweat this level of formality in your rules. So if your rule for that problem set-- so if you think you've figured out what's going on, you can just tell us, more or less. Don't feel like we're going to count off for not using features or not writing your rules this way. OK. All right, then. Now I told you just now-- I tried to convince you earlier-- that there are cases where-- the Japanese accent case was the one that we spent the most time on, where it's fruitful to think of the phenomenon as involving a bunch of ranked priorities, a bunch of constraints that are all pulling in different directions. And you solve that problem by deciding which is the most important one, which ones are more important than which other ones, ranking your various constraints with respect to each other. There are other cases, though, that are problematic for that way of thinking about things. And there's a big literature on trying to reconcile these kinds of observations.
On the one hand, the observation that there do seem to be these conspiracies where a language is doing the same thing in a variety of ways, which is the kind of thing that makes people want to think in terms of ranked constraints, and on the other hand, the kinds of cases, which we've talked about before, where it looks more like what's happening is that you have a bunch of rules. And they're applying in an order. And they don't necessarily all get obeyed. We did this on the board-- when was it? Last time, I think. But let's do it on the slide a little more carefully. So cast your minds back to when we were talking about Lardil. I tried to convince you-- we worked out an analysis of Lardil in which, again, bare nominative singular-- Lardil doesn't mark number. But bare nominative forms of nouns undergo various kinds of sound changes. So Lardil has words that underlyingly end in a "k," for example. And there was a general rule in Lardil that said if you have a word that ends in a "k," get rid of the "k." So I'll start writing these down. Some Lardil rules-- one of them was eliminate-- and I won't try to write them formally. Eliminate final "k." So the word that we think is underlyingly "wangalk," the word for boomerang, in its nominative form is "wangal." And the reason we think that it's underlyingly "wangalk" is that if you add any suffixes to it, you get to see that "k." So the accusative, I think I told you, was [LARDIL]. So the "k" shows up, and in other forms as well. We thought there was another rule that said a final "u" becomes "a," yeah? And that was why "kandu," which is the word for "blood" underlyingly, becomes "kanda" in the nominative. And in the accusative, it's [LARDIL]. That's why we think it starts out as "kandu." So we have these two rules. And what we said when we were doing Lardil before was something like we might want these rules to apply in a particular order. In fact, I see that I have written them in the wrong order.
If you were to take the word "ngaluk," which is the underlying form for "story," and apply this rule and then this rule, well then, you would eliminate the final "k." That would get you ngalu. And then you would change the final "u" to "a." And then you would get an "ngala." But that's not, in fact, the nominative form for "story." The nominative form for story is ngalu, yeah? And so what I said at the time was one way of thinking about this, a popular way of thinking about this is, to think there are these rules. And it's like an assembly line. The first station is the one where you do this, where you get rid of final "u." And then after you're done with that, you get handed off to another place in the assembly line where they eliminate final "k." And what's happened with that "ngaluk," underlying "ngaluk," is that it got handed to the first station first. First station said, well, this word, "ngaluk"-- it doesn't end in "u." I'm not going to do anything. And then it gets handed to the second station. The second station says, oh, I need to get rid of the final "k." It'll be a "ngalu," by which point the people in the first station might have been saying, wait, wait. Let us have that back. But it's too late, was the idea. Yeah, so these are rules that apply in a particular order. That's the way we were talking about Lardil. Does that all sound vaguely familiar? There are these procedures, and the procedures happen in order. But if we imagine, thinking about things in an optimality theoretic kind of way, well, it's not hard to come up with constraints that say this. Star, final-- I have to write it in capital letters because that's what you do in optimality theory. No final "k." No final "u," yeah? And those constraints would look at "ngaluk"-- or sorry, "wangalk," and they would say, no, we can't have the final "k." It'll be one "wangal." And they would look at "kandu." And they would say, no, can't have the final "u." It's got to be "kanda." 
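The assembly-line picture described above is easy to make concrete. Here is a minimal sketch in Python (the function names are mine, not from the lecture): each "station" is a function, the stations run in a fixed order, and each one only sees the output of the one before it. That ordering is exactly what derives "ngalu" rather than "ngala."

```python
# A sketch of ordered rewrite rules for the Lardil nominative.
# Station 1 runs before Station 2, so a "u" that is only exposed
# by k-deletion is never changed.

def final_u_to_a(word):
    # Station 1: a final "u" becomes "a".
    return word[:-1] + "a" if word.endswith("u") else word

def drop_final_k(word):
    # Station 2: eliminate a final "k".
    return word[:-1] if word.endswith("k") else word

def nominative(word):
    # The order matters: by the time the "k" of "ngaluk" is dropped,
    # Station 1 has already looked at the word and passed it along.
    return drop_final_k(final_u_to_a(word))

# nominative("wangalk") -> "wangal"   (boomerang)
# nominative("kandu")   -> "kanda"    (blood)
# nominative("ngaluk")  -> "ngalu"    (story) -- NOT "ngala"
```

If you reverse the two stations you get "ngala" for "story," which is the wrong form; the ordering is doing real work.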
It might have other constraints that are responsible for helping you decide the best thing to do with "kandu" is to change it to "kanda" instead of "kandi," or "kand," or any of a variety of other things you could have done. So it's not all that hard to have constraints that say there's something wrong with "wangalk." There's something wrong with "kandu." But it's not easy to figure out what to do next. So if you have those conditions that say Lardil cannot stand words that end in "k" or words that end in "u," that's the kind of thing we were saying about Japanese accent. Japanese doesn't like accent on the last syllable. So avoid it if you're making a compound. Those generalizations hold for "boomerang" and for "blood." But they don't hold for "story." The one about not having a word ending in "u" doesn't hold. Yes? STUDENT: You talked about no final "k," no final "u" as being part of optimality theory. NORVIN RICHARDS: Yeah. STUDENT: Is it possible to satisfy both if you were to do "ngala"? NORVIN RICHARDS: Mm-hmm. STUDENT: Does optimality theory support each of the rules? NORVIN RICHARDS: So that's just it. It doesn't, right? So the conditions-- the sense in which it has something like an order is that the conditions are ranked with respect to each other. There's the one that's most important and the one that's next most important, and so on. But if we were to build a tableau, we would have these two conditions. Don't have final "k." Don't have final "u." And we would start off with "ngaluk." And we'd consider keeping "ngaluk." And we'd consider getting rid of the "k" at the end. And maybe we'd consider "ngala," right? And this one would get knocked out by star final "k," right? Because it would end in "k." And this one would get knocked out by star final "u," yeah? And this one would pass both of those conditions.
So if we just did a tableau like this, if we said, Lardil seems to not like this and not like that, and we have evidence for that in these other cases, it seems as though "ngalu" ought to become "ngala." But that's wrong. So the fact is this is the one we want to win. And it's hard to see, once we've posited these conditions, how we can get this to win. This seems like it ought to be better once we've posited these conditions. So the puzzle-- and there's something people work on is try to figure out, why is that second one better than the third one? Yeah. STUDENT: How about we rephrase the no final "u" rule? NORVIN RICHARDS: Yeah. STUDENT: We rephrase it as no final "u" unless it's original form ends with a "k." NORVIN RICHARDS: Ah, OK. So we could have a rule that says-- that's nice. We could say, no final "u." This is another way of saying what you just said. No final "u" that was final in original form. Yep. So we need more capital letters, but it's OK. So yes, if we state it that way, then this passes that version of it. And as long as we have some other condition that says, don't change vowels just for the fun of it, don't change "u" because you want to, which we surely need, right? If you give Lardil a word that has an "u" in it that's not at the end, it's not going to change. Then yes, then this would win. That's right. That is indeed one of the kinds of things people do in order to handle this kind of problem. They make these conditions more complicated because now the conditions we've had so far are all about the form that gets pronounced. They say things like the form that gets pronounced had better not end in a "k," or the form that gets pronounced had better not end in a voiced obstruent if we're doing Polish, or it'd better not have two stridents at the end of the word in a language like English. This is a more complicated constraint. It says it had better not end in this kind of sound if it ended in this kind of sound originally, before we started. 
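The tableau reasoning can also be sketched in code. This is an illustrative toy, not a real OT implementation: with surface-only constraints, "ngala" is wrongly the sole survivor, while the amended constraint from the class discussion (penalize a final "u" only if it was also final underlyingly), plus a crude "change as little as possible" tie-breaker, picks "ngalu."

```python
# A toy candidate evaluation (assumed, not from the lecture).
# Each constraint returns True when the candidate violates it.

def star_final_k(candidate, underlying):
    return candidate.endswith("k")

def star_final_u(candidate, underlying):
    return candidate.endswith("u")

def survivors(underlying, candidates, constraints):
    # Keep only candidates that violate none of the constraints.
    return [c for c in candidates
            if not any(con(c, underlying) for con in constraints)]

def cost(candidate, underlying):
    # Crude faithfulness: count changed positions plus dropped segments,
    # standing in for "don't change vowels just for the fun of it."
    changed = sum(c != u for c, u in zip(candidate, underlying))
    return changed + abs(len(candidate) - len(underlying))

candidates = ["ngaluk", "ngalu", "ngala"]

# Surface-only constraints wrongly leave "ngala" as the sole survivor:
wrong = survivors("ngaluk", candidates, [star_final_k, star_final_u])
# wrong == ["ngala"]

# The amended constraint: only penalize a final "u" that was also final
# in the underlying form.
def star_final_u_amended(candidate, underlying):
    return candidate.endswith("u") and underlying.endswith("u")

ok = survivors("ngaluk", candidates, [star_final_k, star_final_u_amended])
winner = min(ok, key=lambda c: cost(c, "ngaluk"))
# winner == "ngalu", the attested form
```

The point of the sketch is just the contrast: the fix requires a constraint that looks at the underlying form, not only at what gets pronounced, which is exactly the power under debate.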
So there's an actual debate going on in optimality theory about whether we want to allow ourselves that power, allowing constraints that make reference not just to the pronounced version but to an earlier version. But you're absolutely right. That's a move people make, people argue for. I'm bringing this up just to show you there's this tension between these two ways of thinking about phonology. And you've just raised one of the ways, one of the ways that, indeed, has been taken in the literature to try to figure out how to deal with this tension. But if you thought that linguistics was finished, like we understood everything, no. Yeah, so this is one of the kinds of things that linguists still fight about-- how to deal with these kinds of tensions. Yes? STUDENT: I think we can still think of it as an assembly line. NORVIN RICHARDS: Yeah. STUDENT: If we think of it as an assembly line, and we throw at that, we can make it like a priority then. NORVIN RICHARDS: Uh-huh. STUDENT: We can make it like an assembly language. NORVIN RICHARDS: Uh-huh. STUDENT: We can form rules. NORVIN RICHARDS: Yes. STUDENT: Actually enforce the priority. NORVIN RICHARDS: Yeah, yeah. Yeah, so that is another move that people make. Sorry, this slide has gone away. That is another move that people make, where they say, well, there are various ways of interpreting what you just said. One move that people made, especially early on when people were worried about this problem in optimality theory, was to say we're going to do something in optimality theory that allows us to simulate having rules in an order. So what we'll do is we'll have a word go through the tableaux more than once. Or the conditions can be in different orders in the different tableaux. So we'll have a tableau first that says final "k" is very bad, let's say. And it doesn't have final "u" is bad, right? And then we'll have another tableau that maybe doesn't have final "k" is very bad, but has final "u" is bad, right? 
And that's a way to simulate ordered rules, basically, is to say we go through tableaux multiple times. And the tableaux look different from each other. They have different conditions in them, or they have the conditions in different orders. That's one way to talk about it. You might have had a more clever idea, though, which is the tableaux-- I was talking as though these rules are ranked with respect to each other. You know which ones are more important than which other ones. But optimality theory-- people are generally talking not as though those happen one at a time. It's just you're considering all these possibilities. And you decide which one is best by all of these conditions. You sound like you're thinking about something different where you would say you'd go back to the simple one and you'd say, we start with "ngaluk." And what we want is something that says something like, well, I should have put star final "u" first. I'll just reverse these. We have all of our candidates here. And we first consider a star final "u." Uh-oh. But if we consider a star final "u," star final "u" will look at "ngalu," and it will knock it out, right? So that will be bad right there. And we won't ever get it back if we do that. So what we want is something more like using the tableaux to do something like ordered rules, something that says we look at this first thing. And we are now basically doing rules in an order. We look at this first thing and we ask ourselves, what's "ngaluk"? Well, it doesn't violate this first rule. And so we'll keep it. And then we'll consider the next rule. It does violate that rule. And so we'll change it. We'll make a minimal change to it. This is a way of converting tableaux into ordered rules, I think. That's what we're now doing. And it could be. These are all things to think about-- open research questions. Cool. OK, so the goal here was to get all of you to try to do better than linguistics has so far done. 
So right now we have these two ways of thinking about phonology. And they are in some kind of conflict with each other. And there's active work on trying to figure out how to get them to mesh. OK. Cool. Oops. Where are we? Here we are. Sonorants and obstruents, we did that. We did that. OK, so let me introduce you-- we're running-- yeah, yeah, yeah. Let me introduce you to another class of sounds. Again, the point is to show you ways in which it's useful to categorize sounds. And the way to try to convince you of that is to show you phenomena that care about certain classes of sounds and not others. Here are a bunch of Arabic nouns together with their definite article, which you can see is "al." English has borrowed a bunch of Arabic words that have kept this "al." So we have words like "algebra," for example, or "algorithm," where the "al-" at the beginning is this word for "the." So here are some Arabic nouns with the word for "the" in front of them. But there's this other class of nouns, where the word for "the" changes. It's no longer "al." What it gets-- it still starts with "a," but the consonant in it becomes a copy of the consonant that's after it. So the word for sun, which is [ARABIC].. The sun is [ARABIC]. It's not "al-" [ARABIC]. Here I am pretending to speak Arabic. Is there anyone here who actually speaks Arabic? OK, cool. I'll just keep pretending, then. OK. This is a classic observation. Arabic linguists noticed this about their language. They noticed that the relevant condition is that there are certain kinds of sounds that get the phenomenon on the right, where the definite article ends in a copy of the first consonant of the following noun, and other consonants that don't do that. They're called "moon letters" and "sun letters" in the Arabic tradition because, well, "moon" is one of the words in the column on the left. And "sun" is one of the words in the column on the right. It's a mnemonic for these classes of sounds.
So the moon letters are letters like "kuh," and "fuh," and "k," and "h," and glottal stop. And the sun letters are sounds like "sh," "d," "z," "n," and "th." The sun letters-- yes. The sun letters are the ones that I've got in the box there. They are what are called coronals. So coronals are the interdentals and alveolars and the postalveolars. If you think about them-- so feel free to sit there making those noises to yourself if you want to-- what all of them do is make a consonant sound that's articulated with the tip or the blade of your tongue-- so if the front of your tongue is touching some part of the top of your mouth or other places. So the interdentals-- your tongue is sticking out between your teeth, or the alveolars, or the postalveolars-- the tip of your tongue or the blade of your tongue is coming into contact almost with the top of your mouth as opposed to, say, bilabials, where your tongue can do whatever the heck it feels like, or velars, which involve your tongue, or uvulars, which involve your tongue. But it's the back of your tongue, "k," or "g," as opposed to the front of your tongue, which is what you're using for things like "t," and "d," and "s," and "th" and "sh" and "zh." Is that clear? That distinction clear? And Arabic gives a nice illustration of this. The coronal sounds are the ones that are the sun letters. They're the ones where the last consonant of the definite article just becomes a copy of the following consonant. Notice that the last consonant of the definite article underlyingly is "l," which is itself coronal. So it's like the end of the definite article is going to be coronal. If nothing else happens, it's going to be "l." But if there's a coronal right after it, it just becomes a copy of the following coronal. Does that make sense? That's what's going on in all of these examples.
So with "moon," and "mare," and "book," and "war," and "father," the "l" of the definite article can't become a copy of the following consonant and remain coronal, whereas in the sun letters, the ones on the right, by copying the following consonant, the "l" of the definite article is still a coronal. [ELECTRONIC CHIME] Bling. Somebody had a great idea about that. So that's what's going on in these Arabic examples. A rule that we could have for the Arabic definite article-- yes, I just said this. Thank you. "l" is also coronal. A rule we could have for the Arabic definite article would say in the definite article, an "l" becomes a copy of a following coronal consonant. So we have a variable, consonant subscript "i" which says whatever the following consonant is if it's coronal, the "l" will also get the value consonant sub i, but it'll have the same form as the following consonant. Yeah? STUDENT: I had a question about coronals. So what I noticed is the sun letters are the letters that you can maintain for a long time, like "sh," and like the "sh," like [ARABIC] in sun, for "d" in [ARABIC]. NORVIN RICHARDS: Yeah. STUDENT: You maintain them for a long time. NORVIN RICHARDS: Yeah. STUDENT: And Arabic actually has [ARABIC],, which duplicates one letter. NORVIN RICHARDS: Yeah. STUDENT: So what I'm thinking is that when you can duplicate the letter, you just duplicate it. NORVIN RICHARDS: Uh-huh. STUDENT: With [ARABIC],, and other words. NORVIN RICHARDS: Yeah. STUDENT: But for [ARABIC],, just the moon, you can't do that. NORVIN RICHARDS: Yeah. But there are-- STUDENT: Do you see what I'm saying with that? NORVIN RICHARDS: I see what you mean. But look. There are consonants like "p," and "t," and "k," which are all stops. But "t" is coronal and "p" and "k" are not, right? And so all of those are equally hard to maintain for a long time, I think.
Or similarly, I think what we're hoping-- and you can stop me if this is not true-- is that if there are Arabic words that start with "m," you'll get al before an "m," yeah? But for Arabic words that start with "n," you get a copy of the "n," yeah? And again, those are all equally easy to hold on for a long time. So this is a good example of science. You had a good hypothesis. And now we're finding data, yeah. And I should have called on you to say these Arabic words. Yeah, Joseph? STUDENT: So you mentioned that the English language has borrowed quite a few words from Arabic. NORVIN RICHARDS: Yeah. STUDENT: And we accidentally brought the definite article with it. So the word "algebra." NORVIN RICHARDS: Yep. STUDENT: "Algebra." At least, the English pronunciation starts with coronal. NORVIN RICHARDS: Oh, yeah. But in Arabic, I think it doesn't. So maybe you know. Do you know what "algebra" is in Arabic? STUDENT: I believe it's [ARABIC].. NORVIN RICHARDS: [ARABIC] STUDENT: [ARABIC] NORVIN RICHARDS: Oh, OK. So I got it backwards. Yeah, so we kept the "l." We didn't do the change. So it starts with a "j," starts with a coronal. But it didn't do the change. I wonder whether we borrowed it before Arabic did this sound change. I have no idea. I don't know enough about the history of Arabic. Cool. Yep. All right. So this is one rule. It's a very common rule, where one sound becomes more like another sound. It's called assimilation. And in this particular case, it's what's sometimes called total assimilation, where one sound just becomes a complete copy of another sound. We'll see other cases where one sound just changes in ways that resemble a nearby sound. Why do you do assimilation? Languages do assimilation all the time. And it's not hard to have pretheoretical notions about why, right? You're saving wear and tear on your articulators. So you're making it so that your articulators-- your tongue, your lips, your velum don't have to do as much moving around.
Things don't move as far or don't move at all. It probably also makes perception easier in the sense that if you're speaking Arabic and instead of an "l"-- so let's take the sun. If you know that the word for "sun" is [ARABIC],, and you've got this thing before it, basically, it's as though you only need to give your hearer enough information to help them to know that they're hearing a definite article before the word for sun. And Arabic has decided it's enough if you hear the glottal stop and the vowel, and then a coronal. Underlyingly, that coronal, maybe, is an "l." But as long as we keep it coronal, we've given the hearer enough that they get to hear it. And in fact, we give the hearer, in a sense, two shots at hearing the beginning of the word for sun. So we'll lengthen that sound. We'll pronounce it twice. So we're redistributing the signal, in a way, to the hearer to make their life arguably easier, giving up some of what makes the definite article distinctive in order to emphasize some of the beginning of the word for sun. Yeah. Questions about any of this? Yes. STUDENT: So would you say generally that the focus is just to facilitate the pronunciation of the word easier. NORVIN RICHARDS: So I was trying to float the idea that it's partly to make the pronunciation easier. It's so that your articulators are not having to move as fast or as far. But it might also make perception, in some ways, easier in a case like this. So the idea is we'll give up some of the special properties of the definite article in favor of emphasizing or holding for longer the special properties of the word for sun, or whatever, the consonant that's at the beginning of the following noun. So we'll emphasize some parts of the-- in some ways, the least predictable part of this. You presumably hear definite articles all the time. It's not so important that you get lots of clues to the fact that this is a definite article. The nouns are going to vary. 
You'll hear more different kinds of nouns, yeah. Yep. OK, cool. This is probably a good place to stop. So let's stop here. And have a good weekend. And I'll see you guys next week.
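The Arabic definite-article rule discussed above can be sketched as a little function. The romanized letter sets below are illustrative, not a complete inventory of Arabic sun and moon letters, and the hyphenation is just for readability:

```python
# A sketch of total assimilation in the Arabic definite article:
# "al-" keeps its "l" before a moon letter, but the "l" becomes a copy
# of a following coronal (sun letter). Letter sets are illustrative
# romanizations only.

SUN_LETTERS = {"t", "d", "s", "z", "n", "r", "l", "sh", "th"}  # coronals

def definite(noun):
    # Check two-character romanizations ("sh", "th") before single ones.
    for c in sorted(SUN_LETTERS, key=len, reverse=True):
        if noun.startswith(c):
            return "a" + c + "-" + noun   # l -> copy of the coronal
    return "al-" + noun                   # moon letter: "al" unchanged

# definite("qamar") -> "al-qamar"  ("the moon": q is a moon letter)
# definite("shams") -> "ash-shams" ("the sun": sh is a sun letter)
```

Note how the output before a sun letter still begins with a coronal, and the hearer gets the noun's first consonant twice, which is the redistribution-of-the-signal idea from the discussion above.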
Lecture 19: Semantics, Part 3
[SQUEAKING] [RUSTLING] [CLICKING] NORVIN RICHARDS: Last time I introduced you to something strange. This whole semester maybe has been kind of strange. But last time I tried to get you to take seriously the idea that in addition to the kind of movement that we were talking about at the beginning, the kind where you took some phrase and caused it to be pronounced in a place that wasn't the place that it started out, the way we were doing wh- movement, and NP movement, those kinds of movements where you can see why it's called movement. The result of movement is that the thing is no longer where it was before. That's kind of what you expect with movement. Last time I was trying to get you to believe that there's another kind of movement which exists and moves things, but which does not change the order in which the words are pronounced. There are several examples of this kind of movement. But the one I was introducing you to last time is called QR, which is short for "quantifier raising." So it was a proposal about how to get ambiguities-- like the ambiguity in "Everyone in this room speaks two languages," which we talked about. This could mean either everyone in the room is bilingual. Or there are two particular languages that everyone in this room has in common. And maybe some of them are bilingual, but maybe some of you speak three languages, or four, or five. So on one reading, it means there are two languages that everyone speaks. Maybe we all speak English and French. And some of us all speak other languages. In the other reading it means, for everyone in this room, the following is true. They speak two languages. Maybe we have no languages in common. So one of us speaks Swahili and German. And the other one speaks Burmese and Quechua, and so on. Yeah, so we all agreed that it had those two readings.
And what I said was, there is a proposal, in fact, it's the default assumption in the literature, about how to deal with this kind of ambiguity, is to say, there is an optional operation that takes one of those quantifiers and moves it past the other one. And the order in which you interpret the quantifiers is determined by the position that they're in after you do that movement. But in English, this movement is invisible. So you have the option of moving "two languages" to a position above "everyone in this room." And if you do that, you only get the reading, there are two languages that everyone in this room speaks. And if you don't, then you only get the reading, it's true for everyone in this room that they speak two languages. So that string of words is ambiguous. But that ambiguity is, in the end, a structural ambiguity, just like the ambiguity for "I shot an elephant in my pajamas," So in "I shot an elephant in my pajamas," we ended up deciding that's a structural ambiguity. That is, that string of words has two different trees that it can be associated with. Then the two different trees have different meanings. And that's-- and we saw lots of evidence for that. The claim that's underlying quantifier raising is that this is also a structural ambiguity. That string of words has two trees that are associated with it. And the weird thing about QR is that one of the trees involves a movement operation that you can't see-- that's the claim. One of the interpretations is related to a tree that, if you pronounced the movement, would be "Two languages, everyone in this room speaks." And the claim was English, well, does its QR in a way that you can't see. I showed you Hungarian apparently does its QR in a way that you can see. You get to see these quantifiers moving back and forth. And that's where we were. I'm sort of repeating that partly to make sure that nobody is thinking that there was something that they were supposed to understand that they didn't.
This is supposed to seem creepy at this stage. Here I am saying that things are moving in an invisible way. You're not supposed to come out of that thinking, oh yeah, sure. Are there questions about that? Are you all in the appropriate emotional state? Yes, I just said all that, thank you. So this is our first case of covert movement. And because it's late April, it's probably the only kind of covert movement that we'll have a chance to talk about. There are other kinds, but this is one of the big ones. So I gave you some reasons last time to take seriously the idea that QR exists. And we even talked a little bit about some of the conditions on it. So, for example, we saw that QR cannot get out of a TP. So it can't get out of the clause that it's in. We saw some evidence for that. What I want to do today is show you another condition on it. This is something that my colleague Danny Fox discovered. And I'm telling you about it because I think it's really cool. So here's a condition on QR, which is really interesting. Here's another case of QR, so "Someone loves everyone." If you do QR, then this means no one is unloved. So everyone has this property, someone loves them. If you don't do QR, then it means there is a particular person, my grandma maybe, who loves everyone. So there are these two quantifiers. And by doing QR, you change the relationship between them in a way that changes the meaning of the sentence. Looking at this, you might wonder, is QR just always optional? So suppose I am looking at "John loves everyone." Well, here there's only one quantifier. So you might have thought that the simplest theory, the easiest theory would be one that said, whenever you have a quantifier, you can optionally do QR to it or not. And if you do QR to it, then if it goes past another quantifier, then congratulations. You've changed the meaning. And if not, well, I hope you enjoyed doing QR, but it didn't do anything.
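The two readings of "Someone loves everyone" can be made concrete with a toy model (the names and the "loves" relation below are made up for illustration). Without QR, the sentence says one particular person loves everyone; with QR of "everyone" above "someone," it says everyone is loved by someone or other. In a model where the love is spread around, the two readings come apart:

```python
# A toy model for the two scopes of "Someone loves everyone".
# Names and facts are invented for illustration.

people = {"ann", "bob", "cam"}
loves = {("ann", "bob"), ("bob", "cam"), ("cam", "ann")}  # (lover, loved)

# Surface scope (no QR): there is one person who loves everyone.
someone_loves_everyone = any(
    all((x, y) in loves for y in people) for x in people
)
# False in this model: no single person loves all three.

# Inverse scope (after QR): everyone is loved by someone or other.
everyone_is_loved_by_someone = all(
    any((x, y) in loves for x in people) for y in people
)
# True in this model: each person has at least one lover.
```

The nesting of `any` and `all` mirrors the scope of the quantifiers: whichever operator is on the outside corresponds to the quantifier that ended up higher in the tree.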
You could imagine that it would work like that, that QR would just be something that you could always optionally do. My colleague Danny Fox has discovered that this is not how it works. So this is a case where QR wouldn't do anything. And what Fox discovered is that when QR wouldn't do anything to the meaning, it's not possible. It's going to take a while to show you the evidence for that. We're going to have to develop another detector for QR. But we'll do that, and then we'll be in a position to demonstrate that this is true. So let me show you. The detector is going to involve a phenomenon called VP ellipsis, which I think we've touched on every so often, but let's touch on it again. So in a sentence like "John bought a book and Mary did too," ellipsis refers to this kind of thing you can sometimes do where you leave out part of the sentence. You just don't pronounce part of the sentence in this case. This is called VP ellipsis because there where I've got that underscore, there would have been a VP, but there isn't. Sentences like this, the missing verb phrase gets interpreted as being the same as some other verb phrase that's salient. Often it's a verb phrase that someone has said. So "John bought a book and Mary did too," this has to mean something like John bought a book, and Mary bought a book too. So the missing verb phrase is understood as being the same as the other verb phrase that you can see. Yeah, is this clear enough? Lots of interesting work on what exactly counts as the same. So there are some clear cases. If I say "John bought a book and Mary did too," Mary-- what Mary did was buy a book. You don't just get to randomly put in a verb phrase, your favorite verb phrase. It doesn't work that way. There are places where there's more wiggle room in what counts as the same. So if I say "John dislikes his father and Bill does too," that can mean a couple of different things. What's one thing it can mean?
[INAUDIBLE] AUDIENCE: John and Bill know each other somehow. And John dislikes his father. And Bill hangs out at John's house too much and also dislikes John's father. NORVIN RICHARDS: Yeah, so John dislikes John's father, and Bill also dislikes John's father. It is, indeed, one thing it can mean. What's another thing it can mean? AUDIENCE: Bill dislikes his own father. NORVIN RICHARDS: Yeah, John dislikes his own father. And Bill dislikes his own father. That's another thing it can mean. So "the same," what counts as the same, is an interesting question-- yeah. AUDIENCE: Could it also, depending on context, be that they both dislike a third person's father? NORVIN RICHARDS: I think it probably could, yeah, yeah, great. So Fred has a father that everyone dislikes. John dislikes his father and Bill does too-- yeah. AUDIENCE: The first two [INAUDIBLE] brothers. NORVIN RICHARDS: That's true, that's true. But if they're not, then they're not. Yes, no, you're absolutely right. Yeah, yeah, so when I say there's interesting stuff to talk about when we say that they have to be the same. So if John dislikes John's father and Bill dislikes Bill's father, are they doing the same thing? Maybe, kind of. They're both engaged in dislike of one's own father. Maybe that's what makes it close enough. And so there's a big literature on figuring out how to define the same in the relevant sense, which we won't have to worry about too much. But it's worth bearing in mind that there's this requirement. In fact, this requirement is crucial for the argument I'm going to give you in a second. So back to the guards. Remember this one, "A guard is standing in front of every building"? We said this has two readings, a normal reading where every building is guarded. And then you guys were very creative with coming up with other creative versions of this where a single guard is somehow guarding all the buildings at once. Maybe there's one very wide guard.
Maybe-- somebody was saying, yeah, maybe the buildings are all facing each other. And one guard is standing in the geometric center between all the buildings, yeah. Whatever, anyway, either there are as many guards as there are buildings, which is kind of the normal reading for this sentence, or there's one guard who is standing in front of every building, the kind of guard who, when he stands around the house, he stands around the house. So these are the two readings. Now, let's think about a sentence like, "An American guard is standing in front of every building, and a Canadian guard is too." So the first half of that can presumably mean two things. Yeah, me neither. There's one American guard for each building, or there's one very wide American guard. And the second half of that sentence can presumably also mean two things once you put in the ellipsis. So can the whole thing mean four things? I don't think so. In fact, it's not just me. People who know much more about semantics than I do don't think so either. So the four imaginable readings, the easy ones are, each building has two guards, one American and one Canadian, or there are two very fat guards, one American and one Canadian. The other logical possibilities where each building has one normal American guard and there is one gigantic Canadian guard, or the other way around, I don't think it can mean those things. And some of you were shaking your heads in ways that suggested that you agree with me. Is there anybody for whom the things I've crossed out they were like, yes, that's what it means. What is wrong with all of you people? So this looks like a place where the condition that's-- one way of interpreting this is to say, the condition that says when you do VP ellipsis, the elided VP has to be the same as another VP is kind of constraining what we can do in a particular way. So, again, the ambiguity in the first half of this sentence we decided had to do with whether you do QR or not.
You can do QR of "every building" to a position above "an American guard." And then you get the normal reading, every building has an American guard in front of it. Or you cannot do that, in which case there has to be one American guard who's standing in front of every building. And what we're learning is-- from crossing out the two readings that I've just crossed out, what we seem to be learning is if you do QR in the first half of this sentence, you have to also do it in the second half. And if you don't do it in the first half, you have to not do it in the second half. Does that make sense? That is, you either get the reading where you did QR in both, each building has two guards, or the reading where you did QR in neither. There are two gigantic guards. You don't get the mixed readings. And we can think of this as another instance of this requirement that if you're going to do VP ellipsis, your elided VP has to be the same as your other VP. So if you do QR in one, you have to do QR on the other-- yeah. OK, well this is kind of handy because what it means is-- so if parallelism extends to QR, if you do QR in one conjunct, you have to do it in both. What it means is, we now have another way of detecting QR. That is, we have a way of finding out whether you're doing QR in one clause. And that goes beyond just sitting and thinking about what the sentence means. So if we set up this kind of context where we've got VP ellipsis, we happen to know that you can only do QR in clause number one if you do it in clause number two. And that means that if we have a way of determining whether QR is happening in clause number one, we'll know whether it's happening in clause number two, regardless of meaning. Does that make sense? This will be handy for answering the question that I posed earlier on. So consider this sentence, "John is standing in front of every building."
Well, it's an odd sentence, but it doesn't matter whether you do QR or not because "John" is not a quantifier. He's just some dude. So QRing "every building" past "John" won't change anything. This sentence is always just going to mean every building has this property. If you look in front of it, there is John. That's the only thing this sentence can mean, doesn't matter whether QR happens or not. And so the question that I posed for you earlier was, if QR wouldn't change anything, can it still happen? And is QR-- in a way, our lives might be easiest if the answer was yes. We would get to say, QR is something that just randomly happens. You just always get to take quantifiers and move them to higher positions just for fun. This is what they do. They move. We're about to see, I think, I already spoiled this, that that's not how it works. Actually, you can only do QR if you have two quantifiers and you want to get one of them to be higher than the other-- if QR is going to affect the meaning. And I'm about to show you the evidence for that-- yeah, Joseph. AUDIENCE: What would be the set operation here, the building that John is standing in front of? NORVIN RICHARDS: So, yeah right, so what are the sets? So "every" is relating the set of buildings to the set of things that John is standing in front of. And it's saying every member of the set of buildings is a member of the set of things that John is standing in front of. So if you look at every building in the set, you will see John standing in front of it. That's what it says. And it doesn't matter whether you do QR or not. Yep, so a question, can you do QR? Answer, no. And here's how we find out: "John is standing in front of every building, and a guard is too." Now we have a way of finding out whether you can do QR in the first conjunct because in the first half of this sentence, John is standing in front of every building.
We can find out whether QR is happening there because we know that in this kind of sentence you can only do QR in the first half if you do QR in the second half and vice versa. And although it doesn't matter whether you do QR or not in the first half, it does matter in the second half. In the second half, a guard is standing in front of every building. That's our core QR sentence. We know that whether you do QR or not changes the meaning of the sentence. If it were possible to do QR in the first clause, then the second clause would be able to have a reading with QR. It would be possible for the second clause to mean every building has a guard in front of it, maybe different guards. Is that a reading it can have? So I think if I say-- you're shaking your heads appropriately, I think-- if I say "John is standing in front of every building and a guard is too," that can only mean that John and the guard are both very large, or both standing in the geometric center between the buildings that are around [INAUDIBLE]. It means one of those things. It doesn't mean, for example, there's one gigantic guard-- sorry, doesn't mean there's John, who is gigantic, and every building has a guard-- thank goodness because John-- somebody should keep an eye on John, clearly. It can't mean that, which is kind of striking because that's the normal meaning for "a guard is standing in front of every building." I mean, we're only entertaining the reading with the gigantic guard because, well, we're doing intro to linguistics. And this requires us to think about peculiar things. But that meaning goes away in this sentence. The normal meaning goes away. This sentence is odd-- odd not just in the first conjunct, but in the second conjunct. And this requirement of parallelism, this requirement that when you do VP ellipsis you can only do QR in one half of this clause if you are doing it also in the second half, this is, as I said before, giving us a new detector for QR.
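The two scope readings being contrasted here can be modeled with nested quantifiers over sets. This is only a sketch: the guards, buildings, and who-stands-where facts below are invented for illustration, not from the lecture. It shows how the surface-scope reading and the QR (inverse-scope) reading come apart on the same facts:

```python
# Two readings of "A guard is standing in front of every building,"
# modeled over hypothetical sets (all names and facts are made up).
guards = {"g1", "g2"}
buildings = {"b1", "b2"}
in_front_of = {("g1", "b1"), ("g2", "b2")}  # who stands in front of what

# Surface scope (no QR): there is a single guard who is in front of
# every building -- the "very wide guard" reading.
surface = any(all((g, b) in in_front_of for b in buildings)
              for g in guards)

# Inverse scope (QR of "every building" above "a guard"): every
# building has some guard in front of it, possibly different guards.
inverse = all(any((g, b) in in_front_of for g in guards)
              for b in buildings)

assert not surface  # no single guard covers both buildings here
assert inverse      # but each building does have its own guard
```

On these facts the QR reading is true and the surface reading is false, which is exactly the situation where doing or not doing QR changes the truth conditions.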
So we now-- in order to know whether we can do QR in "John is standing in front of every building," we don't have to-- we already know that it doesn't do any good to sit and stare at that sentence and try to think about what it would mean if you did QR or if you didn't. It doesn't change the meaning. But now we have a new detector for QR besides staring at meanings. We get to look at these parallelism examples. And what these teach us is, yeah, you can, in fact, do QR. And John is standing in front of every building. So cool fact, QR only happens if it changes the meaning, only happens if it's going to invert the scope of two quantifiers. OK, that's it for that bit about QR. New bit about QR, and then we're just about done with QR I think, which is also about VP ellipsis. I want to show you one other bit of an argument that QR exists, that it is a thing, that quantifiers actually are moving. There's a phrase which is moving, just in a way that you can't hear. So there's a movement operation that doesn't change the order of the words. I want to show you a new bit of evidence for that. This bit of evidence also involves VP ellipsis. So just again, to remind you, VP ellipsis, these are cases like this, the ones we've been talking about, where you've got a verb phrase that's missing. And it's interpreted as being the same as another verb phrase. So "John bought a book and Mary did too" means John bought a book and Mary bought a book too. And then there's this identity condition that says that your elided VP has to be the same as another VP. So in this case, it has to be "buy a book." There's interesting work on this actually because there are contexts where you can elide a verb phrase even though no one has uttered another verb phrase. So there's-- apparently there were a pair of linguists who used to demonstrate this at conferences. They gave a famous talk in which one of them advanced on the other one with a meat cleaver.
And the other, his coauthor, would say "Don't, don't!" which involves VP ellipsis. And no one has uttered the VP which he is trying to convey. It's just clear from context that what he means is "Don't hit me with a meat cleaver." But anyway, in these kinds of sentences where there is another verb phrase, the identity condition shows up. The two verb phrases have to be the same. Now, let me introduce you to an interesting kind of example. This is called antecedent-contained deletion. So let's think about sentences like "John will visit every city that Mary did." So there's VP ellipsis here. You can see after "did," there's this gap. What verb phrase are we going to fill in for that? Let's think about that. I think I next show you a tree. So here's a tree. "John will visit every city that Mary did." There's a little blank there. So that blank needs to be the same as a verb phrase. So we could fill in that verb phrase. But if we fill in that verb phrase, what we'll end up with is, "John will visit every city that Mary-- visited every city that Mary-- visited every city that Mary--" we will never stop interpreting this verb phrase. Do people see that? That's why it's called antecedent-contained deletion. There's an elided verb phrase that's contained in a verb phrase that you appear to have to copy into the verb phrase that's missing. And it's unclear why you can ever stop thinking about these sentences. Yeah, you ought to be stuck. And yet we don't have the sense that we are stuck when we hear this sentence. "John will visit every city that Mary did." You're like, yeah, I know what that means. So it's possible that you're wrong. You don't know what it means. But that doesn't seem right. Our intuition is that, yeah, it's not hard to interpret. If we believe in QR-- yeah, this way lies madness. Yeah, so we can't interpret the sentence this way. Is this clear what the problem is?
If there is such a thing as QR, so we'll do quantifier raising, we'll move that noun phrase out of there. Well, now there is a verb phrase, "every city"-- so "Every city that Mary did, John will visit." Now there's a verb phrase, visit, that we're going to put there. So it'll mean, "Every city that Mary visited, John will visit." And that seems to be what it means. So it means John will visit every city that Mary visited. So here's another reason to take seriously the existence of something like QR, that is the idea that there is an operation that moves quantifiers out of the position where they were and moves them to some higher position, in this case moves a quantifier, like "Every city that Mary did," at least out of the verb phrase. I attached it to the TP, doesn't matter-- moves it out of the verb phrase. It's what makes it possible to interpret things like antecedent-contained deletion, which seems to exist. And so we want an account of why it can exist-- yeah. AUDIENCE: Are we allowing for [INAUDIBLE]?? NORVIN RICHARDS: I should have drawn this tree differently. One of the problems with QR being invisible is that it's kind of hard to say what it looks like after you did the move. I'll try to fix this before I put the slide up. It would have worked just as well for me to create a new binary TP node on top of an earlier TP node. So don't allow yourselves to be distracted by the ternary branching in this slide. I'll try to fix that. Any other questions about this? This is another argument for this. At this point, all I hope to have done is to have gotten you to be willing to entertain the possibility that you would not have to be crazy to believe in the kind of movement that you cannot see-- [SNEEZE] Bless you. There is a lot of work-- [SNEEZE] --bless you-- on covert movement, so movements that you cannot see, developing tests for where it goes, and where it lands, and what drives it, and why you don't get to see it.
It's one of the big areas of linguistics, trying to understand what this is. And if this is a topic that interests you, I encourage you to take more linguistics classes. This is a major topic, something we work on. So that's it for QR. I'm not going to raise any more quantifiers. I'm going to show you something else. This will also be about quantifiers, but they will not be raising. We'll be able to interpret them right where they are. Let's consider some properties of quantifiers. Some proper-- some quantifiers have a property which is of interest. They are what's called downward entailing. What does that mean? It's a fact about these sentences, so "No American smokes" and "No American smokes cigars"-- first of all, they're both false. But if we think about the entailment relations between them, remember that we've said what quantifiers do is relate two sets to each other. In this case, "No American smokes" relates the set of Americans to the set of things that smoke. And it says the intersection of those is empty. And the second sentence relates the set of Americans to the set of things that smoke cigars. And it says the intersection of those things is empty. And here's the thing, the set of things that smoke cigars is a subset of the things that smoke. If you smoke cigars, then you smoke. If you smoke, you don't necessarily smoke cigars. You might smoke something else, cigarettes, say. I feel weird talking to undergrads about this because this is all antiquated technology presumably. I assume none of you smoke anything. But, anyway, back in the day, people smoked various things. So now, observation: there's an entailment relation between these two sentences. Which one entails the other? Yes. AUDIENCE: The first entails the second. NORVIN RICHARDS: Yeah, the first entails the second. If it's true that no American smokes, then it's true that no Americans smoke cigars. If it's true that no Americans smoke cigars, is it true that no American smokes? Maybe not.
Maybe they all smoke cigarettes. OK, cool. So this is a quantifier that's what's called downward entailing. That is if you take the second set and you make it smaller, you switch to a subset, then you get an entailment relation. So the second set, the set of smokers, if we switch from the set of smokers to the set of smokers of cigars, then we get an entailment relation between the two sentences. If the first sentence is true, the second sentence has to be true. That's a property of the quantifier "no," that it's downward entailing. I just said all that, thank you. So first sentence entails the second, and the second sentence does not entail the first. Or to put it another way, if we look at one of these Venn diagrams, we've got the set of Americans and the set of smokers. And "No American smokes" says that the intersection of those is empty. There's the set of cigar smokers. And "No American smokes cigars" says that the intersection of those is empty. And if the first of those is true, the second has to be true. And you can see by looking at all the circles-- yeah, that's a Venn diagram. So "no" is downward entailing. That is, if it's true that no A are B, and if C is a subset of B, then it is also true that no A are C. Yes, yes, yes-- is "every" downward entailing? So is it the case that if every American smokes, does it follow that every American smokes cigars? No. So not all quantifiers are downward entailing. "No" is downward entailing, but "every" is not. In fact, is "every" upward entailing? That is, if it's true that every American smokes-- sorry, if it's true that every American smokes cigars, is it true that every American smokes? Yeah. So there are downward-entailing quantifiers and there are upward-entailing quantifiers. Now, why am I telling you all this? Partly just because it's entertaining, but also consider sentences like "No one lifted a finger to help," or "No one contributed a red cent," or "No one saw anything."
These first two sentences in particular have literal meanings. So the first sentence can mean no one did this, and had that somehow helped. It could mean that. But it also has an idiomatic meaning. It's something like, nobody did anything at all to help. Nobody made the least effort to help. Similarly, the second sentence-- do people still use this expression? It's an expression people have heard, no one contributed a red cent. So it can mean literally, nobody put in a red penny, a penny that had been painted red. But it can also mean no one contributed anything at all. No one contributed any money. And no one saw anything. It means no one saw anything, OK, cool. Now, interesting fact about these kinds of expressions, they're picky about where they can show up. So you can say "No one lifted a finger to help," or "No one contributed a red cent," or "No one saw anything." But you cannot say "Everyone lifted a finger to help" unless you mean the literal meaning. So it can mean everyone in unison like this helpfully, it can mean that. But it can't have the idiomatic meaning. It can't mean no one did anything. Ditto for the second sentence, "Everyone contributed a red cent"-- OK, maybe you ended up with a pile of pennies that were painted red, but it can't have the idiomatic meaning. So these kinds of expressions, "lift a finger," and "a red cent," and "anything," have these funny constraints on their distribution. And so far, we've said that "no" is downward entailing and that "every" is upward entailing. And so we could imagine various ways of accounting for the facts so far. We could say, for example, that these are expressions that need to be near a quantifier which is downward entailing, or that they need to not be near a quantifier which is upward entailing. So given the data that we have so far, both of those would be stories we could tell. Yes. I'm sorry, I can tell we're in the part of the semester where everyone is tired.
And I've just assigned you lots of things. And also, I am talking too much. So I should do more standing here, like allowing you to put these things in. So here, let me try to fix that with this next slide. Here are some quantifiers. Here are some quantifiers. "No," "every," "few," "a few," "more than ten," "less than ten," and "exactly ten." This is chalk. I can tell that if I use this piece of chalk, I would be scratching the board, which will not be fun for anybody. So we're going to figure out which of these are downward entailing. And then we've got "no" and "every," where we know that "no" is downward entailing and "every" is not downward entailing. I put this in the same order-- then we've got "few," and "a few," and "more than ten," and "less than ten," and "exactly ten." OK, cool-- now let us consult our intuitions about this. Is "few" downward entailing? So what we're asking is this: to say that "no" is downward entailing is to say that "No Americans smoke" entails "No Americans smoke cigars." Let me just pause to allow you to be thankful for the fact that we live in the age of slides as opposed to the age-- which wasn't so long ago-- where my job would be to write everything on the board. And then your job would be to write down the things that I wrote. Your lives are better than they would have been not so long ago. Maybe my handwriting was better back then, who knows. So "no" is downward entailing because if it's true that no Americans smoke, it must also be true that no Americans smoke cigars. How about "few"? So what's the relationship there? If I say few Americans smoke, does it follow that few Americans smoke cigars? I think it does. So "few"-- there's a reason that when we were talking about sets, I was careful to show you quantifiers like "no," and "every," and "some." "Few" is a little harder to describe in terms of sets. It's something like "The intersection of these sets has a small number of things in it," something like that.
How about "a few"? So if a few Americans smoke, does it have to be true that a few Americans smoke cigars? No. Maybe there are a few Americans who smoke. And all of them smoke hookahs or whatever-- yeah. AUDIENCE: Would that not also be true for "few"? NORVIN RICHARDS: A few. AUDIENCE: That few Americans smoke, that all the ones that do smoke cigarettes. NORVIN RICHARDS: What do people think? If I say "Few Americans smoke," do we get to conclude that few Americans smoke cigars, or is it possible that all of the few Americans who smoke, smoke something else? Hmm, I'm going to put a question mark next to "few" because I think that's a good question. For "a few," I think it's pretty clear that the answer is no. This relation does not hold. OK, how about "more than ten"? So more than ten Americans smoke. Would it follow that more than ten Americans smoke cigars? No. Was that a question-- just stretching, OK. If less than ten Americans smoke, do less than ten Americans smoke cigars? Yes. And if exactly ten Americans smoke, do exactly ten Americans smoke cigars? No, cool. All right, good. Let's see what the slide says. Yeah, OK. So for the slide, I ended up claiming that "few" is downward entailing. It's interesting to think about the case that you're raising. Trying to figure out what we think. Doo, doo, doo, doo, doo-- now, oh yes, so now having done that, let's think about these expressions. So we've already said, it's OK to say "No"-- let's see, "No American did anything." But expressions like "anything," which are weirdly sensitive to the properties of the quantifiers that are around them, they want to be near a downward entailing-- maybe that's one theory that we could fool with-- they want to be near something that's downward entailing. So I'll put a column here for "anything." And we've said that "anything" is OK with "no," and it's bad with "every." So "Every American did anything," that's not possible. How about "Few Americans did anything"?
It's fine. How about "A few Americans did anything"? No. How about "More than 10 Americans did anything"? "More than 10 Americans did anything"? No. And then "Less than 10 Americans did anything"? I think, yep. "Exactly 10 Americans did anything." Some of you are shaking your head, and some of you are nodding. "Exactly 10 Americans did anything about the problem." You're all wiggling at me. [LAUGHS] OK, so-- I once taught in a summer school in India. It was a wonderful summer school. There were all these brilliant students. It was really great. We were in the foothills of the Himalayas. It was this gorgeous spot. And I had a great time teaching in the class for all of these reasons. Also, the food was wonderful. Although whenever I said that to the Indian students, they would look at me and smile because clearly I had no idea what good Indian food was supposed to taste like. But one of the fun things, many fun things about teaching there was that, as I said to them at one point, I come from a place where we have several things we do with our heads. We can do "yes" with them, and we can do "no." And they did those things in India. But they also did this. And I could never figure out what they meant. [LAUGHTER] Sometimes they just meant, wait, we will think about that some more. It seemed to be the equivalent of "Hmm." Anyway, OK, so these columns, even taking my bad handwriting into account, these columns kind of resemble each other. We have checks and X's in more or less the same places. The one place where there might be a difference has to do with "Exactly ten." So this was the place where you were wiggling at me. "Exactly ten students did anything about the problem." "Exactly 10 students lifted a finger to help." What's another example like this? Oh, "ever." Yeah, so "No one has ever eaten natto with avocados." It's not true. This is fine. But "Everyone has ever eaten natto with avocados" is bad.
"Few people have ever eaten"-- blah, blah, blah-- is fine. "A few people"-- no. How about "Exactly ten people have ever eaten natto with avocados"? I think that's OK. It's interesting that our intuitions about this are kind of shaky-- they caused us to wiggle. Notice that this is the place where these two quantifiers, or these two columns, stop resembling each other quite so closely. Up until that last line, you could get away with thinking these expressions, things like "ever," and "anything," and "lift a finger," they are things that want to be around something that's downward entailing. And those of you who were shaking your heads when I asked about "Exactly ten Americans did anything about this," you should be proud of yourselves, because your lives are easy lives. So that's bad. This is bad. Everything is good. On the other hand, if it's true that it's possible to say things like "Exactly ten people have ever eaten natto with avocados," then yeah, there's something else going on here. If we do the same exercise for upward entailing-- I won't make you do it-- what you can see is that the upward-entailing column is almost the mirror image of the downward-entailing column, but not quite. So "Exactly ten" is the one where they split. So upward entailing, again, what we're asking is-- if I say-- so we can do it first with "no": "No American smokes cigars." Would it follow from that that no American smokes? For "no," the answer is no. So no, it is not upward entailing. "Every" is upward entailing because if every American smokes cigars, it's also true that every American smokes. If exactly ten Americans smoke cigars, does it follow that exactly ten Americans smoke? No. So there could be a few more that smoke other things. So "exactly ten" is unlike all the other quantifiers in this list, in that it is neither downward entailing nor upward entailing. And if we ask, does it license an NPI, the answer is kind of wiggly.
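Since downward and upward entailment are just claims about all pairs of sets, you can brute-force check them over a tiny universe. The sketch below uses the lecture's set-talk definitions; everything else (the three-element universe, the function names, and "exactly one" as a small-scale stand-in for "exactly ten") is my own choice for illustration:

```python
from itertools import combinations

UNIVERSE = {1, 2, 3}

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

# Quantifiers as relations between a restrictor set A and a scope set B.
def no(A, B):
    return len(A & B) == 0          # empty intersection

def every(A, B):
    return A <= B                   # subset

def exactly_one(A, B):              # stand-in for "exactly ten"
    return len(A & B) == 1

def downward_entailing(Q):
    """Q(A, B) must entail Q(A, C) whenever C is a subset of B."""
    return all(Q(A, C)
               for A in subsets(UNIVERSE)
               for B in subsets(UNIVERSE)
               for C in subsets(B)
               if Q(A, B))

def upward_entailing(Q):
    """Q(A, C) must entail Q(A, B) whenever C is a subset of B."""
    return all(Q(A, B)
               for A in subsets(UNIVERSE)
               for B in subsets(UNIVERSE)
               for C in subsets(B)
               if Q(A, C))

assert downward_entailing(no) and not upward_entailing(no)
assert upward_entailing(every) and not downward_entailing(every)
# "exactly one" is neither, matching the point about "exactly ten."
assert not downward_entailing(exactly_one)
assert not upward_entailing(exactly_one)
```

The exhaustive check mirrors the table on the board: "no" comes out downward entailing only, "every" upward entailing only, and the "exactly n" quantifier fails both tests, which is why it is so useful for teasing the two NPI theories apart.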
So this is a place where we might wonder whether license-- I think I led us into this when I first introduced you to these expressions-- sorry, NPI is short for negative polarity item. These are these expressions, like "anything," or "ever," or "lift a finger," which are sensitive to the nature of the quantifiers that are around them. And tables like this are the kind of thing people construct when they're trying to figure out exactly what is it that they're sensitive to. Do they want to be near something downward entailing-- that's one popular theory-- or is it maybe they want to avoid being near something which is upward entailing? And those are almost the same thing, as you can see in those first two columns, but not quite. Quantifiers like "exactly ten" are helpful. Janice, Janice has worked on this. AUDIENCE: [INAUDIBLE]. There was an observation that I wanted to make about the "exactly ten." NORVIN RICHARDS: Yeah. AUDIENCE: But it seems to me that if you created a context where you said, when you put out a call for 100 people, volunteers, and exactly ten [INAUDIBLE], something like that, versus put out a call for ten volunteers, it seems like that works better. But if you say, you put out a call for ten volunteers, exactly ten did anything, then it seems like it doesn't-- it gets worse. NORVIN RICHARDS: I think I get that. Does everybody get that? So those are-- that's a nice example, thanks. So what you're pointing out is that it's helpful for the "exactly ten" to be explicitly a subset of another set that we've got hanging around or something like that. It's a partitive expression of some kind maybe, cool, cool-- yeah. AUDIENCE: So in that case, could we say that "exactly ten" only behaves in that way when "exactly" could be replaced with "only"? NORVIN RICHARDS: Ah, well, let's see. That's a nice way to think about it.
So the point about "exactly ten" in those kinds of contexts, isn't it, is that this was true of ten of them, but not of the others, something like that, yeah. And so maybe that is the way to think about it. "Only" is another interesting expression to think about here, maybe, but let's not, yeah. Cool, so especially if we take Janice's suggestion and seriously think of "exactly ten" as being OK with "anything," being the thing which stands out here in this last row, then maybe we want to think of these expressions as expressions that want to be in a context that's not upward entailing. That's a move people make. I can tell you that as you go through life, if you interact with linguists and they are not semanticists, they will often tell you that these are expressions, things like "lift a finger," and "a red cent," and "anything," and "ever," that these are expressions that need to be in a downward-entailing context, because it's almost true, and it's shorter than "a non-upward-entailing context." But it's arguably not quite true. This is maybe closer to what these things mean. OK, how about these expressions? So if I say "John saw anything," or "John contributed a red cent," or "John lifted a finger to help," these are all bad. And maybe that's not so surprising. John isn't a quantifier at all. And in particular, if we pretended that John was a quantifier, I guess, if we ask is "John" upward entailing, I guess there's a sense in which he is, if "John smokes cigars" entails that John smokes. But consider sentences like the first ones. "John didn't lift a finger to help." Or "John didn't contribute a red cent." Or "John didn't see anything." What we're seeing here is that it's possible to license those expressions not necessarily with a quantifier, but with this negation. The negation is another kind of thing that makes these OK. That's the reason that they are called negative polarity items.
It's because the first observation about them was that they were things that were OK in negative sentences but usually bad in positive sentences. Then people were like, wait, there are some quantifiers that can license them too. People were [INAUDIBLE]. Yes, great, so if John smokes cigars, that does entail that John smokes. But if John doesn't smoke cigars, that no longer entails that John doesn't smoke. If John doesn't smoke cigars, it's possible that he does smoke, just not cigars. So what negation does is create a non-upward-entailing context, which is what these expressions want. So these expressions are what are called negative polarity items. They need to be in a non-upward-entailing context. And this has just been another example of the kind of thing that people work on. If you are looking around for things to work on for your final paper, for your field work paper, this is the kind of thing you can go look for. Most languages have NPIs. So you could spend some time asking about how to say things like "He didn't do anything," and finding out what kinds of expressions you use. So that's the kind of thing you could do. Questions about any of this? Yes. AUDIENCE: So [INAUDIBLE]. About "less than ten," so "less than ten people [INAUDIBLE]." NORVIN RICHARDS: I'm sorry, say it again. AUDIENCE: So "less than ten people did anything [INAUDIBLE]." But what about "less than 10 billion people did anything?" NORVIN RICHARDS: "Less than ten billion people did anything." [SNEEZE] Bless you. Maybe-- so I don't know, what do the rest of you think about that? Is that a-- is that an odd sentence? Joseph. AUDIENCE: I think if we are able to imagine a world where there's trillions of people and you colonize the entire galaxy, and then some massive disaster happens, and one little planet of ten billion people signs up to help, then maybe that's [INAUDIBLE].
NORVIN RICHARDS: Yeah, so I wonder whether this has to do with the conditions under which it makes sense to use an expression like "less than ten billion people." So the way things are right now, "less than ten billion people" would be, well, all the people that there are. And so this is connected to other questions about pragmatics, which we have skipped and will continue to skip, about which quantifiers you will use under which kinds of circumstances. So there's work on the fact, for example, that if I say something like "Mary ate some of the cookies," that you are inclined to interpret that as meaning that she ate some of them, but not all of them. So if I say "Mary ate some of the cookies," and then you go over and discover that the plate of cookies is empty, you have a tendency to be discouraged. But if "some" just establishes a relation between the cookies and the things that Mary ate and says the set of cookies and the set of things that Mary ate have a nonempty intersection-- if that's all it means-- then if Mary ate all of the cookies, this should be true. And there are two things we could do about that. One would be to say, well, we need a different definition of "some" then, because that's not our reaction if it turns out she ate all of the cookies. But there's another move that people standardly make, which is to say, if I say "Mary ate some of the cookies," what you do is you listen to me say that, and you mentally bear in mind the fact that I said "some," and I didn't say "all." And you're thinking about other kinds of things I could have said. And if I had said "all," I would have been, in a sense, more informative. I would have been telling you that there are no cookies left. Yeah, so on that approach, we're OK. Our existing definition of "some" is OK. We get to just say, yeah, "some" just means there's a nonempty intersection.
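That last point, that the literal "some" stays true even when "all" holds, with the "not all" flavor coming from a Gricean inference rather than from the semantics, can be seen directly by coding up the set definitions. The cookie sets here are made up for illustration:

```python
def some(A, B):
    """Literal 'some': the intersection of A and B is nonempty."""
    return len(A & B) > 0

def every(A, B):
    """'Every A is B': A is a subset of B."""
    return A <= B

cookies = {"c1", "c2", "c3"}
things_mary_ate = {"c1", "c2", "c3"}  # Mary ate all of them

# On the literal semantics, "Mary ate some of the cookies" is true
# even though "Mary ate all of the cookies" is also true. The
# "some but not all" feeling is an inference about what the speaker
# could have said, not part of the meaning of "some."
assert some(cookies, things_mary_ate)
assert every(cookies, things_mary_ate)
```

So the nonempty-intersection definition survives the empty cookie plate; what the listener adds is the reasoning about why the speaker chose "some" over the stronger "all."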
And then there's this tendency that you have to listen to a quantifier and think about other quantifiers that you could have said. And I think there's something similar going on-- I'm sorry, this is taking a long time-- I think there's something similar going on with your example. But if you say "less than ten billion people," I'm invited to wonder why you are describing all existing people that way. Why aren't you just saying "everyone"? And I think there's something similar, some way of relating that fact to this fact, I think-- yeah. AUDIENCE: So you say there is a principle that we assume that when we have conversations that the other person [INAUDIBLE]. NORVIN RICHARDS: Yes, yes, and then there's a really interesting question about what I mean when I say informative. We have to try to figure that out. That idea is an idea by a guy named Grice. His idea was that when you hear people speak, you make assumptions about the conversational moves that they will make. You assume that they will be honest and that they will tell you everything. They will be maximally informative, and so on. And some of the interesting kinds of inferences that we make about things that people say come from cases where we're following these assumptions about what people will be doing-- [COUGHS] excuse me. And one of the other things that he's interested in showing is that we will do things which obviously violate Grice's-- they're called maxims, Grice's maxims. And then you get to draw conclusions when I do those kinds of things. So there's a classic example. If somebody asks me, "How did the students do on the test?" and I say, "Well, Mary passed." [LAUGHTER] That tells you that Mary passed. And the fact that I didn't completely answer your question leads you to make conclusions about why. You get to ask yourself, so why didn't he tell me about any of the other students? Oh dear. And the answer-- there could be various answers. Maybe the answer is everyone else failed.
Maybe the answer is I happen to believe for some reason that you're entitled to know how Mary did, but not about anybody else. Maybe you're Mary's dad or something. There are various ways that-- conclusions you could draw, depending on the circumstances. But that's another kind of example of Grice's maxims in action. This is a domain of linguistics, which I'm not planning to talk about at all, and yet here I am. It's called pragmatics. And I'm not planning to talk about it because, I think I've said this before, when you're teaching intro classes in anything-- this is probably true of your other professors and your other intro classes-- the professor presumably has a specialty, something that they mainly work on. And for the rest of the class they're doing their best to make everything look reasonably plausible. And I think I told you, the part where I have a specialty, that was syntax. So everything else-- I haven't been making it up. I've been doing my best to show you things that were really true. But I'm not a semanticist. So I'm a syntactician. I'm not a semanticist. And I am really, really not a pragmaticist. And there are people who work on that, and I'm just not one of them. So I've already told you more about pragmatics than I know. [LAUGHTER] Well, I won't try to tell you anything else. Your TAs, for example, might be able to undo some of the damage tomorrow. AUDIENCE: So by communicating [INAUDIBLE]-- NORVIN RICHARDS: You're going to ask me about pragmatics, aren't you? [LAUGHS] Go ahead. It's all right. AUDIENCE: I'm trying to understand what you said about Grice. So if you say, "I go to school," and when someone asks where you go to school, you say "North of Boston," which is true. Then if I went to school in, like, Greenland, it's north of Boston. But if that person found out that I do actually go to school in Greenland, they'd think that I was being dishonest. NORVIN RICHARDS: Yeah, they would feel that-- yes, exactly.
Yeah, that's a good example, particularly if-- there was some reason to think that when you said "North of Boston," you were leaving open the possibility that you were going to school in various fine educational institutions north of Boston. I probably shouldn't try to name any in particular. And then if it turns out that, in fact, you were in someplace in Greenland that's not all that famous, they might think, ah, I've been tricked. Yes, right. You're absolutely right. They get to ask why you did that. Why were you vague when you could have been specific? Yeah, anything else I can make up about pragmatics? Yes, Raquel? AUDIENCE: I was thinking about the-- and I don't know if this is iconic or not. But there's this one scene in Pink Panther where he says, "Does your dog bite?" And then the guy's like, "Nope." And the dog bites. And he's like, "I thought you said your dog didn't bite." And the guy's like, "That's not my dog." [LAUGHTER] NORVIN RICHARDS: "That is not my dog." It's a classic. Yeah, so a lot of comedy is built on failing to obey Gricean maxims. Yeah, that's exactly right. Yeah, so in this case, the maxim of relevance-- so when he asks, "Does your dog bite," the guy is supposed to think, "Why is he asking me this? Oh, he must be assuming that this is my dog. That must be what he wants to know about." So this is a kind of navigation of people's expectations in conversation, that people always have to do. That's right-- unless you're Inspector Clouseau. Yeah, OK, good. Let's end a little bit early today then. And have a good weekend. And I will see you next week.
--- Lecture_11_Syntax_Part_1.txt (MIT_24900_Introduction_to_Linguistics_Spring_2022) ---

[SQUEAKING] [RUSTLING] [CLICKING] NORVIN RICHARDS: So morphology. Remember morphology? When we talked about morphology, we were talking about the fact that a word like "unlockable" is ambiguous. It can mean either it's possible to unlock it, or it is not possible to lock it. So it can be a desirable property of a lock or not. We talked, I think, about the fact that at least I want to pronounce it a little differently depending on which of those things I say. So "The door is unlockable" means it can be unlocked. But "The door is broken. It's unlockable" means it cannot be locked. I have this desire to put an extra oomph on "un-"-- an extra little demi-stress beat on "un-" if I mean the thing on the right. And what we said, when we were talking about this, was we can account for this kind of ambiguity in the following way. We'll say some things which are clearly true. First, "-able" combines with verbs to make adjectives. There are a bunch of verbs that you can make into adjectives this way. So you can take a verb like "sing," or a verb like "understand" and get adjectives like "singable" or "understandable." And to say that something is singable is to say that it's possible to sing it, right? That's what that means. "Understandable" means it's possible to understand it. So there's an "-able" suffix that changes verbs into adjectives. And then, we said, there are two "un-"s. There's an "un-" that combines with verbs and makes verbs. That's an "un-" that means something like undo the effects of, or change something so that it is no longer in the state that it would have been if the verb had applied to it, or something like that. So "untie" means "take something and do things to it such that it is no longer in the state that it is standardly in after you have tied it." Or to put that a little more briefly, "take something that was tied and make it so it is not tied."
That's what "untie" means. Similarly for "undo" and a bunch of other "un-"s. Yep. So there's an "un-" that applies to verbs, and has-- it's sometimes called a reversative meaning. You change it so that it's back from the state it would have been in. And then there's another "un-," let us call it "un-" number two. "Un-" number two combines with adjectives and makes adjectives that mean more or less "not (adjective)." So "unkind," or "unfamiliar," "unfortunate." These all mean not the adjective, whatever it is. And so what we said was-- oh, hey. Raquel? AUDIENCE: So I have a horrible thought, and it's random. NORVIN RICHARDS: Oh, no. AUDIENCE: But when you were saying, put something so that it's no longer tied, I would argue that not only are we saying make it so it's no longer tied, but leave it in a similar state that it was at the beginning chronologically, because you can chop a shoelace in lots of little pieces, and it's no longer tied. [INAUDIBLE] NORVIN RICHARDS: Oh, well that's a nice point. Yeah. So if I take-- let's see. If I untie a shoelace, first of all, it has to start off tied, is that right? So if I have a shoelace which is not tied, I can't untie it. Yeah. But if it's tied and then I take scissors and I cut it into many small pieces, have I untied it? No, surely not. When Alexander cut the Gordian knot, he wasn't untying the knot. He was being more direct than that. Good point. So you have to put it-- so what you just said, I like the way you just said it. You have to put it back in the state that it was in before it was tied. Is that the way to say it? AUDIENCE: Going back and tie it, [? essentially. ?] NORVIN RICHARDS: Yeah. Yeah. So yeah, that's a nice point. Undo. I mean, because what we said-- we talked about this in class, that there are lots of things. That it's hard to do this. This "un-" is kind of picky about what it can combine with.
So the "un-" that attaches to adjectives can attach to lots of adjectives, but you can't-- if I take some shoes, I can't unwash shoes, or unwash socks. That doesn't mean "take the socks and make them dirty again." You could imagine that it would, but that's not what it means. And maybe that's related to your observation. Yeah, it's interesting to think about. Joseph, did you have a-- AUDIENCE: Yeah, I was going to-- based on what Raquel said, does the final-- after you "un-" something, is that going to be able to be redone? So if I untie a shoelace by cutting it up, now-- NORVIN RICHARDS: Yeah. It can't be tied again. Well, let's see. If you undo an operation on a computer, does it have to be possible to do the operation again? I don't know, maybe. Yeah, there is a redo, isn't there? Yeah? Yeah, I wonder. Maybe that is what it means. Maybe it means-- yeah. And as you say, this is related to Raquel's point, you have to put it in a state such that the verb could apply to it again, maybe. Maybe. Yeah, so there's a lot. What I actually did on the slide was just to say, "un-" number one combines with verbs to make verbs. And then my mental notes to myself say, vamp something about what it means. You guys are doing some sophisticated thinking about what it means. It's a little complicated, figuring out what it means, as you can see. Nice point. Still, what I've said on the slide, apart from the vamping, I think is true. There's an "-able" that changes verbs into adjectives, and there are "un-"s, one that combines with verbs, and another that combines with adjectives. Does that sound right so far? And what all of this means is the ambiguity of "unlockable," we get to attribute it to basically the fact that there are two "un-"s. So you could have attached the "un-" before you attached the "-able," or you could have attached the "un-" after you attached the "-able." 
Because the "-able" is going to change a verb into an adjective, and "un-" can combine with either a verb or an adjective. So that was the way we were talking. So the idea was you can start by attaching "un-" on number one, so "lock," giving you this new verb, "unlock," which is related to the meaning of "lock" in mysterious and complicated ways that we've now been talking about. And then you can attach "-able" to that and give you an adjective, "unlockable." And I drew trees like this before and said, yeah, these trees are kind of a representation of the order in which you did things. That's all they're for, is to say you started by putting together those two things at the bottom of the tree, the verb and that prefix, and you created a verb. That's what that prefix does, it takes verbs and returns verbs. And then that verb gets to attach the second thing you do. That verb attaches to the suffix, and now the suffix changes that verb into an adjective. Or you can do things in the other order. You can attach "-able" to "lock," giving you an adjective, and you can take "un-" number two and attach "un-" number two to that adjective, giving you a new adjective meaning "not lockable." And so "unlockable" is ambiguous. And the ambiguity, we said, comes from the fact that, well, there are two "un-"s, which is something we can observe. There's an "un-" that goes on verbs and an "un-" that goes on adjectives, and there's a "-able" that changes verbs to adjectives, right? Yes. And that means that there's an "un-" that can go before the "-able," and there's an "un-" that can go after the "-able," and so we get this ambiguity. And the ambiguity is what we would expect it to be. Yeah? This is all review. Having said all this, I guess what we expect is that you could have two "un-"s, I didn't put this on the slide anywhere, that it would be possible to say it is "un-unlockable," which I think is true, actually. "This door is broken. It is un-unlockable." 
That would mean it cannot be unlocked. I think that's true. Go ask some people whose minds have not been contaminated by linguistics. Go harass your roommates or whoever. You'll make yourself popular that way. And if you are going to try to learn from me how to make yourself popular, then boy, are you in the wrong class. OK, is this all clear? So the important part of the story is to say, yeah, "unlockable" is a word. It's got three morphemes in it, a prefix, and a root, and a suffix. But it isn't just three morphemes in a row. Those morphemes were assembled in an order. You assembled them pairwise. You first put two of them together, and then you added another one to the result of that first putting together, and that order has consequences for interpretation. So it's not false to say that unlockable consists of three morphemes, a prefix, a stem, and a suffix. But it's not a complete description of what's happened either. It is that, but it is also those trees, or those trees represent something, namely the order in which you did things. Does that make sense? What we're going to do now is start talking about syntax, which is the study of how words are assembled to make sentences, words, sometimes things smaller than words, as we'll see. And we're going to see that it's useful to think of sentences as being put together in a bunch of operations more or less the way "unlockable" is-- that we take pairs of words and put them together to form larger objects the way I just did for "unlockable." Just to give you an example of the kind of thing we're going to talk about, I think I talked with you about this on the first day. Just as with "unlockable," it is true, but it is not a complete description to say that it is three morphemes, a prefix, and a stem, and a suffix. That's true, but it's not a complete description. A complete description involves those trees or some equivalent explanation of the order in which you did things.
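The two orders of assembly for "unlockable" can be sketched as a small program. This is only an illustration under labels of my own (un-1, un-2, -able, each with an input and an output category); each affix records the category it attaches to and the category it returns, and combining checks that the categories fit, just as the trees do.

```python
# A sketch of the two derivations of "unlockable." The category labels
# and affix names are illustrative: each affix records the category it
# attaches to and the category of the result.

AFFIXES = {
    "un-1":  ("V", "V"),   # reversative un-: attaches to a verb, returns a verb
    "un-2":  ("A", "A"),   # negative un-: attaches to an adjective, returns an adjective
    "-able": ("V", "A"),   # attaches to a verb, returns an adjective
}

def attach(affix, stem):
    """Combine an affix with a (category, tree) pair, checking categories."""
    needs, returns = AFFIXES[affix]
    category, tree = stem
    if category != needs:
        raise ValueError(f"{affix} cannot attach to a {category}")
    return (returns, (affix, tree))

lock = ("V", "lock")

# Reading 1: [[un-1 lock] -able], "able to be unlocked"
reading1 = attach("-able", attach("un-1", lock))

# Reading 2: [un-2 [lock -able]], "not able to be locked"
reading2 = attach("un-2", attach("-able", lock))

print(reading1)  # ('A', ('-able', ('un-1', 'lock')))
print(reading2)  # ('A', ('un-2', ('-able', 'lock')))
```

Trying the ill-formed order, attach("un-2", lock), raises an error, since the adjective-taking "un-" cannot attach to a bare verb: the ambiguity comes out as two different pairwise assembly orders, not as two different strings.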
Similarly, these two sentences, "John walked up the stairs," and "Mary looked up the reference," it's true, but it is not a complete description to say that those are two sentences that consist of a noun, a verb, a preposition, a determiner, and a noun. Did I say anything that alarmed anybody just now? So those are two sentences that consist of five words. That's true. And we can say things about what kinds of words they are. There are nouns up there, and verbs, and prepositions like "up," and I just called it a determiner, "the." People sometimes call it the definite article. You'll hear me call it the determiner a lot in this class. It's true to say that those two sentences consist of those five words. But just as with the two versions of "unlockable," we can convince ourselves that it's not a complete description, that it's helpful to think about the order in which you assembled these things. So what I told you last time was, effectively what we're going to want to say is, there was an operation that created the substring "up the stairs" in that first sentence. That's what we call a constituent. And there is no similar operation creating a substring "up the reference" in the second one. And again, this is review, but it's review from the first day. What I convinced you, I think, I hope, I tried, was that there are various syntactic phenomena, various things you get to do with sentences that treat "up the stairs" as a single object that syntax gets to manipulate in various ways. We're going to be talking about that, about what that means exactly. And the same syntactic operations don't get to treat "up the reference" as a single thing. So we said it's possible to ask questions like, "Up which stairs did John walk?" It's a fairly stuffy question. It's a strange way to ask the question, but you can say it. As opposed to, "Up which reference did Mary look?" which is gibberish. So we're going to draw a distinction.
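The constituency contrast just described can be sketched with toy tree structures. The bracketings below are assumptions for illustration (a prepositional-phrase unit "up the stairs" in the first sentence, a verb-particle unit "looked up" in the second); the point is just that a constituency test asks whether a string of words forms a complete subtree.

```python
# A sketch with hypothetical bracketings: in the first sentence
# "up the stairs" is a unit; in the second, "looked up" is.

walk = ("John", ("walked", ("up", ("the", "stairs"))))
look = ("Mary", (("looked", "up"), ("the", "reference")))

def leaves(tree):
    """The words of a tree, left to right."""
    if isinstance(tree, str):
        return [tree]
    return [word for subtree in tree for word in leaves(subtree)]

def is_constituent(tree, words):
    """True iff some subtree's leaves are exactly `words`."""
    if leaves(tree) == list(words):
        return True
    if isinstance(tree, str):
        return False
    return any(is_constituent(subtree, words) for subtree in tree)

print(is_constituent(walk, ["up", "the", "stairs"]))     # True
print(is_constituent(look, ["up", "the", "reference"]))  # False
```

On these assumed structures, "up the stairs" passes and "up the reference" fails, which is the pattern the question test and the echo test are tracking.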
This is maybe the first time that I've shown you a case where syntacticians have to care passionately about the difference between one sentence, which is complete gibberish, and another sentence, which is not great. There's a fair amount of great syntax that's built on those kinds of distinctions. Or similarly, if I say, "John walked up the stairs," and you're surprised for some reason, you can say, "Up the stairs?" That's not a weird thing for you to say. Whereas if I say, "Mary looked up the reference," and you're surprised, no matter how surprised you are, you're not going to say, "Up the reference?" That's a weird response. So the point is, just as with "unlockable," yeah, it's three morphemes, prefix, stem, suffix. But having said that, we haven't said everything. We have to know which parts of "unlockable" are single parts. Is it "un-" attached to "lockable," or is it "unlock" with "-able" attached to it? Those are different adjectives with different meanings. Similarly here, it's not enough to say, yeah, we've got these five words of these types. We've got to know which of these things go together, which things are parts, single parts. And there is a part, "up the stairs," what we call a constituent, that various kinds of syntactic phenomena care about, like the syntactic phenomenon "can I repeat this if I'm astonished?" Yeah, that's a test-- kind of test for this property of constituenthood. So there's more to a sentence than its parts. We've got to know, in what order did you put those parts together, just like with "unlockable." Yeah? Makes sense? So we're going to do syntax. We want a theory that's going to divide sentences into three kinds. There are, on the one hand, sentences that you've heard a zillion times before, like "We're going to class." And on the other hand, sentences that you have possibly never heard anyone say, but that are fine. So if I say, "My anteater is hula dancing," you may never in your life have heard anyone say that. Maybe you have.
Some of you may have had more exciting lives than I have. But it's an OK sentence. As opposed to, "We're class going to," which I've given a star there. Recall that the star is what syntacticians give to things that are bad. So syntacticians are the opposite of normal people. When we see things we don't like, we give them a gold star. Usually, it's not gold. It's black. I guess that makes a little more sense. This three-way distinction is worth highlighting, because if I had only given you the first sentence and the second sentence, "We're going to class," and "We're class going to," the first sentence is good, and the second-- and the last sentence is bad, some of you might be thinking, well, that first sentence, it's a sentence I've heard lots of times before. Maybe when I was a small child, I heard my parents say that. And I heard them say it and I remember it now. And maybe that's all. Maybe that's what syntax is, it's the ability to remember things you've heard people say. But the existence of the second class of sentences shows that that's hopeless. So you're not just remembering things you've heard people say when you're deciding which things deserve the black star and which things don't. You're not just categorizing sentences into sentences you've heard before and sentences you haven't. You've got this intuition about which sentences are acceptable. And it's not just about which sentences are in your input, which sentences you've heard people say. It's something else. We're going to try to figure out what that is. But I'm giving you these three sentences to slay a hypothesis that you might be entertaining. Stop entertaining that hypothesis. Make it go home. It's not a good hypothesis. It won't do you any good. Is that clear? Are people clear on the hypothesis that I'm attempting to slay? Yeah? 
AUDIENCE: Maybe to [INAUDIBLE] a modified version, what if it's not particular sentences that we're remembering, but structures of sentences that we've built up over time? NORVIN RICHARDS: Yeah. So if we pursued that-- that's much more sophisticated than the hypothesis I was trying to slay. So this is day one of syntax. So no, you're raising a good point. What if all you're doing is remembering chunks of sentences? So maybe you've never heard anybody say, "My anteater is hula dancing," but maybe you've heard people say "anteater." Maybe you've heard people say "My anteater," maybe not. But at least you've heard people say "My (noun)," and you've heard people say "anteater," and "is hula dancing." Well, yeah. Maybe you've heard somebody say something like that. So maybe there are some parts of this that you could be acquiring that way. We're going to have to be-- we're going to have to pursue that hypothesis long enough to find out exactly what it says, right? Because what I just said, which wasn't really the hypothesis, it was an attempt to represent it, we're going to have to figure out how to rule out "We're class going to," because you probably have heard people say "we're," and "class," and you've heard people say "going to." Someone asks, "Where are you going to?" We've heard people say that. And so we're going to have to be careful about that hypothesis to try to figure out how we could use it to rule out-- to draw the distinctions that are on the board. But you're right. So just to summarize this conversation you and I have just had, I said, there's a fairly stupid hypothesis, which says-- and I'm raising it partly because it's been seriously entertained in the literature before, which says, all you're doing is remembering things people have said, and that's what distinguishes grammatical sentences from ungrammatical sentences. That's false.
So we have this distinction between sentences that are grammatical and sentences that are ungrammatical that isn't just a list of all the sentences you've ever heard before. You can take a sentence you've never heard before and accept it. You're raising the point there could be a better version of the stupid hypothesis, one that said, well, maybe there are some parts of the sentence that you've heard before, which is true. We're going to have to be explicit about which subparts count and what exactly we mean when we say that. But you're right, there could be a better version of that hypothesis. Good point. Other questions? Did I successfully answer your question? Yeah. OK, all right. So let's talk about what's wrong with "We're class going to." There are several hypotheses about what's wrong with it. One could be that it doesn't mean anything. It's a thing people sometimes say. When I'm trying to explain to people-- when I'm on airplane flights and people ask me, what do you do for a living? How I answer depends on whether I feel like talking to the person or not. So if I feel like talking to them, then I will tell them that I work on endangered languages, which is something I do. I work with languages that are down to their last few speakers. And then sometimes they're interested in that and they talk to me. If I would like to get them to leave me alone so that I can read a book or whatever, I tell them I'm a theoretical syntactician. That usually ends the conversation fairly quickly. But when it doesn't, when they say, "Oh, what's that mean? What do you work on?" Then I will say, well, I'm trying to figure out why some sentences are grammatical and others aren't. And I'll give examples. And sometimes they will say, "Oh, but what does that mean?" "We're class going to." Maybe that's what's wrong with it. It doesn't mean anything.
But the problem is we're capable of distinguishing grammaticality, even in sentences that don't mean anything. So this is a famous example of Noam Chomsky's. In fact, I think if you look him up in like Bartlett's Familiar Quotations, you'll find this first sentence, "Colorless green ideas sleep furiously." He actually offered that sentence as a part of a pair of sentences. He wanted people to contrast that sentence with the same sentence backwards: "Furiously sleep ideas green colorless." So consider those two sentences. And the point is neither of them means anything. So the first one doesn't mean anything. And then if you turn your attention to the second one, it also doesn't mean anything. But they have a different status. So the second one doesn't mean anything and it's ungrammatical, whereas the first one, it doesn't mean anything, but you feel as though-- and this is what I'm slowing down right about here where I say it doesn't mean anything. Because the reaction people sometimes have right about here-- does anyone want-- here, sometimes people say, "Oh, but look. Suppose 'colorless' meant 'boring,' and suppose 'green' meant 'environmentalist,' and suppose 'sleep' and 'furiously' meant different-- suppose these words meant something other than what they mean. Then the sentence would mean something!"-- which is true. But it's another way of saying the same point. Yeah, the first sentence, "Colorless green ideas sleep furiously," is meaningless if you don't mess with the meanings of the sentences. But it obeys the rules for how words can be combined. If the words meant something else, the sentence would be fine. We can have English sentences that consist of two adjectives modifying a noun. And then there's a verb, and then there's an adverb. Not that one, but "Big green monsters snore loudly," that would be fine. 
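The contrast between Chomsky's sentence and its reversal can be sketched with a toy pattern-matcher that knows only about word categories and their order, not meaning. The category dictionary and the single Adjective-Adjective-Noun-Verb-Adverb pattern below are a deliberately tiny illustration of my own, nothing like a real grammar of English.

```python
# A toy check that only looks at categories and word order. The
# dictionary and the single allowed pattern are illustrative.

CATEGORY = {
    "colorless": "Adj", "green": "Adj", "big": "Adj",
    "ideas": "N", "monsters": "N",
    "sleep": "V", "snore": "V",
    "furiously": "Adv", "loudly": "Adv",
}

def grammatical(sentence):
    """Accepts exactly the pattern Adj Adj N V Adv."""
    categories = [CATEGORY[word] for word in sentence.lower().split()]
    return categories == ["Adj", "Adj", "N", "V", "Adv"]

print(grammatical("Colorless green ideas sleep furiously"))  # True
print(grammatical("Furiously sleep ideas green colorless"))  # False
print(grammatical("Big green monsters snore loudly"))        # True
```

Nothing in the checker knows what any of these words mean; it accepts the meaningless-but-grammatical sentence and rejects its reversal, which is exactly the separation of grammaticality from meaning being argued for here.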
So when the person on the airplane flight next to me says, "Oh, what's wrong with 'We're class going to' is that it's a meaningless sentence," if I'm really, really desperately trying to end the conversation, I bring out these kinds of pairs. So it's not about meaning. We have this intuition that there are sentences that are OK and sentences that are bad, which is separable from our intuition about what means something and what doesn't. I've just been asserting things about our feelings about these sentences. Do people have this feeling about these sentences? First, that they're meaningless, and second, that the first one is OK and the second one is bad? Yeah, Raquel? AUDIENCE: I can't remember the word for this, but you were saying that certain types of words are categories that you can add more words to, even if you don't know what they like modifiers and things like that. And like, "This house is very [blope?]" That's grammatical, even if you don't know what it means, or you can't make that-- NORVIN RICHARDS: Oh, yes. Yes. We were talking about open class and closed class morphemes. So Jabberwocky is a poem that you can write, changing all the lexical items to nonsense words, but you couldn't do that with functional items. Yeah, that's right. AUDIENCE: So maybe in situations where something doesn't really make any sense, your brain still knows that OK, maybe it almost like that kind of class where you're like, well, I mean, you can fit like an adjective like this-- NORVIN RICHARDS: Yep. AUDIENCE: --even if it means something ridiculous right here, and it sounds OK even if it's [INAUDIBLE].. NORVIN RICHARDS: Yeah. That's a nice way to put it. I guess this is similar to what I was trying to say about a reaction I sometimes get to this sentence, which is, you say "Colorless green ideas sleep furiously" doesn't mean anything. And people will sometimes say, "Well, but if these words meant something else, then it would be OK," which is true. Yeah? 
AUDIENCE: I wonder if it just has to do with the idea [INAUDIBLE] NORVIN RICHARDS: Yeah. I think I may not have heard all of that, but the idea was the first sentence, it's clear what "colorless" and "green" are trying to do. They're both trying to modify ideas. Is that right? And it's clear what "furiously" is trying to do. It's an adverb and it's trying to modify "sleep." When I say it's meaningless, what I mean is you can't sleep furiously, right? And things can't be both colorless and green. And if they could, well, ideas couldn't have those things. They're abstract, right? That's the sense in which this is a meaningless sentence. I think you're raising the point, which is a good point, yeah, it's meaningless, but the words in it are fitting together the way they should. The adjectives are going where adjectives go. They go before the noun, that's what they're supposed to do. And in the second sentence, they're not doing that. Is that the relation to the point that you're making? Yeah. Yeah? So that's the intuition that we have, that it's possible to have feelings about sentences of the form, well, I don't know what this is supposed to mean, but the parts are all in the right place. We are in the syntax part of this class. It's all about parts being in the right places. We will eventually do the semantics part, which is about meaning. But the point is that it's possible to study these things independently of each other. So completely independently. So when I was showing you "unlockable," what we were really doing was morphology, but we were also talking about meaning. We were interested in the fact that word had two different meanings. We talked about the fact that it meant different things. So we get to use semantics as kind of a probe into what's happening in morphology. We'll do things like that in syntax. But our feelings about whether sentences are grammatical or not, or acceptable or not, are separable from our feelings about what they mean. 
That's the point, if anything. The reverse, there are sentences that are ungrammatical, but-- that are meaningless, but grammatical, like "Colorless green ideas sleep furiously." On the flip side, there are sentences that are meaningful, but ungrammatical, the sentences where it's very clear what they should mean, but you just can't say it that way. Here's a quadruple of examples that's meant to show you that. So you can say, "I put the sweater on," you can say, "I put on the sweater," you can say, "I put it on," you cannot say, "I put on it." The meaning is not the problem here. It's clear what "I put on it" would mean. But there are facts about how English pronouns work and how English particles work that mean that you don't get to say that. It's not a meaning thing. It's something about syntax. It's something about how these parts get to combine. You want to try to understand that. The only point of these few slides has been it's possible to study syntax independently of meaning, where by independently, I just mean the facts of syntax don't just reduce to facts about meaning. They also don't just reduce to facts about what you've heard before. That's what I've been trying to show you. Here's another thing you might think about what's wrong with "We're class going to." It ends in a preposition. Were any of you taught in school that you must not end sentences with prepositions? Some of you were-- AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Yeah. Some of you were not. That's kind of interesting. I was. I was beaten by English teachers for ending a sentence-- that's not true. I was not beaten. I was spoken to harshly by English teachers for ending sentences with prepositions. Do you know why we were told not to end sentences with prepositions? AUDIENCE: It's reductive? Because I guess-- because "where are we going to" could just be a [INAUDIBLE].
NORVIN RICHARDS: So there are some cases like that, but look, there are also cases like, "Who are you talking to" where-- sorry, where-- who am I apologizing to? The chalk, I guess. "Who are you talking to" where, without the "to," it doesn't have to be about redundancy, I don't think. Is this a sentence that you've ever heard anyone say? Is this OK? Yeah, this is, I think, pretty good English. So when our English teachers were telling us not to end sentences with prepositions, we should have asked them, what are you talking about? Because the fact is that English speakers end sentences with prepositions every day. Do you know why your teacher told you not to do that? Yes? AUDIENCE: Maybe it's because the object isn't clear? NORVIN RICHARDS: Well-- but is it? I mean, it's kind of-- if I ask you, who are you talking to? There's a sense in which the object isn't clear. I'm asking you who the object is, right? So it shouldn't be any harder than, who are you-- AUDIENCE: Who are you talking-- NORVIN RICHARDS: --describing, yeah. Yeah? AUDIENCE: Is it because the "who" is supposed to be the direct object of "you are talking to" this person, so it's like, "to whom are you talking"? NORVIN RICHARDS: So yeah. What your English teacher wanted you to say was, "To whom are you talking?" which many English speakers are capable of saying, possibly because their English teachers frightened them into it. But it's not my go-to way to say this. I don't know about you guys. So here, "who," the question word, is up here at the beginning by itself. Here, "to whom" is part of this phrase that's at the beginning. But question, why does "to" have to come along, according to your English teacher? Yes? AUDIENCE: Is this another example because that's how it's done in Latin? NORVIN RICHARDS: Yes. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Yes. Yes. Your English teacher told you to do that because Latin, actually, among many other languages, doesn't allow you to do this. You have to do this.
English is quite rare in being able to do this. Most of the languages of the world can't. And some time in the 15th, 16th century, a number of grammarians decided that English would be way cooler if it were more like Latin, and so they began declaring that it was. And that's why your English teacher told you that you can't say this, which is ridiculous. We should be proud of the fact that we can say this, because, as I say, it's rare. Most languages can't do this. We should have this on our flag or something like that. There should be a stranded preposition, a preposition at the end of the sentence. So yeah, no reason English has to be like Latin. Or like I say, a zillion other languages. Most of the languages of Europe, French, or German, or Italian, or whatever, you can't leave prepositions at the ends of sentences. But those teachers don't have to tell their children not to say things like, who are you talking to, because they literally can't, and none of those kids would ever do that. But in English, we can. We should be proud of that. So that's not the problem with "We're class going to." It does end in a preposition, but many perfectly fine English sentences end in prepositions. There's a distinction-- yeah, there's the example "Who are you talking to?" There's a distinction that's worth drawing between what's called prescriptive and descriptive studies of grammar. So what we are doing in this class is trying to figure out what people actually say, what the rules are for putting sentences together in English. There are other kinds of things that people say about how English should be spoken. We're not going to talk about that stuff, except to mock it the way I did just now. 
So prescriptive grammar is the study of rules that your teachers might have taught you in school about how to speak, some of which, just to stop mocking it for a second, might genuinely improve the quality of your writing, like getting rid of ambiguities of various kinds. So if you're now mentally composing nasty messages to your English teachers, don't send them. They had your best interests at heart, and they probably taught you a lot of things that were valuable. But they also taught you some things that became popular around the 15th, 16th century because people thought that English would be better if it were more like Latin. And so we're not going to try to improve your writing in this class, except insofar as the writing advisors can do that. We're not going to be talking about prescriptive studies of English grammar. We're not going to talk about how you should speak, or how you should write. We're just going to talk about how you do. So this is going to be a study of descriptive grammar and not prescriptive. Yeah? AUDIENCE: So for that second sentence on the board, "What are you talking about?" NORVIN RICHARDS: Oh, this one? Yeah? AUDIENCE: How would you answer that such that something [? throws ?] the word [? around. ?] Because "About what are you talking?" is-- NORVIN RICHARDS: I can't do that in English, "About what are you talking?" AUDIENCE: That sounds wrong, actually. Whereas "To whom are you talking?" sounds fancy and snobbish, but still right. NORVIN RICHARDS: Yeah, yeah, yeah. No, you're raising an interesting distinction. So in many languages, most languages, including Latin, you have to say, "About what are you talking?" and you have to say, "To whom are you talking?" So these are languages in which you never leave a preposition at the end of the sentence. You always bring it along with the question word to the beginning of the sentence. But you're raising a really interesting point. 
In English, there's this distinction between the examples where leaving a preposition behind is what you prefer, "Who are you talking to?" But you can kind of say, "To whom are you talking?" And others, where "About what are you talking?" Do other people have this intuition, that "About what are you talking?" is worse than "To whom are you talking?" I have that feeling, too, I think. There are other examples. There are examples that are really quite bad. And other examples which get better. So things like "We left despite her warnings." And then consider two kinds of questions you could ask about that. "What did you leave despite?" And "Despite what did you leave?" Is either of those acceptable at all? "What did you leave despite?" and "Despite what did you leave?" Who prefers "What did you leave despite?" Who prefers "Despite what did you leave?" Who would do anything to avoid saying either of these things? Yes. Yeah, yeah, yeah. So here's another example of one sentence that's quite bad and another sentence that's only bad-ish. You might want to try to understand what's going on. So there's a fruitful area of research here. OK, English doesn't-- English is happy to leave prepositions at the ends of sentences. But in which cases is it happy to do the Latin thing? And in some cases, it's happier than others. It's interesting to try to study. Another distinction to make. So we've drawn this distinction now between meaningless on the one hand and ungrammatical on the other. A sentence can be both meaningless and ungrammatical, but it can also be meaningless and grammatical, like "Colorless green ideas sleep furiously." We've drawn that distinction. We've drawn a distinction between prescriptive and descriptive statements. So prescriptive statements are things like "Don't end sentences with prepositions," which are-- how shall I say this?-- false. Not a good description of what English speakers actually do. They are aspirations. We're not going to do that. 
We're just going to study what English speakers do. So here's another useful distinction. It's sometimes called competence versus performance. Imagine that I am standing up here talking to you. So far, this should be easy to imagine. Except I'm not standing. Imagine that I'm standing up here talking to you and I say, "This is the--" and then I inhale a fly. So imagine that-- oh, I don't have to wear a mask. I keep forgetting that. Imagine that I take off my mask and I inhale a fly. So I say, "This is the--" and then I stop. I'm like [IMITATES COUGHING] and I stop right there. And then imagine that this experience is so traumatizing for me and also for the fly, that I just never complete that sentence. So I say, "This is the--" and then I stop. There are two attitudes that we could have to me having uttered that sentence. That's the sentence I uttered. "This is the cough, ugh, puh." That's something I said. And I'm a native speaker of English. So there are two kinds of things we could say. One would be to say, we're developing a theory of all of the kinds of sentences that native English speakers can say. And that was one. "This is the cough hack splutter." And so we want a theory that allows that to be a sentence of English. That's one kind of thing we could say. This is one of those moments that happens a lot in classes where the professor says, "Here's one thing we could say," and then describes something completely ridiculous. You could do that. But here's another thing we could say. We could say, no, look, if we're developing a theory of all of the things that native English speakers can say, native speakers of any language at all, but we're going to start with English, what we want is not a theory that covers "This is the cough, hack, splutter." That's not going to be one of the sentences we're going to try to get. We're going to try to get sentences like, "This is the answer" or whatever. That's going to be a sentence, whatever it is I meant to say. 
And then there are going to be other things. Flies, sudden heart attacks, and then other kinds of things where it's maybe less clear what to say about them. Flies, sure. Sudden heart attacks, yeah. What about if I forget what I was going to say in the middle of a sentence? I know that's hard to imagine, but imagine that I did something like that. I'm in the middle of a sentence, and I'm talking, and then I just forget where I was going, and-- where was I going with that? Who knows. That could happen. So what we're going to have-- so this is a different approach, and it's the one that you might imagine I'm recommending. What we're going to have is the idea that we're going to develop a theory of what English speakers say, but we're going to imagine the kind of English speaker who never inhales flies, and never forgets what they were going to say, never has a fatal heart attack, and only speaks in completely grammatical sentences. There might not be any speakers like that. If you have ever looked at a transcript of somebody talking, there aren't any sentences in there, unless the person is reading a text, or unless the person is Noam Chomsky, I have to say. It was quite weird reading transcripts of Noam Chomsky talking, or listening to Noam Chomsky talk, because, in fact, he talks in complete sentences, paragraphs. It's kind of astonishing. Normally, if you look at the transcript of somebody talking-- speaking of Noam Chomsky, I've often heard him say this. Journalists know that the best way to make someone look like a complete idiot is to quote them accurately. Just to write down exactly what they said, because they'll say "um" and "uh" and they'll stop, and they'll pause, and they'll change what they said, and they'll inhale flies. Things will happen such that their sentences are not fully grammatical sentences. What journalists, in fact, do is to clean up all that stuff so that people sound like they were talking in complete sentences. 
So we're going to develop a theory of what English speakers say, but it's going to be a theory that's divorced from reality to a certain extent. We're going to imagine what people would be like if there were no distractions, and no flies, and no sudden homicides, no falling asleep in the middle of your sentences, all of that stuff. So the distinction here is competence versus performance. We're imagining a speaker who's kind of like a frictionless plane: there's no air resistance or whatever else, so various kinds of complications don't arise. So we're going to be talking about speakers' competence, what they would do if there were no distractions and no problems. There's also performance. That's the study of what people actually do. And we want to study that, sure, but we're going to develop a theory of competence on the theory that it'll be simpler, in the hope that by abstracting away from various kinds of complications, we'll get a clearer picture of what's going on. Does that make sense? That's how we're going to do syntax. Similarly, "It's raining" is a possible sentence of English. "John thinks that it's raining" is a possible sentence of English. "Mary thinks that John thinks that it's raining" is a possible sentence of English. In fact, for any sentence in English, it's always possible to create a longer sentence, so take any sentence, S. Here's a recipe for another sentence of English. You can always say, "She thinks that S," where she maybe refers to different people in every clause. So you can say, "It's raining." "She thinks that it's raining." "She thinks that she thinks that it's raining." "She thinks that she thinks that she thinks that it's raining." You can keep going arbitrarily long. There is no bound on the length of English sentences. 
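[The recipe above — any sentence S yields a longer sentence "She thinks that S" — can be sketched as a tiny program. This is just an illustration; the function name `embed` and the `depth` parameter are not from the lecture.]

```python
# A minimal sketch of the embedding recipe: for any sentence S,
# "She thinks that S" is also a sentence, so there is no longest sentence.
# The function name and `depth` parameter are illustrative.

def embed(sentence: str, depth: int) -> str:
    """Apply the "She thinks that S" recipe `depth` times."""
    for _ in range(depth):
        # Lowercase the first letter of the embedded clause, as in running prose.
        sentence = "She thinks that " + sentence[0].lower() + sentence[1:]
    return sentence

print(embed("It's raining.", 0))  # It's raining.
print(embed("It's raining.", 2))  # She thinks that she thinks that it's raining.
```

[Competence says `embed` is well-defined for any `depth`; performance — breath, attention, mortality — is the only reason nobody ever utters the result for a very large `depth`.]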
When we say it that way, you can tell that I am talking about competence because no matter how many recordings of English speakers you go through, you will never find an infinitely long sentence. Nobody actually says these things. But the reason nobody says an infinitely long sentence, the idea is going to be-- that isn't a fact about grammar. It's a fact about life. And we don't care about life in this class. The fact that if I were to start saying, "She thinks that she thinks that she thinks that she thinks that she thinks that she thinks that she thinks that she thinks that it's raining," then eventually people would stop listening to me, or I would run out of breath, or I would need to take a break to eat, or I would die-- there are various reasons that I will eventually stop uttering my infinitely long sentence, but we're not going to have a grammar that says-- we're not going to try to find out what's the longest sentence anybody ever uttered and try to get that fact to be a fact that we want our grammar of English, our theory of the possible sentences of English-- we're not going to try to predict that. What we're going to have is a theory that says English sentences can be arbitrarily long. And then, yeah, eventually people die, and so nobody ever says a sentence that's infinitely long. But that's about death. That's not about-- and we're not going to talk about death in this class, except when we do. Does that make sense? Yeah? So that's another move we're going to make, another instance of us caring about the difference between competence and performance. Nobody ever performs an infinitely long sentence, but we're competent to produce them. So enough talking about what we're going to do. Let's begin doing it. Questions before I begin doing some syntax? Here's the sentence: "I will find the red book." A grammatical sentence. Acceptable. It's clear what it means, although we've just said it doesn't matter whether it means anything. 
I said early on we're going to want to have a way of saying which parts of this sentence were put together as units, like with "unlockable." We wanted to be able to say "unlockable" is ambiguous because it can consist of a unit "unlock" to which you've added "-able," or a unit "lockable" to which you've added "un-." That's what that ambiguity comes from. And we're going to do a similar kind of thing with syntax. We're going to look for these units. And what we're going to find is that there are various phenomena that care about whether something is a unit in that tree, a single blob of structure. So in a sentence like "I will find the red book," for example, we'll see that syntax treats that string, "the red book," as a unit. There are various phenomena that care about that. So one of them is what's sometimes called topicalization. It's possible to say things like "The red book I will find." For me at least, it's easiest to say things like that if I follow it up with "The blue book, I will leave right where it is." It's OK to say things like "The red book I will find." It's OK to take a substring like that, "the red book," and put it together with another similar substring conjoined with the word "and." So you can say things like, "I will find the red book and the blue pencils." It is OK to use "the red book" as a possible answer to a question. This is like the stuff we were saying before about things you can say if you're astonished. So you can say "the red book" as, basically, a sentence under the right circumstances. For example, if somebody has just asked you what you will find. These are all ways in which "the red book" is treated as a single object that syntax gets to refer to. I hope I was smart enough to contrast it with something else on the next slide. Yes, I was. But let me give you one other test. Yeah, this is a test. I can say things like "What I will find is the red book." 
So I can rearrange the words of the sentence in a way such that there is a word, "is" before that string "the red book." It puts a special kind of emphasis on "the red book." It's called clefting. Contrast that with-- so this is not just a property of every three-word string in the sentence. So "find the red," for example, is not a constituent. It's not a phrase. It's not something we need to make reference to. So you cannot say things like "find the red I will book." So "the red book, I will find," "the blue book, I will leave where it is," fine. But "find the red I will book, leave the blue I will pencils." No. Can't do this with just any random three-word string. Similarly, there is no question-- there are questions to which the answer is "The red book," like "What will you find?" There are no questions to which the answer is "Find the red," apart from "What are the third, fourth, and fifth words of this sentence?" So put aside mental linguistic games like that. You can't-- and there's no question that will give you that answer. Yeah? AUDIENCE: What color do I need to find? NORVIN RICHARDS: "What color do I need to find?" "Find the red." Really? Oh, I see. You mean "What color book do I need to find?" "Find the red." AUDIENCE: How [? about ?] "Find the red one?" NORVIN RICHARDS: Find the red-- I want to say "Find the red one." Yes. Is there a faster way to convince you of that? Or similarly, if I switch to my other way of eliciting fragments of sentences, if I tell you I will find the red book and you're amazed, you can say, "The red book?" But if I tell you that I will find the red book and you're amazed, you're not going to say, "Find the red?" I think. Do you think that's true? But yeah, I take your point about-- to the extent that you can use "the red" as shorthand for "the red one." AUDIENCE: It's red acting as the noun, not red acting-- NORVIN RICHARDS: Yeah. Yeah. Yeah, no. That's an interesting point. Several people have points about that point. Joseph? 
AUDIENCE: I think "Find the red" is an acceptable answer to a question that you're finding this-- suppose you have this fictional-- this children's game where there's a bunch of little tiles. You have to find the red-- NORVIN RICHARDS: Yeah yeah. AUDIENCE: --the red ones. NORVIN RICHARDS: Yeah. So this is like your example where we're going to treat red as a noun that you're going to go find. AUDIENCE: I guess this is kind of the same thing-- "land of the free," "home of the brave." NORVIN RICHARDS: Oh. Yeah. So we have some cases where we have things that you certainly-- should be adjectives, that either we're getting to use them as nouns, or we're getting to modify nouns that you can't hear, however we want to talk about that. Yeah, good point. Good point. Yeah? Other points about this? So all this slide is meant to convince you of is that "the red book" and "find the red" don't have the same status. "The red book," we want it to be a substring that has certain privileges, can be used for these various types of phenomena, as opposed to "find the red," which you can't do those things with. So it isn't just these are phenomena that pick out three-words substrings. It's these are phenomena that pick out certain substrings and not others of the sentence, certain strings of words and not others of the sentence, and we're going to try to figure out theories about why those are the special strings. And what we're going to do is we'll say just as with "unlockable" we were taking pairs of things and putting them together to form larger things, larger units. We'll do the same thing here, only with words. We had this operation, we were calling it merge, that takes pairs of things and puts them together into a larger thing. And similarly here, what we're going to do to create a sentence like "I will find the red book," we'll start with just the end of it, "find the red book" is we're going to take pairs of things like "red" and "book" and put them together. 
And then we'll take that unit that we've created by putting together "red" and "book" and we'll put that together with this word, "the." And then we'll take that thing that we've created by putting together "the red book" and we'll add "find." So just as with "unlockable," we were taking pairs of things and putting them together in pairs to create these larger and larger structures. We're going to do the same thing to create sentences out of words. This way of talking about it has the virtue of giving us a vocabulary for talking about those kinds of observations we were making on the last two slides. So when we say "the red book" is a unit that various things get to apply to, things like what I called topicalization where you take a chunk of the sentence and put it at the beginning, and it has some kind of emphasis. Because this is syntax and not semantics, we won't worry too much about what it means. When we say that's something you can do to the string "the red book," but not, for example, to "find the red," this tree gives us a way of talking about that. There is a unit that we created in the course of putting things together in pairs that is just "the red book." It's the unit that I've circled there. But there is no unit that we've created as we've been putting these pairs together that consists just of "find the red." Do people see that in this tree? So there's a node in the tree, if you want. It's the one that I circled in red that consists just of the words "the red book." But there is no thing that I could circle that would consist just of the words, "find the red." There are other things I could circle. I could have circled the largest thing, but that corresponds to "find the red book." Or I could have circled a thing that consists just of "red" and "book." It would just be "red book." But there's nothing that's just "find the red." And that's what we're going to relate to all those observations we made on those two slides. 
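[The tree-building just described can be sketched in a few lines: merge pairs things into nested tuples, and being a constituent amounts to being a node of the resulting structure. The helper names here are my own, not notation from the course.]

```python
# Merge, as described above: take two things, put them together into a unit.
# Nested tuples make the units (constituents) visible.

def merge(a, b):
    return (a, b)

red_book = merge("red", "book")        # the unit [red book]
the_red_book = merge("the", red_book)  # the unit [the [red book]]
vp = merge("find", the_red_book)       # [find [the [red book]]]

def nodes(tree):
    """Every unit created by merge, plus the words themselves."""
    result = [tree]
    if isinstance(tree, tuple):
        for child in tree:
            result.extend(nodes(child))
    return result

# "the red book" corresponds to a node of the tree...
print(("the", ("red", "book")) in nodes(vp))  # True
# ...but no node corresponds to the string "find the red": the only units
# are the words, ("red", "book"), ("the", ("red", "book")), and the whole thing.
```

[So "the red book" can be circled in the tree and "find the red" cannot, which is exactly the asymmetry the topicalization and question-answer tests picked out.]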
I've said this now a couple of times, this is meant to remind you of stuff we were doing when we were doing morphology. When we were doing morphology, we were using this operation, we called it merge, that assembled pairs of things and created new things. When we were doing morphology, we were then saying when you put these two things together, you have a new thing, and we're giving that thing a label whose properties are determined by the things that you put together. So when you put together "un-" number one, and "lock," it's part of the specification of un- number one that when you combine it with a verb, what you get is a verb. So the tree on the left there, "lock" is labeled as a verb, and "unlock" is also labeled as a verb. People see that. And then when you combine that verb "unlock" with "-able," it's a property of "-able" that it merges with verbs and the thing that you create as a result is an adjective. And so the whole thing is "unlockable." This is what we were doing with labels before. We're going to want to do something similar for syntax. We're going to want to label the parts of these trees. What kinds of labels are we going to use? Well, look, I just gave you all these diagnostics to try to convince you that when you say things like, "I will find the red book," "the red book" should be a unit, right? Phenomena like topicalization should get to make reference to it. We have to decide what to label that. Notice there are a bunch of things that have all of those properties. So if I said, "I will find red books," well, you'd be able to topicalize "red books." So "I will find red books." Or "Red books I will find." "I will find red books," and you're amazed: "Red books?" Similarly, "I will find those red books about linguistics." Yeah, "those red books about linguistics," that's a unit of a similar kind. You can say "Those red books about linguistics, I will find." Yes, so that's a unit we want to be able to make reference to. 
"I will find books" similarly "books I will find" or "I will find books." "Books?" You're amazed. So "books" by itself is apparently a unit of the same kind. So apparently the important part of that, the thing that all of those have in common, is that they contain a noun. So we're going to name that phrase after the fact that it contains a noun. So when we put together "red" and "book," what we get has properties that are determined by the fact that it contains a noun. If there were no noun, it wouldn't have those properties. Similarly, with "the red book." The things that can go in that slot, the things that can go after "find," we just saw in the last slide, there are various kinds of things that can be larger or smaller. What they all contain is a noun. So we're going to name that thing after those kinds of units, after the fact that they contain a noun. We'll give it the label "noun." Kind of like when we added "un-" to "lock" and got "unlock," we said, oh yeah, this is also a verb. It acts like a verb in every other way. You can use it as a verb: "I unlock the door." You can add things to it that can be added to verbs, like "-able." So similarly here, when we're trying to decide what kinds of things can go in this slot, the slot that's right after "find," the thing that they all have in common is that they contain nouns. So we're going to name them after the fact that they contain nouns. We'll call them all "extended nouns." Yeah? Similarly, there is another unit in "I will find the red book," not just "the red book," which we said is this extended noun. There's also another phrase, "find the red book," which is also a unit. And it was the unit that we were constructing in the slides when I was showing you trees for how to construct "find the red book." and it passes these tests for unithood, constituency. 
You can say, "Find the red book I will," not only if you are Yoda, but also if you say something like, "I said that I would find the red book, and find the red book I will." These are more or less OK sentences of English. Or if I say "I will find the red book" and you are amazed, you can say, "Find the red book?" (No one has ever found the red book. It's been lost for centuries.) And various other things that are on this slide. "Find the red book" is also a constituent, also a unit that we're going to want syntax to be able to make reference to. And just as we said, the important part of "the red book" is the noun; that's the thing that determines that that phrase is OK in that place. Yeah, there are various phrases that you can put in that place and they all have nouns in them. Similarly, all the things you can put in this place have verbs in them. They don't have to have anything else. You can say things like "I will leave," and then "leave" has all the properties we just ran through. "I said I would leave, and leave I will." "Leave" is a unit of the same kind as "find the red book." Or "I will leave" and you're amazed. You can say, "Leave?" So "leave" is a unit of the same kind as "find the red book." So the important part for this part is the verb, the part that determines that that's the kind of phrase that can go in that position. And so when we're putting together "find" with "the red book" we said "the red book" is kind of an extended noun. It's a large unit whose special property is that it contains a noun. "Find the red book," we're going to give that the label verb, because having a verb is the important part for that. So yeah, so far so good? So yeah, one more. You can say "I will find the book in the garage." "In the garage" is a unit, it's a constituent. It's a phrase, it's the kind of thing syntax gets to care about. And again, if I say, "I will find the book in the garage" and you're amazed, you can say "In the garage?" 
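[The labeling convention just described — a phrase is named after its important part, the way "the red book" is an extended noun, "find the red book" is labeled verb, and "in the garage" is a prepositional phrase — can be sketched like this. The dictionary representation and the category abbreviations (N, V, P, D, A) are my own shorthand, not the course's notation.]

```python
# Labeled merge: when two things combine, the result carries the label of
# its "important part," just as "-able" attached to a verb yields an adjective.
# Representation and category abbreviations are my own shorthand.

def leaf(label, word):
    return {"label": label, "parts": [word]}

def merge(label, a, b):
    """Combine a and b; the new unit is named after its important part."""
    return {"label": label, "parts": [a, b]}

# "the red book": an extended noun (label N), because it contains a noun.
the_red_book = merge("N", leaf("D", "the"),
                     merge("N", leaf("A", "red"), leaf("N", "book")))

# "find the red book": labeled V, because the verb is the important part.
vp = merge("V", leaf("V", "find"), the_red_book)

# "in the garage": a prepositional phrase, label P.
pp = merge("P", leaf("P", "in"),
           merge("N", leaf("D", "the"), leaf("N", "garage")))

print(the_red_book["label"], vp["label"], pp["label"])  # N V P
```

[The label explains distribution: anything labeled N — "books," "red books," "those red books about linguistics" — can sit in the slot after "find," whatever else it contains.]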
If I want to, I can topicalize "in the garage." I can say things like, "In the garage I will find the book." It's a prepositional phrase. The kinds of things that can go in a prepositional phrase include prepositions and noun phrases. "The garage" is another noun phrase. There are also prepositional phrases that seem to just contain a preposition, like, "I will look up," where, again, if I say "I will look up" and you're amazed, you can say, "Up?" (Why will you look up?) "I said I would look up, and up I will look"-- Maybe. Yeah? AUDIENCE: How do we know that "up" isn't like an adverb describing "look"? NORVIN RICHARDS: Yeah. Yep. Yep, yep, yep. Yes sir? Adverb. Adverb is a funny word because there are a lot of things that can be used that way. So if by adverb, we mean thing that modifies the verb, there are a lot of things that can be used that way, including adverbs like "I will leave quickly," where "quickly" is an adverb. But also prepositional phrases like "I will leave in a chariot" where the prepositional phrase is like an adverb. Or maybe even noun phrases like "I will leave the day after tomorrow," where "the day after tomorrow" sure looks like a noun phrase. It's got a noun in it, "day," and then "the" before that, but we know that can go at the beginnings of noun phrases. So there are probably-- so the word "adverb" can be used to cover a bunch of things, including things that we don't have any other word for, like "quickly," which is an adverb, but also things that we would give other labels to like "noun phrase" or "prepositional phrase." And so I think you might be right that this is an adverb in the sense that it modifies the verb. I think I might also be right in the sense that it's a prepositional phrase that's being used adverbially. Does that make sense? So what I'm trying to do is work out a way for you and I to both be right, which is always my goal in discussions. Yes? AUDIENCE: What about a sentence like "I will wake up"? 
NORVIN RICHARDS: "I will wake up." AUDIENCE: Does it make sense to say [INAUDIBLE]?? NORVIN RICHARDS: Yeah, that's a nice example. "I will wake up." So probably-- I mean, I just tried to convince everybody that it was OK to say up-- I said, "I would look up and up I will look." But "I said I would wake up and up I will wake," I'm not even going to try. And similarly, I think, if I say "I will wake up" and you're amazed, you're not going to say, "Up?" Yeah. That's a nice example. There are a bunch of things that look like prepositions that combine in an interesting way with verbs in English and a lot of other Germanic languages. And "wake up" is one of those. And this is actually the kind of example that I was exploiting in the first slides I was showing you about syntax, even the ones on the first day that you can say-- so wake up, you can say "I will wake up." You can also say, "I will wake up the cats." It's unwise but it's grammatical. But I think "up the cats" is not a prepositional phrase. Notice, for example, that if I say I will wake up the cats and you're astonished as you might well be, you can't say "Up the cats?" And "I said I would wake up the cats and up the cats I will wake," this is no good. Yes? AUDIENCE: I think that's not because kind of combined with the verb you can say "I will wake up" but you can't say, "I will wake down." NORVIN RICHARDS: Yeah. Yeah, yeah, yeah. Yeah. So I think-- we don't want to think of "up the cats" as a unit. We want to think of "wake up" as a unit that has the cats as an object. So this is the point you were making, and I think that you are also making, that there's something special about that interaction. Yeah. AUDIENCE: Is that an interesting pair, "Are you up for lunch?" or "Are you down for lunch?" They mean the same thing. NORVIN RICHARDS: And so we are learning-- this is why I'm glad I'm not a physicist, yes. So I work in the domain in which up and down can be the same thing. Yeah. 
If I were a physicist, then if NASA were to hire me, the spacecraft would have all kinds of problems. That's a nice example. Relatedly, "I will wake up the cat" is OK. You can also say, "I will wake the cats up," which is different from "I will walk up the stairs." You cannot say, "I will walk the stairs up," I think. And relatedly-- I exploited this in an earlier slide-- you can say, "I will wake him up." "I will wake them up." But "I will walk them up," no good. In fact, it's the other way around. I think, right? "I will walk them up." What are you going to do with those stairs? "I'm going to walk them up." No. You have to say I will walk up them. You cannot say, "I will wake up them." You have to say, "I will wake them up." So these "up"s are different. So "up the stairs" is a prepositional phrase. "Up the cats" is not a prepositional phrase. We need different structures for these verb phrases, and we will develop them. You had a-- AUDIENCE: Yeah. I was also going to say, but you can say-- if you're taking someone home, you can say, "I will walk you up to your room." NORVIN RICHARDS: Yes. "I will walk you up to your room." You can say, "I will walk the student up to her room." You can't say, "I will walk up the student to her room." Can you? AUDIENCE: But yeah, so it's like the opposite. NORVIN RICHARDS: Wait. Can you say, "I will walk up to the student to her room?" AUDIENCE: No. NORVIN RICHARDS: No. AUDIENCE: That implies you're walking on the student, which-- NORVIN RICHARDS: Yeah, OK. So if the student is lying down, yeah, and I'm using her as a ladder then, yes. But if not-- so I think "I will walk the student up to her room" is different from all of these, actually, kind of interestingly. I think maybe "up" is modifying "to her room." We want there to be a constituent "up to her room." Notice that if I say, "I will walk her up to her room" and you're amazed, you can say, "Up to her room?" which suggests that that's a constituent. 
Yeah, that's a nice third thing I should talk about when I'm talking about this. Cool. Oh, yeah? AUDIENCE: Yeah, I had a question. So I think there's something different between waking up or walking up. In a sense, you could say, [INAUDIBLE] and that would make sense. So it's kind of like, I don't know, an adjective, in some sense, is describing [? state. ?] But at the same time, it's not an adjective. It's describing a verb. So I was wondering if there's any big distinction between waking up and like walking up [INAUDIBLE].. NORVIN RICHARDS: So that's a very nice example. So in a way, waking the cats up, it's a little bit like painting the cats red. Not in very many ways, but in this way. If you paint the cats red, you paint the cats. And as a result of that, the cats are red. The cats change into cats that were not red before and now they are cats that are red. Similarly, if you wake the cats up, the cats were not up before. But when you're done, the cats are up. And possibly also red. It depends on how you did it, I guess. So I think you're absolutely right that we want "wake the cats up" to have "up." "I will wake up the cats." We want "up" to not be a preposition that's combining with the cats to be a prepositional phrase. It's something else. It's like a predicate of some kind. It's like "red" in "I will paint the cats red." And yeah, as we study these verb phrases further, we'll want to have structures that give them that character. That was a very nice example. Did you have a question a while ago? No. Joseph? AUDIENCE: I think that that does fit. You can also wake the cats. NORVIN RICHARDS: Yes, you can wake the cats. You can also paint the cats. You shouldn't, but you can. Yeah. Yeah. Did you have a question? AUDIENCE: Yeah. So with the example of walking her up to her room, I think you need to say, "I will walk her up." NORVIN RICHARDS: Yes. AUDIENCE: Right? NORVIN RICHARDS: Yep. 
AUDIENCE: I think because the "up" is modifying "to her room," but I don't think [? so. ?] NORVIN RICHARDS: It doesn't have to, does it? No. I will walk her up. I will walk her up to her room. I will-- but similarly, you can walk the student-- so I keep going to the student from her, because there's a difference between pronouns and non-pronouns. You can wake up the cats or you can wake the cats up. You can wake them up, and you cannot wake up them. So a pronoun has to go before this kind of "up," but the cats can go on either side. So "I will walk her up," you're absolutely right. That's the way to say that. But I think you also have to walk the student up. You can't walk up the student, unless the student is lying down and you're walking on her. But if you mean you're going to walk with the student so that the two of you are up, so we want "walk the student up" to be different from "wake the cats up," and maybe also different from "walk up the stairs." Yeah, it's a third kind of thing, which, as we work further on this, I'm-- in a way, I'm glad that we're running out of time because it means that I have a week to create slides about this. But we're going to want different structures for this. You guys are making excellent points about this. So we're developing probes into the structure of the inside of the verb phrase that mean that we're going to need at least three different kinds of structures for a sequence that looks like verb, them, up, or verb, up, them. So we've got now three kinds of examples to talk about. There's "wake them up," there's "walk her up," and there's "walk up them" as in "I walked up the stairs." And that seems to be three different things that we need three different structures for. But for "walk up them" we want "up" and "them" to combine to be a prepositional phrase. But for these other two, we want something else. We're going to want to circle around and try to find out what that other thing is. Yeah. Am I getting at your point?
You're absolutely right. Yeah, so we need a way of covering that. Lots of people had hands. Does anybody else-- yeah? AUDIENCE: I was going to suggest something else [INAUDIBLE] Even though you can't say, walk the-- "walk the stairs up to your room." NORVIN RICHARDS: "I walk--" AUDIENCE: You could potentially say, "Climb the ladder to your room?" NORVIN RICHARDS: Yeah. That's because you can climb a ladder. So you can climb a ladder, and you can go up to your room. That's the one where I think we want "up" possibly to modify to "your room." Or maybe not. Maybe this was your point, maybe we want "up" and "to your room" to be separate adverbs. But just like you can walk up to your room, you can climb a ladder up to your room. The ladder isn't-- AUDIENCE: --or the stairs. NORVIN RICHARDS: Yeah. Yeah. And similarly, to the extent that you can say "He's walking the stairs," which I think you can sort of say, it means he's on the stairs, walking. I think that's what you're doing there. I hate to do it, but I want to ruthlessly squelch everything else you guys want to say, because I have a triumphant slide that I want to show you. And then maybe we can unsquelch you and come back. So here's the idea, just so it's clearer for everybody. For the easy case, the one that I want to refocus your attention on, "I will find the book in the garage," what we want to do is construct a structure for this that's sensitive to all of the tests for structure that we've been developing, and it's going to involve putting things together via pairwise merge and creating labels for the things that we create via pairwise merge that are typically labels that come from one of the two things that we've merged. So when we merged "the book" or "the garage," we're going to create something we're going to give the label "noun" to. 
Or when we merge "in" together with "the garage," we're going to create what I've been calling a prepositional phrase, something that we'll give the label "p" to, saying this has the properties that it has-- it contains a preposition "in." Similarly, we'll combine "find" with "the book" and we'll combine "find the book" with "in the garage," and we'll end up with a structure like that. Is anybody shocked by this slide? You are, yes. What did you want to do? You're gesturing. You want to combine things in a different way? So actually, you may be right. No, here's a better way to say-- we may both be right. So this is an example where this way of merging things creates a well-formed structure. Notice that if I say, "I will find the book in the garage," it's OK. "I said I would find the book in the garage. And in the garage, I will find the book." That's a unit that topicalization gets to make reference to. That's a property of this tree. So our tests get to tease this out now. Now I'm regretting us being almost out of time because there are many things to say. What we're going to see next is that when we construct syntactic trees for strings of words, it's often the case that we get ambiguities like the "unlockable" ambiguity, places where there is more than one way to combine things. And I think you may be looking at this slide and thinking, oh, but there's another way to combine these words, and you're absolutely right. There is. And what we'll do is develop tests that allow us to see which way we've combined words in different ways. And we'll find cases like "unlockable" where, depending on in what order you combine things, you get different meanings, and our tests will combine with that. Yep? All right. Thanks, everybody. We will do this again on Tuesday.
Lecture_4_Morphology_Part_3.txt

[SQUEAKING] [RUSTLING] [CLICKING] NORVIN RICHARDS: So first some review. Last time, we were doing "unlockable," and I was trying to convince you that it's useful to think of words, at least some words, as consisting of multiple parts. We're calling these parts morphemes. And there's this process of combining morphemes, we called it merge, where you take two things, two morphemes-- not necessarily two morphemes, two things, and you put them together and you make a new thing. So you can take a morpheme like "un-" and a morpheme like "lock," and you can put them together to make a verb "unlock." And so I just corrected myself. Sometimes the two things you take are morphemes, but sometimes they are the results of previous instances of merge. So you can also take this thing "unlock" that you've made as a result of merging "un-" and "lock," and merge that with "-able" to get you "unlockable." That procedure that I just ran through is what gives you the tree on the left. So the trees there are meant to be representations of the order in which you did things. So the tree on the left is a representation of two merge operations. First, merging "un-" with "lock," and then merging "-able" with "unlock," giving you an adjective that means a functioning lock, able to be unlocked. This is all stuff we went through last time. And this is why the word "unlockable" is ambiguous because, well, there are two ways that you can assemble it, and the two ways give you different interpretations. Are there any questions about any of that? This is all stuff we did last time. OK. Yeah, OK. All right. So that's what we were doing with "unlockable." And then we also said, and we talked about this a little bit, there need to be some statements about what's called allomorphy. So sometimes you don't just peacefully put morphemes next to each other, they change as a result of being next to another morpheme.
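Before moving on, the merge procedure and the "unlockable" ambiguity reviewed above can be sketched in a few lines of code. The nested-tuple representation and the category labels here are illustrative choices, not a standard linguistic formalism:

```python
def merge(label, left, right):
    """Combine two pieces into a new, labeled constituent."""
    return (label, left, right)

# Morphemes, written as (category, form) pairs.
un = ("prefix", "un-")
lock = ("verb", "lock")
able = ("suffix", "-able")

# Parse 1: merge "un-" with "lock" first, then merge "-able" with "unlock".
# [[un-lock]-able] = "able to be unlocked" (the lock works).
parse1 = merge("adjective", merge("verb", un, lock), able)

# Parse 2: merge "lock" with "-able" first, then merge "un-" with "lockable".
# [un-[lock-able]] = "not able to be locked" (the lock is broken).
parse2 = merge("adjective", un, merge("adjective", lock, able))

def leaves(tree):
    """Read the morphemes back off a tree, left to right."""
    if len(tree) == 3:
        return leaves(tree[1]) + leaves(tree[2])
    return [tree[1]]

print(leaves(parse1))    # ['un-', 'lock', '-able']
print(leaves(parse2))    # ['un-', 'lock', '-able']
print(parse1 == parse2)  # False: same morphemes, two derivational histories
```

The same string of morphemes comes out of two different sequences of merge operations, which is exactly why the word is ambiguous.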
The result is what's called allomorphy. So a morpheme can have allomorphs, different forms that it has, depending on its environment. Sometimes our statements of allomorphy will have to be very specific to the morphemes in question. So we'll say that when you merge "go," together with past tense, that the result is "went." And that's not the result of a general process, that's something you just have to look up when you look up the word "go" in English. Or that when you add "-ity" to something ending in "-ic," like "electric," that the "k" sound at the end of "-ic" will be pronounced like an "s." You get "electricity" and not "electric-ity." So sometimes these are statements about particular morphemes and what they do when they combine. Other times, we'll be able to do allomorphy via more general rules. So we did spend some time talking about Polish where we convinced ourselves that it's useful to think that Polish has a general rule-- which we'll circle back and talk about more later on-- a general rule that says that if you have a "g" at the end of a word, it becomes a "k." So in Polish, there are various words that when you pronounce them, they seem to end in "k," and then we saw that if you add the suffix so that the "k" is no longer at the end of a word, sometimes it really was a "k," in which case it stays a "k." But sometimes it was a "g" concealing itself as a "k," and it reveals its true self as a "g" when you add the suffix. Yep, this is a crash course in Polish. Now you know as much about Polish as I do, which is not much. Any-- this is all review. Does this all make sense? No questions? All right. So one of the things that we did several times-- in fact, I casually did it on the last slide-- was to use words like "noun" and "adjective." And I thought we should just take a second to talk about what we mean when we say words like that. I bet that a lot of you were taught what nouns and adjectives are. 
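The two kinds of allomorphy statements described a moment ago, item-specific lookups like go + PAST = "went" and general rules like Polish word-final "g" becoming "k", can be sketched as follows. The stem "rog" is just a stand-in to show the mechanism:

```python
# 1. Item-specific allomorphy: the result of merge is listed, not computed.
IRREGULAR_PAST = {"go": "went", "teach": "taught"}

def past_tense(verb):
    if verb in IRREGULAR_PAST:
        return IRREGULAR_PAST[verb]
    return verb + "ed"  # the default, regular case (ignoring spelling rules)

# 2. A general rule: a "g" at the end of a word is pronounced "k"
# (final devoicing, as in the Polish examples).
def pronounce_polish(word):
    if word.endswith("g"):
        return word[:-1] + "k"
    return word

stem = "rog"  # a stand-in stem with an underlying final g
print(pronounce_polish(stem))        # "rok": sounds like it ends in k
print(pronounce_polish(stem + "i"))  # "rogi": add a suffix, the g resurfaces
print(past_tense("go"))              # "went"
print(past_tense("walk"))            # "walked"
```

The underlying "g" conceals itself as a "k" at the end of the word, and reveals its true self once a suffix keeps it away from the word edge.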
But it's possible that some of you were not, and I know that some of you are not native speakers of English, and this might not be high on my list of words to learn, if I were learning another language. Wow, my power cord just turned into a pretzel. This is pretty impressive. I didn't know they did that. So I wanted to just talk briefly about that before we go any further into morphology. What's a noun? You guys, maybe some of you were taught what a noun is. Yes. An object or a thing? An object or a thing? Yeah. Anybody else taught things like that about what nouns are? Yeah. Yes. A signifier of a particular matter. A signifier of a particular matter. Huh, I like that. That's a very classy definition. Yes? AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Yeah. So I was taught that, a person-- actually, I was taught a person, place, or a thing, yeah. And then later, I learned that kids today were learning that it was also idea. I guess on the idea that ideas are not things, which, I guess, they're not. So these are all definitions of "noun" that make reference to the kinds of things that nouns mean, right, so that they refer to people, or ideas, or things, or-- yeah, this is what nouns are. And that's fine if that works for you. That might be the best way for you to think about nouns. Here's another way of thinking about it, which you can use if you want to if you're ever confused about whether something is a noun or not, if you find yourself as you go further in linguistics, every so often, you will-- yeah. Part of the point of higher education is to make you, like, doubt yourself and be unhappy, things that you were sure that you understood, all of a sudden, you're like, wait, do I really deeply understand that? And it's meant to undermine your emotional security, basically, that's what education is for. So here's another way of thinking about what nouns are. 
If you're making a sentence, let's say, putting a sentence together is like putting together any other complicated thing with lots of parts. Take your favorite example of that, a jigsaw puzzle, or a model airplane, or whatever, or IKEA furniture, if you've ever assembled that. If you're ever assembling things like that, they have different parts with different shapes that go in different places, right? If you're making a model airplane, there are the wheels, and the wheels go, well, where the wheels go. Or if you're making an IKEA chair, there are various boards, but then there are also screws and-- right? There are things that you're supposed to put in particular places. And you could name those things by their shapes, right? So you could talk about the wheel on an airplane, or the edge piece on a jigsaw puzzle. And you can also kind of name them by their functions. If you weren't sure what an edge piece was, well, it would be the piece that goes on the edge of the jigsaw puzzle, the one that goes on the outside. And I guess kind of similarly, "noun"-- when we say that something is a noun, what we mean is that it's something that goes where nouns go, kind of like an edge piece. It's a piece that goes on the edge. So if you're putting a sentence together, there are parts of the sentence where nouns belong. And noun is just a name for the class of things that can go in those places. So, I mean, for example, if you can grammatically finish a sentence with a single word, a sentence like, we are talking about (blank), or we are talking about "the" (blank), then it's a noun. And I'm saying this partly as a challenge, try to show me that I'm wrong. Yes. AUDIENCE: [INAUDIBLE]. NORVIN RICHARDS: You don't, you're right. So we are talking about "him." So "him" gets to count as a noun. And we would want pronouns to be a particular subspecies of noun. That might not be such a bad result. Yes. AUDIENCE: Wouldn't names also be able to fit in that? NORVIN RICHARDS: Yeah. 
So we're talking about Mary. "Mary" would be a noun. And that's probably right, "Mary" should probably be a noun. Yes. AUDIENCE: In a sentence we are talking about running, there's a word for that, isn't there? NORVIN RICHARDS: It's a gerund, yeah. So this is one of the places where you're supposed to be filled with emotional angst. Wait, "running," it's got "run" in it. That looks like a verb. But it's a verb that's been converted into a noun so that we can put it in this sentence. That's what a gerund is. It's a name for a verb that's been turned into a noun. Yeah, good point. Yeah. OK. So I invite you to try to come up with sentences like this for your favorite parts of speech. Here's another one. If you can grammatically finish a sentence like, "I consider her (blank)," and the result is a sentence, that means, "I think that she is (blank)." So I consider her tall, or brilliant, or whatever, then that's an adjective. The thing that's in the blank. Sometimes the sentences will not make a whole lot of sense. So "I consider her autumnal"-- It's not clear what that means. But if you compare it with "I consider her run," yeah. So "I consider her run" just makes no sense at all. Whereas "I consider her autumnal." I sound deeply poetic, I guess. And you wonder what I mean by that. Yeah. But-- so "autumnal" is an adjective, "run" is not an adjective. And I have this bit about that it has to mean, "I think she is (blank)," because you can say things like, "I consider her often." That means "I think about her often." "Often" is not an adjective, because if I say I consider her often, I don't mean I think that she is often. Yeah. That's not what that means. So "often" is not an adjective. It's an adverb, actually, and we'll talk about that. I'm not going to make you go through this for all the parts of speech, but this is an interesting exercise. Yes. AUDIENCE: Is this the answer to our problem? NORVIN RICHARDS: Uh-huh. AUDIENCE: Yes. 
It's also [INAUDIBLE],, I think it's coming [INAUDIBLE].. NORVIN RICHARDS: I consider her coming. I'm full of doubt and angst. So there is a parse of that, I consider her coming, which means, I am thinking about the fact that she is coming. It can mean that, I think. I consider her-- can it mean I consid-- I believe that she is coming? I know there are other English speakers in this room. What do you guys think? Yes. AUDIENCE: I know there's like that one part of the game not really ever [INAUDIBLE] NORVIN RICHARDS: Yeah. AUDIENCE: [INAUDIBLE]. NORVIN RICHARDS: Oh, yeah. Yeah. So-- but I think-- I'm sorry, what's your name? Is it Ivana? No. Sorry, what is your-- what is your-- you were asking about "I consider her coming" just a second ago. Yeah. AUDIENCE: Oh, Vlada. NORVIN RICHARDS: Sorry, Vlada, I got you wrong, sorry. Vlada is asking about whether it's possible to have a form like "coming" as an adjective here. Forms like "coming" can be adjectives. So you can say things like "the coming century," right, where "coming" is being used as an adjective. So I said a second ago, "running" can be a noun, right? We're talking about "running." This is one of many tricky things about English. Forms of verbs that end in "-ing" can be nouns, and then we call them gerunds. They can also be participles, and then we call them adjectives. Sorry, they can also be adjectives, and then we call them participles. Let me do that again. "-ing," if you add it to a noun, there's an "-ing" that you add to a noun that makes it into a-- sorry. There's an "-ing" that you add to a verb that makes it into a noun. That's when it's a gerund, like in "We're talking about running." There's an "-ing" that you add to a verb that makes it into an adjective, and that's a present participle. That's the kind of thing Vlada is asking about. So "the coming century," or "a running man," those 'running"s are-- those "-ing" verbs are adjectives, they're participles. 
Sort of like, if we go back to "un-" we convinced ourselves that "un-" has a couple of different forms, that it can attach either to an adjective, or to a verb, right, with slightly different meanings. Similarly, "-ing" has several different guises. It can be a couple of different things. English really needs more morphemes. We have several morphemes that are pronounced the same way. Not every language does that. There are plenty of languages that have just different forms for gerunds and participles. OK. All right. So this has just been an exercise in trying to convince ourselves that we know what we're talking about when we're talking about things like nouns and adjectives. And like I say, I won't make you do this for every part of speech. But if you want to annoy your friends back in the dorm, trying to come up with sentences like this might be an interesting exercise. Yeah. Interesting for me, maybe not for them. OK. Questions about any of that? This has just been so that if anybody heard all this talk about nouns and adjectives and was wondering what the heck are we talking about exactly, that is what we are talking about exactly. So we've been talking now about morphology for a little while. And we have mostly been talking about language specific properties. So we've been saying a morpheme with a given meaning is pronounced differently in different languages, so we started with cats. We convinced ourselves that if we wanted to look for things that were universal about language, figuring out that the word for cat is "cat," that's clearly not universal. Every language has a different word for cat. So morphemes with a different meaning can be pronounced differently in different languages. Morphemes can be various types. They can be prefixes, or suffixes, or infixes, or other things. So those here are three verbs in English, Lardil, and Tagalog, that all mean "danced." They all have-- consist of a verb, along with another morpheme that indicates past tense. 
That morpheme is a suffix in English, but it's a prefix in Lardil, and it's an infix in Tagalog. And morphemes can be bound or free. So here are some new examples of that. This is, again, a point of cross-linguistic variation. So in English, a phrase like "in my hand" consists of three words, three free morphemes. But in Turkish, it consists of a root plus two bound morphemes. So there's the word for hand, to which you are adding suffixes that mean things like "my" and "in." Yeah. Anybody here speak Turkish? Cool. Is this true? Am I lying? OK, good. Similarly, in English, a sentence like "I bought a bed" consists of four free standing words. In Mohawk, it consists of one word. So there's a verb "buy" that is combining with various things, including a morpheme that means "bed." So the object becomes part of this long verb. Anybody here speak Mohawk? OK, good. So please just believe me, yeah. This is how Mohawk works. This is a process called noun incorporation. It's not obligatory in Mohawk, but it's an option that you have. So languages can vary with respect to how their morphemes are pronounced, whether they're prefixes or suffixes, or other things, and whether they are bound or free. Languages are sometimes informally classified by how likely their morphemes are to be bound. And when I say that they're informally classified in this way, what I mean is, it's probably hopeless to try to tightly define what this means. But this is a quick and dirty classification that you will hear people use. There are languages out there-- Mandarin is the poster child for this-- that just are not into bound morphemes. So all of the morphemes are free. They don't have to be attached to anything. So in Chinese, "he ate the meal" consists of four free standing words, doesn't have prefixes or suffixes, mostly. There are a few things in Mandarin that people argue about whether they might be suffixes or not. So like you can make pronouns plural by adding a "men" after them. 
So "wo" is "I" and "women" is "we." So you add the "men" and you get a plural version. It's a candidate for a bound morpheme, but it's certainly a language that doesn't have a lot of bound morphemes. Opposite extreme are languages like Mohawk or Wampanoag. This is the language that is spoken by the traditional owners of a lot of Eastern Massachusetts, including, arguably, the place where we're standing right now. These languages are called polysynthetic. They are really, really into bound morphemes. So things that would not be bound morphemes in English are bound morphemes. This is an example from a letter that was written in the 1600s by a missionary to the Wampanoag on Martha's Vineyard, a guy named Experience Mayhew, because they knew how to give names in the 16th-- 1600s. He was asked, apparently, by the person he was writing the letter to to give him the longest word that he could in Wampanoag. And this is what he came up with. It means "our very skillful mirror makers." And as you can see, it consists of lots and lots of morphemes all piled on top of each other. He kind of cheated by putting "mirror" in the middle of that. So "mirror" itself is a morphologically complex word in Wampanoag. It means "device for looking at your reflection." Yeah, so look, "reflection device," [NON-ENGLISH],, is the word by itself for "mirror." So he sort of gave himself a head start by putting that in the middle of the word. Yeah. AUDIENCE: Is there any sort of connection between, I don't know, geography of where the language is spoken versus whether it's the language tends to be isolating? NORVIN RICHARDS: Which of these things it is? So the question was whether there's any connection between geography and whether a language is isolating or polysynthetic. These things kind of run in families, in language families. So there are a bunch of isolating Polynesian languages, for example. There are a bunch of polysynthetic. Wampanoag is Algonquian. Mohawk is Iroquoian. 
Those language families tend to be polysynthetic. And so because language families are confined to particular places, there's some kind of connection to where you are. I think that might be as far as it goes, actually. Yes. AUDIENCE: Does "polysynthetic" refer to the same group of languages as "agglutinative"? NORVIN RICHARDS: No. "Agglutinative" means something slightly different. We'll get to "agglutinative" in just a second. Yeah, that's a very good question. Other questions you want to ask? OK. I think we have "agglutinative" coming up. Yeah. So this is another, again, informal way of classifying languages. If we were having tests in this class, I would never test you on the meanings of these terms because they're fairly imprecise. But they're useful for giving people a basic idea of what you're looking at. Agglutinative language is-- this is another way people sometimes compare languages. Agglutinative languages are languages in which there are morphemes-- they're typically a bunch of morphemes that are bound, and they are easy to separate from each other. So Turkish is the poster child for an agglutinative language. You've got a verb, and then you have a bunch of suffixes on the verb. And making the cut between one suffix and the next is pretty straightforward. There aren't a lot of fancy morphological changes that happen as a result of these morphemes coming in contact with each other. And each of the morphemes has a pretty simple meaning. They mean things like, each other, or cause, or past. Yeah. Yes. AUDIENCE: So allomorphy wouldn't be the present [INAUDIBLE].. NORVIN RICHARDS: So, not in-- yeah, not the kind of allomorphy that causes you to wonder where one morpheme starts and the next begins. So there are languages-- the polysyn-- so there are polysynthetic languages. There are languages that have lots and lots of morphology on their verbs. Navajo is a very good example. 
We'll see some Navajo examples later, in which there are many morphemes but they tend to squash into each other. And you have to do a lot of analysis to figure out where one starts and the next begins-- where one stops and the next begins. Did you have a question? AUDIENCE: No. NORVIN RICHARDS: Yeah, OK. Further questions? OK. So agglutinative and sort of the opposite of agglutinative, people talk about "fusional," or "inflectional" languages. These are languages that have morphemes, and the morphemes squash a bunch of bits of meaning together in one small space. So these Russian nouns, for example, end in vowels, which indicates the grammatical gender of the noun, and also its number, and also its case. And you aren't going to be able to divide that vowel into parts, right? So the "oo" at the end of [NON-ENGLISH] means feminine and singular and accusative. And the "uh" at the end of [NON-ENGLISH] means feminine and plural and accusative. And it's hopeless to try to find a part of that suffix that's the part that means singular or plural, right? It's just all in one big spot. Yeah. AUDIENCE: Is this the case with endings in Spanish? NORVIN RICHARDS: Say it, again. AUDIENCE: Is this the case with endings in Spanish? NORVIN RICHARDS: Oh, I see. You're talking about the verb endings in Spanish? AUDIENCE: More like [INAUDIBLE]. NORVIN RICHARDS: Oh, I see. So this is why, I think I said-- so she's asking, is this true for Spanish nouns which end in endings like "-os" and "-as," right, which indicate both gender and number sometimes. And in those particular cases, you could try to convince yourself that the "oh" indicates masculine, and the S indicates plural, right? In lots of cases, this is why I said we're never going to test you on this kind of thing because this classification is really a classification of how easy is it to make divisions between morphemes and make breaks in places? And the answer is, sometimes very easy-- that's Turkish. 
Sometimes impossible, that's Russian. And Spanish is somewhere in the middle. It's more maybe for that particular case, it's more on the agglutinative side. Maybe there's a break that you can make. But, yeah, so asking questions like, how easy is it? This is no way to do science. Yeah. Other questions? Yes. AUDIENCE: [INAUDIBLE]. NORVIN RICHARDS: Oh, animate. Where? Oh, here. It stands for the fact that Russian masculine accusatives take "ah" only if they refer to things that are alive. Right. So, yeah, that's what it means. Yeah. So there's syncretism in Russian between the accusative and the genitive, I guess it is, just for animate nouns and not for inanimate masculines. Yeah. Any questions about this? OK. All right. OK. So I'm talking about lots of things that are language specific. And I just wanted to-- this is going to be a topic of conversation throughout the class. When we are looking at things, how much time are we spending looking at ways in which languages are different, and how much time are we spending trying to figure out the basic rules, the things that all languages have in common? Both of these are important things for linguists to do is try to understand both the variation and the core of common thread that all-- connects all these languages together. I just wanted to try to convince you that even in morphology, which is what we're starting with, there are some things that maybe are reliable cross-linguistically. So here's a Georgian verb and a Turkish verb. I don't know either of these languages so I had to pick examples that had a different verb at the bottom. So in Georgian, you've got "paint." And in Turkish, you've got "open." But otherwise, these verbs are the same. So the Georgian one means, "I will have him paint it." And the Turkish one means, "I had him open it." OK. They aren't the same. They're different tenses. But they're both about causing someone to do something to something. There, they have that in common. 
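Stepping back for a moment, the agglutinative/fusional contrast just discussed can be sketched with feature bundles. The Python sets and feature labels are illustrative choices; the segmentation of Turkish "ellerimde" ("in my hands") follows the lecture, and the Russian endings are the rough transliterations given above:

```python
# Agglutinative (Turkish): each bound morpheme is easy to cut off
# and carries one simple meaning.
turkish = [("el", {"hand"}), ("ler", {"plural"}),
           ("im", {"my"}), ("de", {"in"})]

# Fusional (Russian): one ending squashes several bits of meaning
# into one indivisible piece.
russian_endings = {
    "u": {"feminine", "singular", "accusative"},
    "a": {"feminine", "plural", "accusative"},
}

word = "".join(form for form, _ in turkish)
print(word)  # "ellerimde": clean cuts between morphemes

# One morpheme, one feature in the Turkish word...
assert all(len(features) == 1 for _, features in turkish)

# ...but three features fused into one Russian vowel, with no sub-part
# you can point to as "the part that means singular".
print(sorted(russian_endings["u"]))  # ['accusative', 'feminine', 'singular']
```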
And they have some other things. And they're different in other ways, too. So the Georgian verb is preceded by several prefixes. The Turkish verb is followed by several suffixes. But here's what they finally have in common. The red affix is merged after the blue affix. So the affix that means "cause" is the first thing that's merged with the verb in both of these languages. And the affix that means first person singular, "I," as the subject, is merged later in both of these. Now, in Georgian, they're both prefixes. In Turkish, they're both suffixes. They're not pronounced the same way, right? "Cause" in Georgian is "a." "Cause" in Turkish is "tir." But they do have that property in common, that the order in which you combine these morphemes is the same for these two otherwise fairly different languages. And that's a pretty reliable thing, so what's sometimes called derivational morphology, which includes things like all the morphology we were talking about, that converts things from nouns into verbs or from verbs into adjectives or whatever all else, and also things like causative, that change the number of participants in a verb, so by adding someone who is causing a verb to happen. That type of morphology tends to be merged earlier than what's sometimes called inflectional morphology, the kind of morphology that indicates things like who's doing what to whom and when. That's a moderately reliable fact. Or similarly, there are various places where a pair of expressions can differ in all kinds of ways, including whether you're looking at prefixes or suffixes. So here's Swahili for "you hit" and German for "you put." And abstracting away from the fact that one of them is hitting and the other is putting, in Swahili, those are prefixes, in German, they're suffixes. But the order in which everything is merged is the same. In English and Turkish, "in my hands," and the Turkish version of that, "ellerimde," the Turkish morphemes are bound, the English morphemes are free.
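The shared property here, that "cause" is merged before the first-person-subject affix in both Georgian and Turkish, can be sketched as a single hierarchical recipe linearized two ways. The forms "paint_root", "open_root", and "1sg" are placeholders, since the lecture only gives the causative affixes ("a" and "tir"):

```python
def build(root, affixes):
    """Merge affixes onto the root one at a time, innermost first.
    A later-merged affix lands farther from the root."""
    word = [root]
    for form, side in affixes:
        word = [form] + word if side == "prefix" else word + [form]
    return "-".join(word)

# Same merge order in both languages: CAUSE first, then the 1sg subject.
georgian = build("paint_root", [("a", "prefix"),      # cause, merged first
                                ("1sg", "prefix")])   # subject, merged later
turkish = build("open_root", [("tir", "suffix"),      # cause, merged first
                              ("1sg", "suffix")])     # subject, merged later

print(georgian)  # "1sg-a-paint_root": cause ends up closest to the root
print(turkish)   # "open_root-tir-1sg": cause again closest to the root
```

Whether the affixes surface as prefixes or suffixes, the derivational history is the same: the causative sits next to the root, and the subject agreement comes in on top of it.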
But the morphemes are being merged in the same order. So there's some hope for the idea that there is some core that these languages all share, and we want to try to understand what it is that determines the order in which you do your merge operations. If you thought I was going to answer that question for you today, then I'm about to disappoint you. But that's a question that we're going to want to keep in mind going forward. Yeah? AUDIENCE: Does this have anything to do with the fact that, at least in English, there's like a certain way that you can say-- like, if you have a list of adjectives describing a noun, you say them in a certain way. So you say, like, "the big bad dog," but not "the bad big dog"? NORVIN RICHARDS: Bad big dog. Yeah, so that's a very nice point. When you have a bunch of adjectives in English, they tend to go in a particular order. That's right. And actually, there's a very articulated order. It's got like five slots for the adjectives and the order that they go in. The other cool thing about it is that among languages that put their adjectives before the noun, if they have any rules for the order of the adjectives, that's the rule. So that's the only kind of rule that there can be. For languages that put their adjectives after the noun, if they have any rules for the order, there are two kinds of rules. So if you number the adjectives that go before the noun in English, if you say they go in the order 1, 2, 3, 4, 5, if you're a language that puts your adjectives after your nouns, there are only two kinds of languages like that. There are languages in which the order is 1, 2, 3, 4, 5, so you get noun, 1, 2, 3, 4, 5. So it'd be like you say "house big black," and you can't say "house black big." You say, "A little green Japanese carving knife," right? And in these languages, you would say "A knife little green Japanese carving." So you have the adjectives in the same order they'd be in English, but after the noun.
That's one kind of adjective-after-the-noun language. And the other kind is just the one that has the mirror image of the adjectives. So in the languages of the world, there are, I think, three kinds, which is kind of interesting. Yes? AUDIENCE: Why is it sort of right to say "big black house" but not "black big house?" NORVIN RICHARDS: Hey, good question, yes. Yeah. There are linguists working on that, yes. And that's actually basically her question, your question, from just a second ago, right? So there are conditions on the ordering of the adjectives. And yes, we want to try to understand what those are. All of you are eventually going to want to write a research proposal in which you try to tell me that you're going to figure something out. I can give you some things to read that are attempting to figure out the answer to that question. But yeah, that's a research topic. Other questions? Cool, all right. So yeah, this is just me saying again what I said. So what those trees have in common is that if A is higher than B in one tree, the same A is higher than the same B in the corresponding tree, where by "higher," we mean higher in the tree, added later in the series of merge operations. That's what we mean. Lots of questions, like the question that both of you asked-- what's going on with order of adjectives, or how come these things have to be merged in these orders in all of the trees that I'm showing. And I'm not offering to answer those questions for you today. These are some of the central questions of syntax, and we'll try to understand them eventually. OK, yeah? OK, so we started off by saying there are at least two imaginable kinds of lexicons if we're going to make a list of everything you know about your language, one that has in it entries like "teach," "teacher," "teachers," "teaching," and "mine," "miner," "miners," "mining," and so on, and another one that instead has-- so that first lexicon consists entirely of words. 
If you look at an actual dictionary, that's the kind of lexicon you're probably used to seeing. But we've been fooling with the idea that your mental lexicon is not organized that way. What your mental lexicon has in it is morphemes. So it has "teach," and "mine," and suffixes like "-er" and "-ing," and rules for how morphemes can be combined with each other, and what the result is, and what happens when morphemes combine with each other. That's the kind of model of how language is put together that we've been playing around with. Yeah? OK. All right. Yes, so just to give these theories names, nice nonjudgmental names, we're going to call them the wrong theory and the right theory. And evidence that the right theory is right, that we played around with-- for one thing, I asked you to take pity on the plight of the Nimborans. These are these people in Papua New Guinea whose verbs each have 27,000 forms. If we make their lexicons have every verb form in it, their lexicons are going to be very large. They will have no time to do anything other than sit around and remember verbs, whereas if we're willing to allow them to break words into morphemes and put those in their lexicons, then they will have lexicons of a reasonable size. And we saw lots of evidence that people manipulate morphemes, including the fact that you can apply morphemes to new words. So if I tell you I've invented a new thing, it's called a "wug," and then I show you another one, then you will call the two things "wugs." This is a classic experiment by Jean Berko Gleason, which she did with small children, and they did what you would expect them to do-- and so on. Yeah? Good? People are willing to believe in morphemes? Excellent. OK. I want to spend some time, then. The problem set that is now finally available to you has on it a morphology problem. So it gives you a bunch of data from a language. The language is Inupiaq, which is a language spoken in the Arctic, in Alaska, on the coast of Alaska.
And it asks you to go through and try to find all the morphemes and figure out rules for how they combine. We've already done a little bit of that kind of thing in this class, but I wanted to do another problem like this. And this problem is going to introduce some new issues and things for you to think about as you're doing morphology. So I want to go through these. These data are from a language called Lardil, which I think I've mentioned before. It's a language spoken in northern Australia which I've done field work on. It's a language that has inflection on its nouns. Its nouns get inflected for case. So they have suffixes indicating case. And if you're not sure what I meant by that, here's what I mean. Nouns have little markers on them telling basically whether they are the subject or the object of a verb. So if you're the subject of a verb, you're in the nominative case. And if you're the object, then you're in the accusative case. English does not have so much of this, except on our pronouns. If you study lots of other languages-- if you try to learn Russian, or Japanese, or Latin, or many other languages-- you've had to learn about case. Yeah? So Lardil has case. Here are a bunch of nouns in their nominative and accusative forms. So you've got "mela," which means "seawater." It also means "beer," I guess on the theory that they're both foamy things that make you sick if you drink too much of them. "Mela," "barnga," "thungal," "ketharr," and "miyar," those are the Lardil nouns in their nominative forms. And then you've got their accusative forms there, which I won't read to you. What's the accusative morpheme? AUDIENCE: "-n." NORVIN RICHARDS: "-n." Joseph? "-n," yeah. Oh dear, I'm going to have to do this writing-everything-twice thing again. I really need to learn how to manipulate the screens better. So the accusative suffix is "-n." Say it again, the accusative suffix is "-n." Does anybody find any examples that are a problem for that? Yes?
AUDIENCE: The ones that have "-in." NORVIN RICHARDS: Yeah, so some of them have "-in," right? Like "tree," and "river," and "spear," I think. What determines whether the suffix is "-n" or "-in"? Yes? AUDIENCE: The last letter. NORVIN RICHARDS: The last letter of the nominative, yep. AUDIENCE: "a" or not "a." NORVIN RICHARDS: Can you be more specific? What determines whether it's just an N or an I-N? AUDIENCE: If the noun ends in "a," then it ends with "-n." If it ends in not "a," then it's "-in." NORVIN RICHARDS: That's true. In all of these examples, it ends in an "a." AUDIENCE: Are they vowel versus consonant? NORVIN RICHARDS: Yeah, so we might bravely say that it's about vowels versus consonants, and we'd be right. But you're absolutely right. I haven't shown you that. So the fact is, after a consonant, you get "-in," and after a vowel, you get "-n." I'm going to say that again over here. After a consonant, you get-- sorry, after a vowel, you get "-n." And after a consonant, you get "-in." OK? Good. So far, so good. Here are some more nouns, in addition to the nouns you saw before. "Rain" is "wunda," "tip" is "belda," and "curve" is "dalda." And the accusatives are "wunin," "belin," and "dalin." That's unexpected, right, given our theory so far? What do you think is going on? Yes? AUDIENCE: If you remove the "-da" at the end, [INAUDIBLE]. NORVIN RICHARDS: So it's as though you're removing the "-da," right, at the end, yeah? Yes? AUDIENCE: I think it's more than just "-da," because you have "wunda" as opposed to "katha." NORVIN RICHARDS: Yeah, we do. We have a minimal pair. It was sneaky of me to put that in there. So there's a "wunda" that means a type of stingray. The accusative of that is "wundan." And then there's another one down here that means rain, and the accusative of that is "wunin." So here I am asking you, how do we get from the nominative to the accusative? Yeah?
AUDIENCE: Is "-da" a morpheme that turns everything else into the nominative? NORVIN RICHARDS: It would be nice, but no. Well, actually that's an interesting way to say it. There's a way to say what you just said that would be true. So maybe I shouldn't just say no. Yes? AUDIENCE: Is the verb actually something else, and they're both being modified to form the causative adjective? NORVIN RICHARDS: So I think you and you-- actually, I'm sorry. I shouldn't have spoken so quickly-- are on the right track. Look, here I am saying, how can we get from the nominative to the accusative, right? And the answer is despair, right? That's what that minimal pair is meant to do. We had the same thing in Polish, right? Where the word for "lye" and the word for something else were the same in the singular but different in the plural. So we convinced ourselves, all right, we can't predict the plural from the singular. But we can predict the singular from the plural. That's what we decided, right? So in Polish, we said some of these nouns underlyingly end in "g," some of them underlyingly end in "k." And there's a general rule that "g" becomes "k" at the end of a word. Can we do something similar here? Yes? AUDIENCE: Also, how do we know that "wunda" is "wunda," the stingray species? Is there some kind of exception to the rule? NORVIN RICHARDS: Could be that, could be that. Two things, it isn't. It's part of a general thing. But also, wait. You're absolutely right. Whenever we're looking at a data set, we could say to ourselves, well, you know what, maybe this is just an exception. Maybe I shouldn't try to account for this. But in this class, you should trust me not to do that to you. So I'm going to give you data sets in which you will be able to come up with rules that account for things. And in life, that shouldn't be your first move, probably, because-- I don't want to pick on you. It's a move you hear people make a lot. Hey, maybe this is an exception. 
But making that your first move is kind of a recipe for failing to make exciting discoveries, right? So if you're willing to exclude certain data, I'm just not going to account for that. Sometimes, that's the way to actually account for everything, is by being willing to recognize exceptions. But other times, what it does is stop you from realizing, no, the exceptions have a general character. We can understand what they all are. So it's worth like-- maybe that can be part of a strategy, corralling all the exceptions into one place and then looking at them all and trying to understand what's going on. In this particular case, the things that look exceptional right now are "rain," "tip," and "curve." We had this beautiful rule on the basis of the words that were up at the top. And now, these new data are breaking it, right? We want the accusative of "belda" to be "beldan." The fact that it's "belin" is weird. Is there a way to do the trick that we did for Polish, which was to say the plural is the useful-- is the thing that we can use to predict the singular, right? So we'll start with the plural and then we'll figure out how to make the singular. Can we play that game here? Yeah. AUDIENCE: Is there a case form now that we don't know, that is, you get something added onto that complement to nominative to form the accusative? NORVIN RICHARDS: So maybe that's a way to think about it. And I think this is more or less what several people have been saying. What do you wish the nominative of this was, given our rules? So we wish that they were "wun," "bel," and "dal," right? Yeah, so I'm just going to write that down, "wun," "bel," and "dal." And then if it were that, well, we would be able to get the accusative, and we would need to do something to get the nominative. Yeah, Vlada? AUDIENCE: I want to say something about the fact that all of them are one syllable. [INAUDIBLE] for whatever reason. 
NORVIN RICHARDS: I think that would be very attractive, so I think that is raising a good point. All of the other words that I've shown you, "mela," "barnga," "katha," "wunda," "thungal," "ketharr," "miyar," they're all two syllables long. These words, if there were words like "wun," "bel," and "dal," they would be our first Lardil monosyllables. Now, here's a fact about Lardil. It doesn't have monosyllables. All of its words are at least two syllables long. So apparently, I think Vlada is absolutely right. Apparently that's being enforced here. So indeed, these words-- a whole bunch of people said versions of this. Let me just channel what you all said-- these words at the end, "rain," "tip," and "curve," they have base forms that don't appear anywhere, in a sense. Their base forms are "wun," "bel," and "dal," and that's what you add the accusative to, by a totally regular rule. They end in consonants, and so the accusative is "-in." And then we need another rule that says, "Add -da to monosyllables," something like that. So if you're in danger of having a monosyllabic Lardil word, you add "-da" to it. Lardil doesn't like words that are only one syllable long. This is what's called a minimal word requirement. They're cross-linguistically quite common. There are a lot of languages in the world that don't like their words to be too short. OK? All right, good. So far, so good. Right, so what we want to do is posit underlying forms, which are often the same as the nominative, but not always. There are also these last three that have different forms, OK? Yes? AUDIENCE: Then, just to clarify, "wun," "dal," and "wunda" are all morphemes? NORVIN RICHARDS: Well, OK. "Wun" is a morpheme. "Wunda" up there, stingray species, is a morpheme. I don't know if we want to call this a morpheme or not. This is why I was reacting the way I was before. It doesn't mean anything, right? It's just there so the word can be long enough. Yeah, that's all it's for.
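The pattern so far can be put in a short sketch. This is a minimal Python sketch, not real Lardil phonology: treating each vowel letter as one syllable, and the five-vowel inventory, are simplifying assumptions of mine.

```python
VOWELS = set("aeiou")

def nominative(stem):
    # Minimal word requirement: a pronounced word needs at least two
    # syllables, so a bare monosyllabic stem gets "-da" added.
    # Counting one syllable per vowel letter is a simplification.
    n_syll = sum(ch in VOWELS for ch in stem)
    return stem + "da" if n_syll < 2 else stem

def accusative(stem):
    # The accusative is built on the underlying stem directly:
    # "-n" after a vowel, "-in" after a consonant.
    return stem + ("n" if stem[-1] in VOWELS else "in")
```

So nominative("wun") comes out as "wunda," while accusative("wun") is "wunin": the suffixed form is already two syllables, so no repair is needed there, and two-syllable stems like "mela" pass through untouched.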
So I think linguists would mostly not call it a morpheme. It's kind of like we said for Polish: if a word ends with a "g," the "g" becomes a "k." The change from a "g" to a "k" isn't a morpheme. It's part of something that Polish prefers about its words. It doesn't like them to end with "g." Lardil doesn't like words to be monosyllables, and this is its repair. It's how it fixes that problem. Yeah? AUDIENCE: I'm confused a little. NORVIN RICHARDS: I don't blame you. AUDIENCE: They don't want monosyllables as words, but they create monosyllable morphemes and build whole verbs on those two [INAUDIBLE]. NORVIN RICHARDS: Yeah, I totally see what you mean. So let me just repeat what you just said. These people don't like monosyllables, right? And yet their lexicon is full of monosyllables, right? It's as though, I don't know, somebody else gave them their lexicon, right? And then they're doing their best to improve it in the ways that will make them comfortable with it. Or another way to say it may be when we say they don't like monosyllables, they don't like pronouncing monosyllables. They don't mind monosyllables in their lexicon, but they're not willing to pronounce them that way. They have to do something to them, kind of like Poles don't mind words that end with "g" as long as they're in their brains, right? It's just saying them. They don't like doing that. They have to do something to them first. Nice point, actually. This is one of many places in linguistics where you get the feeling that languages are designed by committees which have different priorities, that there's one group of people who are making the lexicon and another group of people who are-- I imagine them taking reports from the first committee. And then they're like, "What? You gave us what? Look at all these monosyllables." Yeah? OK. So going further into Lardil then. Here are some more words.
The word for "fish" is "yaka," the word for "string" is "birrka," and the word for "head" is "lelka." And the accusatives are "yakin," "birrkin," and "lelkin." Is everyone appropriately upset by this? What would we have expected the accusative of "yaka" to be, the word for fish? AUDIENCE: "Yakan?" NORVIN RICHARDS: "Yakan," yeah. What should we do? Yeah, Joseph? AUDIENCE: Is there a set of consonants that you're not allowed to end the word with? NORVIN RICHARDS: Ah, so why do you ask? AUDIENCE: Well, they all end with an "a." NORVIN RICHARDS: So I mean, they don't, right? "Yaka" ends in a vowel and "yakin" ends in an "n," right? Yeah. AUDIENCE: Supposing that they have-- that the underlying form of "yaka" is "yak," but you can't end it with "k," so you have to put-- NORVIN RICHARDS: So these three, if we went from the accusative-- what are these three? "Yaka," "birrka," and "lelka." If we went from the accusative, if we started-- if we did the same trick we did with the last three and said, OK, the nominative, I don't understand why the nominative is what it is. But looking just at the accusative, I wish that the underlying form of "fish" were "yak," right? That would work. So our rule for accusatives would get you the accusative of "yak." It would be "yakin," and everything would be perfect. And then the question is, why isn't the nominative "yak"? To which the answer could be Vlada's answer: it's only one syllable long. But then why isn't the nominative of "yak" "yakda"? And Joseph had a suggestion about that. "Yak," "birrk," and "lelk" all end in a "k." So maybe I've got two general rules here. It's not just "add -da to monosyllables." It's "add -da to most monosyllables; add -a after k." I'll say the same thing over here. Add "-da" to most monosyllables. Add "-a" after "k." OK? Yeah, Joseph? AUDIENCE: Can you not have two stops in a row? NORVIN RICHARDS: You cannot have two stops in a row. AUDIENCE: You have to have a vowel?
NORVIN RICHARDS: Yup, that would break that. That's right. There aren't any Lardil words that have two stops in a row. Yeah, that's true. Yeah? So yeah, "yak," "birrk," and "lelk," OK? I think this is the last set of Lardil data I was going to show you, if I'm remembering right. No, it's not. There are some others, sorry. "Kanda" is blood, "nguka" is water, "ngawa" is dog, and "karda" is a kinship term. Lardil kinship terminology is extremely different from ours in ways that would take a long time to explain. But "karda" roughly means a woman's child or a man's sister's child. Yeah, no, I won't try to explain that to you. It's very interesting, but I refuse to talk about it right now. So the accusative of "kanda" is "kandun," which it shouldn't be, right? It should be "kandan," if it were going to be anything. Do we want the underlying-- I mean, we have a rule that adds "-da." Do we want the underlying form to be "kan?" No, right? If the underlying form were "kan," what would the accusative be? AUDIENCE: "Kanin." NORVIN RICHARDS: "Kanin," yeah, and it's not. It's "kandun." So we need another rule. What's the new rule going to say? Yes? AUDIENCE: Just make a guess. NORVIN RICHARDS: Yeah. AUDIENCE: You can't end it in "u," maybe. So you'd have to change everything that ends in a "u" to an "a" in the nominative. [INAUDIBLE] NORVIN RICHARDS: Right, so we're going to posit underlying forms-- that's nice-- we're going to posit underlying forms like "kandu," "nguku," "ngawu," "kardu." And then we're going to have a rule: change final "u" to "a." So underlying "nguku" gives you the accusative-- "nguku" takes the "-n," and you get "ngukun." But there's another rule that says, change final "u" to "a," which in the nominative changes "nguku" to "nguka." Yeah? OK, good. And then-- yup? All right. And then I think this is the last set of data I wanted to show you.
The word for "story" is "ngalu," the word for "boomerang" is "wangal," the word for "kookaburra" is "thalkurr," and the word for "umbilical cord" is "kundul," but the accusatives are "ngalukin," "wangalkin," "thalkurrkin," and "kundulkin." What's the underlying form of these? You guys all know the drill by now. It's like, take the accusative, subtract the accusative suffix. So what should the accusative of "umbilical cord" be? Sorry, what should the bare form of "umbilical cord" be? "Kundulk," yeah? "Kundulk." So we would expect "kundulk," "thalkurrk," "wangalk," and "ngaluk." And there's something that says, drop final "k." So we start with "kundulk," and we form the accusative off of that. But if it's nominative, we get rid of the final "k." Yes? AUDIENCE: What happened to "birrka?" NORVIN RICHARDS: What happened to what? AUDIENCE: What happened to "birrka?" NORVIN RICHARDS: I missed it again. AUDIENCE: Why is that not "birrka?" NORVIN RICHARDS: Why is-- AUDIENCE: I think it's one way. NORVIN RICHARDS: Oh, the word for string or whatever it was, "birrka," which started off as "birrk." And we asked, why aren't we dropping the final "k"? Yeah, you're asking exactly the right question-- or similarly, "ngalu." We just came up with a rule that changes final "u" to "a," right? But the word for story is not "ngala." It's "ngalu." Yeah? AUDIENCE: Do we have to specify that dropping the final "k" applies to underlying forms internal to swap it? NORVIN RICHARDS: So we might have to say something. We either have to work on our rules-- you're absolutely right-- and make them more specific than that. That's one kind of thing we could do. Yes? AUDIENCE: And I'm wondering if there's some kind of like declension system. Like, I'm thinking if like I had to study Lardil-- some kind of declension system. NORVIN RICHARDS: So we could do that for Lardil. We could say, you know what, there are nouns that do one of these, and then there are nouns that do another set of these.
So there are rules that apply to particular classes of nouns. We could absolutely say that, yeah. Here's another thing we could do, though. So here's-- just to stick to the case that you were worried about just a second ago-- What did I do with my chalk? I guess I took it over there. We've got words like "birrk," that means string, underlying "birrk." And then we've got words like underlying "kundulk." That means umbilical cord, right? And with "birrk," what we do is add "a" because-- so what we do is add "a." But with "kundulk," what we do is get rid of the final "k." So these two words are undergoing different types of rules, different sets of rules. Or similarly, we had a word, "kandu." Here, let's do blood, "kandu." And that, the "u," changed to "a." So we've got "kanda." But now we've got a word which is underlyingly "ngaluk." And what's happening there is that you're dropping the final "k." So you've got "ngalu." But you don't change the final "u" to an "a." So we have these rules that seem to apply to certain bodies of examples. And your suggestion, which seems attractive, is maybe these rules need a little bit of work. Maybe we need to get them to be more specific so they only apply to certain kinds of things. But here's another way to think about this, and I'll leave you with this. So here are the rules, some of the rules. So yeah, the word for "head" starts off as "lelk." And we add "a" because of rule two, so we get "lelka." But this is what we're asking, the kind of thing we're asking. You start with "lelk." Why don't you apply rule four to get "lel," and then maybe rule one, so you'd get "lelda"? Or you start with "ngaluk." You apply rule four, so you're getting "ngalu." Why don't you then apply rule three to get "ngala"? These are the kinds of questions that we're asking.
And one kind of response to this kind of problem that people have classically given is to not only have rules but to have the rules apply in an order-- that is, to have what's called rule ordering. It's as though there's an assembly line between one of these committees and the other. So rule two and rule three happen before rule four. So if you are starting with "lelk," the basic form for "head," you first apply rule two and get "lelka." Having applied rule two, which adds the "a" after "lelk," you're not in any danger of applying rule three or rule four. And rule three, never mind, but rule four, which would have gotten rid of a final "k," doesn't apply anymore because, well, there isn't a final "k" anymore. Or "ngaluk," which is the base form for story-- rule four happens after rule two and rule three. So rule four gets rid of the "k" and gives you "ngalu." And you can imagine now at this point, the rule three people are like, hey, give us that word. We want to get rid of that "u." But it's too late. Rule three has had its time. You don't get to apply it anymore. So this is one way of handling these kinds of problems, is to say these rules that change things, they're not just rules that change things. They're rules that apply in an order. There's a series of processes that you apply. And you have to apply them in the correct order, and not in the incorrect order, to get the result. Joseph? AUDIENCE: So it's better to do these as a series of rules that are applied not simultaneously, but sequentially, like an if/else statement. NORVIN RICHARDS: Right. AUDIENCE: And when you're done, you need to [INAUDIBLE]. NORVIN RICHARDS: That is a way to think about this, yes, a classic way. We'll talk about other ways to think about this kind of problem, but this is one. All right, let's stop there.
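Joseph's if/else analogy can be made concrete. The sketch below is a minimal Python version of the four rules applied in the order the lecture describes (rules one and two rescue monosyllables, rule three changes a final "u" to "a," rule four drops a final "k"). Counting one syllable per vowel letter is a simplification of mine, not real Lardil syllabification.

```python
VOWELS = set("aeiou")

def accusative(stem):
    # The accusative is built directly on the underlying stem:
    # "-n" after a vowel, "-in" after a consonant.
    return stem + ("n" if stem[-1] in VOWELS else "in")

def nominative(stem):
    # The nominative rules apply once each, in a fixed order;
    # each rule only sees the output of the rules before it.
    word = stem
    # Rules 1-2: rescue monosyllables (the minimal word requirement):
    # add "-a" after a final "k," otherwise add "-da."
    if sum(ch in VOWELS for ch in word) < 2:
        word += "a" if word.endswith("k") else "da"
    # Rule 3: change a final "u" to "a."
    if word.endswith("u"):
        word = word[:-1] + "a"
    # Rule 4: drop a final "k." If this exposes a final "u," rule 3
    # has already had its turn, so the "u" survives (ngaluk -> ngalu).
    if word.endswith("k"):
        word = word[:-1]
    return word
```

Run on the underlying forms from the lecture, this derives lelk -> lelka (rule two blocks rule four), ngaluk -> ngalu (not "ngala"), kandu -> kanda, kundulk -> kundul, and wun -> wunda, while the accusatives come out as lelkin, ngalukin, kandun, kundulkin, and wunin. Reordering the if-statements derives the wrong forms, which is the point of rule ordering.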
[MIT_24900_Introduction_to_Linguistics_Spring_2022 | Lecture_18_Semantics_Part_2.txt]

NORVIN RICHARDS: So we have already talked for a while about various kinds of ambiguity. And today, we're going to talk about ambiguity more. This is, if anything, the best time to talk about ambiguity, because we're now actually into semantics. The kinds of ambiguity we were talking about before were kinds of what's sometimes called structural ambiguity-- that is, sentences that had more than one meaning. The way we were talking about it, they had more than one meaning because it was possible to draw more than one tree for them. So the sentence that we spent a lot of time talking about was "I once shot an elephant in my pajamas," where the whole joke there was that "in my pajamas" could be attached in a couple of different places. It could either modify "the elephant" or it could modify the verb phrase "shot an elephant." And the upshot of that was, well, the pajamas could be on either one, and yeah-- ambiguity. Similar kinds of ambiguities in these other examples. But there's another kind of ambiguity that's the one I want to talk about today. People have claimed that this sentence is ambiguous-- "someone loves everyone." The claim is this can mean at least two things. It can mean-- let's get some lights on up here in the front. Oh dear, so many options. Stage left, stage right-- sure. Oh, I see. That's what I want. Two kinds of meanings for this that people have posited-- one meaning in which it means the following situation holds: everyone is loved. That is, for each person, there is someone that loves them. That's what's diagrammed in that first diagram up there, where the love relation holds between X and A and B-- X loves both A and B, and between Y and C, and between Z and both D and E. So that's a possible reading for the sentence.
And then there's another possible reading for the sentence-- and there are arguments about whether this is a different reading or not-- in which "someone loves everyone" means that there is a person who loves everyone. Maybe it's my grandma-- very loving person. She loves absolutely everyone. And so the love relation holds between X and absolutely everyone, including herself. So that's a kind of ambiguity people have claimed to exist. I don't want us to get too hung up on whether it does. There are clearer examples, maybe. Think about a sentence like "Everyone in this room speaks two languages." This is going to be a clearer example because we'll see that the availability of the two meanings can be affected by things that we can do. What's something this can mean? Yeah? AUDIENCE: For every individual, it's true that that individual speaks two different languages. NORVIN RICHARDS: Yeah, so it's possible that it means everyone in this room is bilingual. So maybe I speak English and Spanish, and you speak Ukrainian and Polish, and you speak Mandarin and Japanese. So everybody in this room is bilingual. It could mean that. What's another thing it can mean? Yeah? AUDIENCE: If you put together all the languages that everyone in this room knew, it would only be two. NORVIN RICHARDS: Right, exactly. So if you took a survey of everybody in this room and found out all the languages that they speak, there would be two things, two languages that we all have in common. Maybe we all speak English and we all speak Tagalog. Maybe some of you speak 12 languages, but two of them are English and Tagalog. That's another thing that it could mean. Yeah? AUDIENCE: I don't actually read the sentence that second way. If everyone in this room speaks two languages, as in collectively, everyone in this room speaks two languages. That just doesn't-- I don't read it like that. NORVIN RICHARDS: That's not the first reading you get for it. Yeah, I agree with you, actually.
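The two candidate readings of "Someone loves everyone" differ only in the order of the quantifiers, and the difference can be sketched in a few lines. This is a toy Python model; the people and the love relations below are invented for illustration.

```python
def every_some(people, loves):
    # "Everyone is loved by someone": for each y, some x loves y.
    return all(any((x, y) in loves for x in people) for y in people)

def some_every(people, loves):
    # "Someone loves everyone": some single x loves every y.
    return any(all((x, y) in loves for y in people) for x in people)

people = {"a", "b", "c"}

# Situation 1: each person loves only themselves. Everyone is loved,
# but no single person loves everyone, so the two readings come apart.
self_love = {(p, p) for p in people}

# Situation 2: grandma loves absolutely everyone, including herself.
# Here both readings are true -- whenever some_every holds (and there
# are any people at all), every_some holds too, which is part of why
# people argue about whether these are really two distinct readings.
people2 = people | {"grandma"}
grandma_love = {("grandma", y) for y in people2}
```

With the self-love relation, every_some is true and some_every is false, matching the first diagram's situation; with the grandma relation, both are true.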
So what you're claiming is that if I were to say "Everyone in this room speaks two languages, namely English and Tagalog," your interpretation would be that everyone in this room is bilingual in English and Tagalog. (Tagalog is a language from the Philippines.) It's a lot easier to get that reading, I agree with you, than it is to get the other reading, where it just means everyone in this room is bilingual. Yeah? AUDIENCE: I think I would have [INAUDIBLE] something to me in total. NORVIN RICHARDS: Everyone in this room speaks two languages between us. Yeah, you feel as though there could be modifications that you could make that would make that reading. Here's one modification you can make that makes that reading come out. You can passivize the sentence. So "Two languages are spoken by everyone in this room," I think has that reading really clear. There are two languages that we all have in common. It's possible that it can also mean the other thing, that it can also mean everyone in this room is bilingual, but at least can mean that. The fact that we can have this sensation that these two imaginable readings are maybe both available for both of these sentences, maybe not, but certainly one is more available than the other depending on whether the sentence is active or passive-- it suggests we want to be able to talk about this kind of difference in meaning between these sentences. So we're going to develop the tools to do that today. So in order to do that, we're going to need to develop a theory of a certain special kind of noun phrase-- what's called a quantifier. And to do that, we're going to start by talking about what noun phrases mean more generally. So here's a noun phrase-- refers to a person who's the TA for some of you. And if you were to ask, what does that noun phrase mean? Well, we talked about this a little bit. I had the example of there are various things that you could use to refer to me. 
You could refer to me as "Professor Richards" or as "Norvin" or "that so-and-so who gave me a C"-- and we're talking about that. So similarly, here's a phrase that if you ask what it means, you might expect that if we look this up in your mental lexicon, what we would find is a picture of this guy-- those of you who have him as your TA, at least. So if you don't have him as your TA, maybe you don't know who he is, but he's one of the TAs. And to say, for example, "Enrico Flor is an avid hangglider" is to say something like-- we ask what that means, it's going to mean something like-- we'll have to figure out the meaning of avid hangglider. What is it to say that someone is an avid hangglider? But when we figure out what that is, we'll have a list of all of the people who are avid hanggliders. And this claim is that list will have Enrico Flor's name on it. That's what that means. So that's a comparatively simple meaning for a noun phrase. Enrico Flor-- that noun phrase refers to that guy. What if I wanted to tell you, the 24.900 TAs are avid hanggliders. So again, we're going to have a list of the avid hanggliders. What does this sentence say? What does it mean? AUDIENCE: You first have to parse what the 24.900 TAs are and decompose that into a list of people and match them up with who are the [INAUDIBLE]. NORVIN RICHARDS: Yeah, exactly. That's a nice way to say it. So we're going to have a list of the avid hanggliders, and we're going to have a list of the 24.900 TAs. It'll have Enrico Flor, and it'll have Yash Sinha, and it'll have all four of the TAs. And what this says is if you look at that list of avid hanggliders, it'll have all four of those names on it. I'm trying to lure you into a false sense of security here. So far, this should seem pretty simple. So yeah, there's a list-- set. I've made it a set here. It's a set that contains Enrico and Peter and Yash and Anton. And what we're saying is that set is a subset of the set of avid hanggliders. 
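The subset claim for "The 24.900 TAs are avid hanggliders" can be checked mechanically. Here is a small Python sketch; the extra names and the hangglider facts are invented for illustration, not from the lecture:

```python
# "The 24.900 TAs are avid hanggliders": true just in case the set of TAs
# is a subset of the set of avid hanggliders.
tas = {"Enrico", "Peter", "Yash", "Anton"}

# Invented membership for illustration: the four TAs plus one more person.
avid_hanggliders = {"Enrico", "Peter", "Yash", "Anton", "Mary"}

claim = tas <= avid_hanggliders  # Python's subset test
print(claim)  # True: every TA is on the hangglider list
```

If any one TA were missing from the hangglider set, the subset test, and so the sentence, would come out false.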
What set does "every Italian"-- Enrico Flor is Italian-- what set does "every Italian" refer to, if we're going to let noun phrases refer to sets? Yeah? AUDIENCE: Either every person such that that person has Italian citizenship or Italian ancestry? NORVIN RICHARDS: Yeah, so you're being very careful. But yes, so somehow we'll have to define what we mean by Italian. You're right. That's right. And then we'll end up with a list of Italians. And then that'll be the set. So similarly, if you say "Every Italian is an avid hangglider," what we mean is it's kind of like "The 24.900 TAs are avid hanggliders." Everyone in this list is also in this list. Everyone in this set is also in this set. That's what that means. So "every Italian" will have a list of Italians-- that's how the list could start. And you're saying everybody in that list is an avid hangglider. OK, so far, so good. How about "no Italian"? What set is that? Yeah? AUDIENCE: It's like if you took the set of all people, but you took out all of the Italians. NORVIN RICHARDS: Subtracted the Italians. AUDIENCE: It would be the partition of all people who are not Italian. NORVIN RICHARDS: But if it means that-- no, that's a nice idea. So we'll take the set of people, and we'll subtract from it all of the Italians, everybody who was on the list of Italians. But then what is "No Italian is an avid hangglider" going to mean? Yeah? AUDIENCE: Just means that of the list of people that you identify as Italians, none of them will be found in the list of avid hanggliders. NORVIN RICHARDS: So we want it to mean that. But if it meant what Raquel-- if "no Italian" meant what Raquel wanted it to mean-- I'm sorry, Raquel, I'm picking on you because you were smart enough to make a suggestion-- it would mean everyone who is not Italian is an avid hangglider. And that's not what "No Italian is an avid hangglider" means, I think. Sorry, what were you going to say? AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Say that, yes.
Yeah, I'm sorry, I should start calling on you guys instead of-- yes, Raquel? AUDIENCE: If there is an avid hangglider, it has to be part of the subtracted-- like the set that doesn't have Italian-- like every single one must, but if there is going to be one, it better be in that. NORVIN RICHARDS: All the avid hanggliders have to be in the complement set of the set of Italians, something like that. Yeah? Yeah, Joseph? AUDIENCE: The set of Italians and the set of hanggliders-- the intersection is a null set. NORVIN RICHARDS: Yeah, so I think what we're getting to-- and I think you said this more or less the same way-- is this. For every other sentence we've talked about, when we said "The 24.900 TAs are avid hanggliders," we said, yeah, there's a set of 24.900 TAs and there's a set of avid hanggliders, and the first set is contained in the second set. That's what that means. And "Every Italian is an avid hangglider"-- there's a set of Italians and a set of avid hanggliders, and the first set is contained in the second set. But "no," as in "no Italian," makes you have the sets interact with each other in a different way. It says this first set is not contained. Nothing in this first set is contained in the second set. So this is a popular way of talking about the meanings of what are called quantifiers-- words like "every" and "no." So it's not the null set. It's not a set containing no Italian. A popular way of talking about the meanings of these kinds of expressions-- these are called quantifiers-- is to say they do fancier things with sets than just say, oh yeah, this set is inside this set. They allow you to do more interesting things with set interaction than just say this set is a subset of that set. That's what they do. That's what they're for. Quantifiers have a bunch of interesting properties. They're weird in other ways too.
So, for example, there is something called the Law of Contradiction. The Law of Contradiction says if you take two predicates that are contradictory, like "be inside" and "be outside," if you join me in imagining that it's not possible to be both inside and outside-- and I can see some of you thinking of alternative ways of thinking about the world in which-- but suppose I'm halfway inside and halfway outside. Which am I? I'm in the Department of Linguistics and Philosophy, so I'm used to hearing people talk like that. But please stop. Just imagine that you're either inside or you are outside and that's all. And imagine that Paul is a single person. There's only one person on Earth named Paul. So then "Paul is inside and Paul is outside" cannot be true. That's the Law of Contradiction. So if you have two predicates that are opposites of each other and cannot both be true of a single person, then if you apply them to a single person, the sentence has to be false. That's what that first example is. It's an example of the Law of Contradiction. But there are quantifiers like "several Americans," which just flagrantly violate the Law of Contradiction. So "Several Americans are inside and several Americans are outside"-- fine. No problem. And you don't have to play games with whether you're halfway inside or halfway outside or you're inside the building but you're outside the campus or something. Just forget about all that. There are also quantifiers that fail what's called the Law of the Excluded Middle. The Law of the Excluded Middle says if you have two predicates like "be under 6 feet tall" and "be over 5 feet tall," then a sentence like "Takashi is under 6 feet tall or Takashi is over 5 feet tall" has to be true. We don't have to measure Takashi to find out whether that's true because the set of people who are under 6 feet tall and the set of people who are over 5 feet tall overlap. If you have both of those sets together, they cover all people.
Any height that someone is, they are at least one of those two things, possibly both. Someone who's between 5 feet tall and 6 feet tall is both. And someone who's under 5 is under 6, and someone who's over 6 is, well, over 5. That sound right? These are two predicates such that here's 5 feet tall. Here's 6 feet tall. We're talking about people who are under 6 feet tall and also the people who are over 5 feet tall. Well, we've covered everybody. So we don't have to look at Takashi. We don't have to know who Takashi is. That first sentence is true-- Takashi is under 6 feet tall or Takashi is over 5 feet tall. But the second sentence is false, or at least doesn't have to be true. So "All Japanese men are under 6 feet tall or all Japanese men are over 5 feet tall." That could be false. Can somebody name a context in which that would be false? I'm asking you to do it because I can't do math while standing up here. Yeah, Joseph. AUDIENCE: There happened to be a particularly tall-- someone who's over 6 feet, or particularly short under 5 feet [INAUDIBLE],, and the population of Japanese men are [INAUDIBLE]. NORVIN RICHARDS: Yeah, there we go, good. So as long as you have at least one Japanese man who's over 6 feet tall and at least one Japanese man who is under 5 feet tall, the rest of the Japanese men can all do whatever they want. The sentence will be false. Good. So quantifier phrases, quantifiers, quantification expressions, like "all Japanese men" or "some Italians" or "no Italians," fail these otherwise reliable generalizations about sentences. So expressions like "no Turks" or "several Americans" or "all Italians" or "most Ukrainians" don't refer to sets of people. What do they mean? Well, what they do is they-- already said this-- these words at the beginning, like "no" or "several" or "all" or "most," are doing cool set theoretic things with the set that they are combining with. 
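This failure of the Law of the Excluded Middle can be verified in a toy model. In the Python sketch below the heights are invented: one man over 6 feet and one under 5 feet are enough to make both halves of the quantified disjunction false, even though the unquantified disjunction is true of every individual.

```python
# Two predicates that jointly cover every possible height (in inches):
def under_6(h):  # under 6 feet
    return h < 72

def over_5(h):   # over 5 feet
    return h > 60

# Law of the Excluded Middle for an individual: true whatever the height is.
assert all(under_6(h) or over_5(h) for h in range(36, 96))

# But the quantified disjunction can be false. Invented heights: one man
# at 6'2" (74 in) and one at 4'11" (59 in) falsify both disjuncts.
heights = [74, 59, 68, 70]
disjunct = all(under_6(h) for h in heights) or all(over_5(h) for h in heights)
print(disjunct)  # False
```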
So the set of Turks or Americans or Italians or Ukrainians, they're causing that set to interact in interesting ways with a set that's determined by the rest of the sentence. In order to talk about this, let me do a quick review of set theory. You're not going to need very much set theory in order to do this. I tell you this because I don't know very much set theory, and I can do this, so I'll just show you what I'm going to show you here. Here are two sets, pi and phi-- a Venn diagram. It's hopefully not too unfamiliar. Two sets, pi and phi, which have an overlap that contains D and F, and then there are also things that are only in pi and not in phi, like A and B. And there are some things that are only in phi and not in pi, like C and E. That sound right? People are all familiar with Venn diagrams. We say that D and F is the intersection of pi and phi, and that little hoop is the thing that you use to express intersection. And we say that A, B, C, D, E, and F is the union of pi and phi, where I have, for some reason, used the word "of" twice. I'll try to fix that before I put up the slides. And the little thing that looks more or less like a U is the expression for union. We also say that A, B, and D is a subset of pi, and there's a symbol that's used sometimes for subsets. None of this seems alarming-- subsets, union, intersection? OK, so here's the popular answer for quantifier meaning, which we've now already gone through. When you say something like "All Americans eat junk food," you are asserting a relation between the set of Americans and the set of junk food eaters. You're saying something about what happens when you intersect those two sets. And depending on what quantifier you're using, you're making different assertions about the relation between these two sets. So "all" says set number one is a subset of set number two. What does "some" say? Yeah, [INAUDIBLE]? AUDIENCE: The intersection is not a null set.
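The Venn diagram operations translate directly into Python's set operations. Here are the same pi and phi from the slide:

```python
pi = {"A", "B", "D", "F"}   # members of pi
phi = {"C", "D", "E", "F"}  # members of phi

# sorted() is used so the printed order is predictable.
print(sorted(pi & phi))       # intersection: ['D', 'F']
print(sorted(pi | phi))       # union: ['A', 'B', 'C', 'D', 'E', 'F']
print({"A", "B", "D"} <= pi)  # subset test: True
```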
NORVIN RICHARDS: Yeah, the intersection is nonempty. So there are things that are in both sets. "No" says the intersection of set one and set two is empty-- "No Americans eat nattoo"-- are people familiar with nattoo? Have we talked about nattoo in this class? Yes, some of us are from nattoo cultures. Nattoo is a Japanese food. It's very good, and it's good for you. It's high in protein, but it's one of those kinds of food-- I think a lot of cultures have something like this. It's a kind of food one of the points of which is to feed it to outsiders so that you can watch and be amused. So this happened to me when I was living in Japan. We were visiting the family of a Japanese friend of mine, and the mother made me up a big dish of nattoo and they put it in front of me, and the entire family gathered around to watch. So you see-- what is he going to do? Nattoo is fermented soybeans. So the soybeans are covered in a thin layer of slime, and then you mix this up with rice and often mustard and soy sauce. It's really tasty. It's really good, but it's also very messy. So if you're like me and you have facial hair, by the time you're done, your entire face is covered in nattoo slime, so you have to go take a shower after you eat. But it's good for you. Tasty stuff. So it's not true that no Americans eat nattoo, but it's grammatical. And what it means is-- you're looking at the set of Americans and the set of nattoo eaters, and you're saying the intersection of those sets is empty. There's nothing that's in both of these sets. OK, I don't know whether we have a food like that in our culture, a food that you feed to people in order to be amused by their attempts to eat it. This is something to think about. I can tell you about the Tagalog version of it, though. I'm sorry, I'll start talking about sets again in a second. The Tagalog version of this is something called balut. Have I talked about it in this class? You've had balut too?
You're an adventurous person. AUDIENCE: I have not had [INAUDIBLE]. NORVIN RICHARDS: Not balut, OK. AUDIENCE: --a whole other level. NORVIN RICHARDS: Yeah, balut is a duck's egg that has been allowed to be fertilized. It's a popular food in the Philippines, so people sell it in the street. You have balut sellers who walk around saying, balut [NON-ENGLISH], which means "Balut strengthens your knees." That's apparently the standard thing you say. And balut is a duck's egg that's been fertilized and then hardboiled before it hatches. So you've got a hardboiled duck's egg with a duck embryo inside, which you're supposed to eat. When I was living in the Philippines, my host brother got me a balut. And we were in a dark room. He handed me the balut, and I went to turn on the light, and he was like, no, just eat it. And I unfortunately turned on the light, so I chickened out. I looked at it, and there's a little duck looking up at me. Pretty weird. So quantifiers-- so that's their version of that. Quantifiers, then-- quantifiers like "all" or "no" or "three"-- are saying things about the interactions of the two sets. So in a sentence like "All Americans eat nattoo," where the first set is the set of Americans and the second set is the set of nattoo eaters, "all" says the first set is a subset of the second set. "Some" says the intersection of these sets is nonempty. "No" says the intersection of these sets is empty. "Three" says, if you look at the intersection of these sets, you will find three things. It has cardinality three. That's the kind of meaning that a quantifier has. If you're already familiar with set theory, it's obviously interesting to try to think about what kinds of relations between sets quantifiers can state. So mathematicians can get sets to tangle with each other in all kinds of entertaining ways. I mean, there are all kinds of things you can get sets to do.
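The four quantifier meanings just listed can be written as two-place functions on sets. A Python sketch, with the people and their eating habits made up for illustration:

```python
# Quantifiers as relations between two sets: set1 comes from the noun
# phrase, set2 from the rest of the sentence.
def all_q(set1, set2):    # "All As B": set1 is a subset of set2
    return set1 <= set2

def some_q(set1, set2):   # "Some As B": the intersection is nonempty
    return len(set1 & set2) > 0

def no_q(set1, set2):     # "No As B": the intersection is empty
    return len(set1 & set2) == 0

def three_q(set1, set2):  # "Three As B": the intersection has cardinality 3
    return len(set1 & set2) == 3

# Invented model: four Americans, three of whom eat nattoo.
americans = {"Ann", "Bob", "Cal", "Dee"}
nattoo_eaters = {"Ann", "Bob", "Cal", "Ken"}

print(all_q(americans, nattoo_eaters))    # False: Dee doesn't eat nattoo
print(some_q(americans, nattoo_eaters))   # True
print(no_q(americans, nattoo_eaters))     # False
print(three_q(americans, nattoo_eaters))  # True: Ann, Bob, Cal
```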
It turns out that natural language quantifiers are what's called conservative, which means that you can always replace set number two with the intersection of set number one and set number two and get the same meaning. That is to say-- maybe I'm about to do this on the slide, but here, we'll do it on the board-- if this is set number one and this is set number two, I've been talking about quantifiers as though they tell you something about this set and this set. But all of the quantifiers can be stated in terms of this set, set number one, and this area here, the intersection of set number one and set number two. This part of set number two is always irrelevant to the meaning of a quantifier. That's a fact about natural language quantifiers. And it's not a necessary fact. So here's an example. If I say "All opera singers smoke," which I believe to be false, but anyway, I'm making a claim about the relationship between the set of opera singers and the set of smokers. I'm saying that the set of opera singers is completely contained in the set of smokers. But that's essentially the same thing. It is the same thing as saying that all opera singers are opera singers who smoke. That is, opera singers are all in the set of smoking opera singers. We don't care about smokers who are not opera singers when we are thinking about the meaning of "All opera singers smoke." So if I tell you "All opera singers smoke" and you begin to talk to me about someone who smokes and is not an opera singer, that may be fun to talk about, but it's not relevant to the truth of my claim. The truth of my claim is only about the set of opera singers and the set of smoking opera singers. It says that the set of opera singers is completely contained in the set of smoking opera singers. Does that make sense? So if I say "All opera singers smoke" and you tell me, "Oh no, Arnold Schwarzenegger smokes, and he's not an opera singer," you haven't contradicted me. It doesn't matter.
You can imagine quantifiers which would not be conservative. So I've just made one up. Here's a quantifier, "glorp." It says that if you add up all of the things in set number one and set number two-- so the union of set number one and set number two-- you get cardinality three. I just made up glorp. I'll give you an example. It would be true, in this picture, that glorp circles are red because if you add up the number of circles and the number of red things, the total is three. There are two circles and one red thing. So if there were a quantifier glorp, then you could say that glorp circles are red in this picture. There is no quantifier glorp, not in English and not in any language on Earth as far as we can tell. So quantifiers don't ever do this kind of thing. And this would be a nonconservative quantifier because in order to evaluate glorp, you would have to look not just at the part of set number two that's intersecting with set number one, but also at the part of set number two that I've hatched out here, the part that's not part of set number one. So this is a nonconservative quantifier, and there aren't any. That's the point. OK, good. So just practice some more-- so "All Brazilians love soccer." There's this danger when you do simple sentences with quantifiers in them that you will find yourself trafficking in stereotypes. This is why I like to use examples like "All opera singers smoke," which there isn't a stereotype about as far as I know. So, of course, it's not true that all Brazilians love soccer. It might almost be true. It says there's a relation between the set of Brazilians and the set of people who love soccer-- namely, the first set is a subset of the second set. If you look at the set of people who love soccer, inside that set, you will find all of the Brazilians, the people in the Brazilian set. Let's talk a little more about how we get these sets, just to be slightly more formal about it.
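Conservativity can be tested directly: a quantifier Q is conservative if Q(set1, set2) always equals Q(set1, set1 ∩ set2). In the sketch below, on a toy version of the circles-and-red-things picture, the real quantifiers pass the test and the invented "glorp" fails it:

```python
# A quantifier is conservative if restricting set2 to its overlap with
# set1 never changes the verdict.
def all_q(s1, s2):
    return s1 <= s2

def no_q(s1, s2):
    return not (s1 & s2)

def glorp(s1, s2):  # made-up: "the union of the two sets has cardinality 3"
    return len(s1 | s2) == 3

# The picture from the slide, as a toy model: two circles, one red thing,
# and the red thing is not a circle.
circles = {"c1", "c2"}
red_things = {"r1"}

# "Glorp circles are red" is true here: |{c1, c2, r1}| == 3.
print(glorp(circles, red_things))            # True
# But restricting set2 to the intersection flips the verdict,
# so glorp is not conservative:
print(glorp(circles, circles & red_things))  # False

# "All" and "no" give the same verdict either way on these sets:
print(all_q(circles, red_things) == all_q(circles, circles & red_things))  # True
print(no_q(circles, red_things) == no_q(circles, circles & red_things))    # True
```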
So what we'll say is the set of Brazilians-- maybe that's not such a mystery. So "All Brazilians love soccer"-- you're taking the set from the rest of the noun phrase. So in "All Brazilians love soccer," the first set is just the set of Brazilians, the thing that comes after "all" that's part of the noun phrase there. If it were more extensive than just Brazilians, then that set might change. So if I said something like "All female Brazilians love soccer" or "All Brazilians from São Paulo love soccer," we'd have a different set, Brazilians from São Paulo. And then the second set-- what we're doing is we're taking the set of things x such that x loves soccer. We're getting that set by replacing the quantifier "all Brazilians" with a variable. So we start off with "All Brazilians love soccer." We're going to replace "all Brazilians" with a variable called x. So x loves soccer. So the second set is the set of x that have this property: x loves soccer. This is just an attempt to do very slightly more formally what I've been kind of breezing through in the preceding slides. But does any of this seem alarming or disturbing? Is it causing unhappiness? Have you all had your nattoo this morning? This is good-- maybe I should bring some nattoo and I can stand around and watch. It would be revenge. I will not bring a balut to class. Not sure where I would get one. So it's worth doing this slight formalization of what we've been doing because, well, I have been carefully giving you examples in which the quantifier is a subject, but, of course, quantifiers don't have to be subjects. You can say things like "Soccer bores all Americans." Here, the first set is going to be the set of Americans. What's the second set? So "Americans" is a subset of what set? Again, what we do is-- sorry, go ahead. AUDIENCE: X such that soccer bores x? NORVIN RICHARDS: Yeah, x such that soccer bores x.
So we take this sentence, we replace "all Americans" with x, and now we have the set of x such that soccer bores x. X doesn't have to be in subject position. That's the only point here. That can be anywhere. So that's how we'll do the meaning of quantifiers. So it says "Americans" is a subset of the people whom soccer bores, the x such that soccer bores x. So the quantifier has been replaced with the variable. So now we can get back to the ambiguity that I started us off with, and I will try to do this slowly and carefully. What we're going to see is that a way of thinking about the ambiguity that I started this off with is to think, yeah, you have these two quantifiers that are doing these operations involving the formation of sets. So in all of the sentences I've given you before, there's only been one quantifier, so it forms these sets and asserts a relation between them. But in a sentence like this, where there are two quantifiers, well, you're going to need to perform that kind of operation twice. And the ambiguity just has to do with the order in which you perform the operations. So do you do the operations for "some child" first and then the operations for "every puppy," or do you do the operations for "every puppy" first and then the operations for "some child"? So if I remember my own slides correctly, I think what we're going to do now is work fairly slowly and painfully through what happens if you do the two things in that order. But I'm telling you this now in advance to give you some hope that you will have an understanding by the time we're done of what the heck is going on here. It's just what order do you do the operations in. That's why there's ambiguity here. You have these two quantifiers and you have options about which one to interpret first. So if you interpret "every" first, then you're saying the set of puppies is a subset of the set of things that some child loves.
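The replace-the-quantifier-with-a-variable step is essentially a set comprehension: collect every x in the domain for which the rest of the sentence is true. A toy Python version, with the individuals and the boredom facts invented for illustration:

```python
# "Soccer bores all Americans": the second set is {x : soccer bores x},
# built by replacing "all Americans" with a variable x.
domain = {"Ann", "Bob", "Cal", "Dee"}    # invented individuals
bored_by_soccer = {"Ann", "Bob", "Cal"}  # invented facts

def bores(thing, person):
    return thing == "soccer" and person in bored_by_soccer

set2 = {x for x in domain if bores("soccer", x)}  # the x such that soccer bores x

americans = {"Ann", "Bob", "Cal"}
print(americans <= set2)  # True: "Soccer bores all Americans" holds in this model
```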
And then, if you interpret that part, you're going to say, what's the set of things that some child loves? Well, it's the set of things such that the intersection of the set of children with the set of things that love them is nonempty. So if we interpret "every" first, then the sentence means the set of puppies is a subset of the set of things such that the intersection of the set of children with the set of things that love those things is nonempty-- whew. If we do the operations in that order we get that reading. That is, we get the reading the set of puppies is a subset of the things that some child loves. Some child loves many things, possibly, and among those things is the set of puppies-- so every member of the set of puppies is such that there is some child that loves it. And I think I have a picture of that now-- yeah, there. So doing the operations in that order, we get this reading. Every member of the set of puppies has this property-- there is some child that loves it. That's a reading the sentence can have, and we can get this reading by doing the interpretation of "every puppy" first. That's one reading the sentence can have. So that's the reading that's pictured here, where there are many puppies and many children, and every puppy is loved by at least one child, possibly more than one. There are also children who love more than one puppy. Children are not monogamous when it comes to puppies. What if you interpreted "some child" first? Well, then you'd be saying take the set of children and the set of things that love every puppy. That intersection is nonempty. So take a deep breath because we're about to interpret the second part of that, but even before we interpret the second part of that, maybe this is the point at which things are as coherent as they're ever going to get. What this means is there is at least one child that loves every puppy. There's at least one child who's very fond of puppies.
That's the other kind of reading we were saying this kind of sentence could have. So we next interpret that. So the intersection of the set of children and the set of things such that the set of puppies is a subset of the things that that thing loves is nonempty. And to put that back in English, it means there is at least one child x such that the set of puppies is a subset of the things that x loves. Or to put it another way, there is at least one child such that all puppies are loved by them. So early in this-- maybe one of the first things I said today was a sentence like "Some child loves every puppy"-- or I think the example back then was "Someone loves everyone"-- people have argued that it's ambiguous, that it can mean either there is one person who loves everyone or everyone has someone that loves them. And these are both meanings that the sentence can have. And I switched over to people and languages because our intuitions about the difference between the two meanings are a little sharper for that kind of example. Here, I'm showing you some mechanics that we can use to get this ambiguity, and the mechanics involve allowing yourself to do the operations for interpreting the quantifiers-- when you have more than one quantifier in a sentence, you're allowed to interpret them in either order. So you're allowed to do the operations for the subject first or the operations for the object first, and depending on which of those things you do, you get these two different readings for the sentence. So far so good? Sorry, this is a lot to deal with on a Tuesday morning through a mask. So here's the tree for that sentence "Some child loves every puppy." And we've said that's an ambiguous tree. So why is the tree ambiguous? What I've now said a couple of times-- and who knows, this could be true-- is yeah, you've got these two quantifiers. You've got to perform operations that form these sets.
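The two orders of interpretation correspond to two nestings of set operations. In the Python sketch below, on a made-up model with two children and two puppies, the "every > some" reading comes out true while the "some > every" reading comes out false, which shows the two readings are genuinely different:

```python
# Toy model for "Some child loves every puppy" (all facts invented).
children = {"Amy", "Ben"}
puppies = {"Rex", "Fido"}
loves = {("Amy", "Rex"), ("Ben", "Fido")}  # each puppy loved by a different child

# Surface scope, "some > every": there is one child who loves every puppy.
some_over_every = any(
    all((c, p) in loves for p in puppies) for c in children
)

# Inverse scope, "every > some": every puppy is loved by some child or other.
every_over_some = all(
    any((c, p) in loves for c in children) for p in puppies
)

print(some_over_every)  # False: no single child loves both puppies
print(every_over_some)  # True: each puppy has a child that loves it
```

Swapping which quantifier is evaluated in the outer loop is exactly the order-of-operations choice the lecture describes.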
Part of the process of interpreting these quantifiers involves forming these sets, and there's nothing in particular telling you what order to perform those operations in. And so you can interpret them in either order, and so you get ambiguities. That's the way I've been talking. That turns out to be a lie, which is kind of interesting. We have lots of good evidence, and I'll show you some of it, that we don't just get to freely choose the order in which we interpret quantifiers. But what really happens is that there is another kind of movement operation. So we've left syntax behind, and maybe some of you were hoping we wouldn't have to see things move anymore, but actually, in semantics, there's also movement going on, possibly movement of an even sneakier and weirder kind. There's lots of evidence that the reason that the sentence is ambiguous is that there's an operation which is optional that takes the object "every puppy" and moves it to a position above "some child" in the process of interpreting the sentence. Now there are several alarming things about this movement. All of the movements that I've shown you up until now-- so when I first tried introducing you to movement, I think I was showing you things like "Mary devoured a pizza." I was saying yeah, look, the pizza absolutely has to be here. You can't say "Mary devoured." That's out. So "devour" selects for a sister-- it absolutely has to have a sister. And moreover, its sister absolutely has to be right next to it. So if I put an adverb here, it gets bad. So "Mary devoured quickly a pizza"-- that's no good. So we know something about the properties of "devour." It absolutely has to have an object, and the object absolutely has to be right after it. It cannot be anywhere else. And then I said, wait, what about "What did Mary devour?" And we ended up deciding that "what" is starting off here and it's moving up to here. And that's why it's over there.
So we said the process of producing a sentence like "What did Mary devour?" involves "what" starting off exactly where it should-- as the sister of "devour," right next to "devour"-- and then there's this other process that doesn't care about what "devour" wants. This process of wh- movement takes "what" and moves it to the beginning of the sentence, as we saw. Some languages have this and others don't. I was looking at WALS the other day. It turns out roughly a third of the world's languages have this, and English does. So you take "what" and you move it to the beginning of the sentence. So every kind of movement that we've seen up until now has been like this. It's been part of an explanation for why you pronounce things in different places than you would expect. You would expect "devour" to have a sister, to have an object that would be right after it. And "what" isn't there, and it's because of a movement operation. I'm going to give you some evidence now that this ambiguity shows up because you have the option of moving "every puppy" to a position to the left of "some child," higher than "some child" in the tree. But this had better be a different kind of movement because the sentence is ambiguous even if you don't see the noun phrases going anywhere else. This is a type of movement called covert movement, and you should all be very suspicious of this because basically, it's like the-- what's it like? It's like the dark matter that physicists have to posit in order to explain why most of the matter in the universe doesn't seem to be there. So there's all this undetectable matter that's out there. We just haven't detected it yet. Somehow, when physicists say things like this, we believe them because they can build bombs and things. But I'm about to say something kind of similar to that. We have some arguments, really good ones-- I'll try to show you some of them, anyway-- that this thing can move.
And the fact that when it moves, you still pronounce it as though it hadn't moved, this is really interesting, but what it shows is that this is dark matter. It is moving. It's just not moving in a way that changes the order of the sentence, order of the words. All of you are making the appropriate face, which is a very skeptical one. I mean, some of you are wearing masks, so I can only see your skeptical eyebrows and eyes, but I appreciate that. Let me show you some arguments. First, there are some languages-- yeah, so here's another ambiguous example. "Most people ate two cakes." That can mean two things. It can mean either-- what's one thing it can mean? Softening you up for the next set of slides. You don't have to use x's and y's and things. Just tell me what it seems to mean. Yeah? AUDIENCE: Most of the people in the room ate two cakes for each person. NORVIN RICHARDS: For each person. So there were a whole bunch of cakes, and there was a free for all. People could have as many cakes as they wanted, and we were keeping track of how many cakes people wanted, and most people ate two cakes. Some people ate three. There was this Richards guy who ate 12. And then some people didn't eat any. But most people ate two. That's one thing it can mean. What's another thing it can mean? Yeah? AUDIENCE: There are a total of two cakes from which most people have eaten. NORVIN RICHARDS: Yeah, so there were various cakes out there, maybe really big cakes that you could eat a slice of, and we were finding out what the most popular cakes were, and most people ate two cakes, the vanilla frosted and the chocolate frosted, and then there were a couple of other cakes that were not as popular. So concentrate on those. It can kind of mean that. Some of you were raising your eyebrows at that. But it can sort of mean that. 
So one reason to take seriously the idea that this ambiguity comes from an optional movement operation which you can't see is that there are languages in which you can see it. So you remember how I said English has wh- movement and there are languages out there that don't, languages like Mandarin or Japanese or Chaha or whatever? Similarly, this operation that takes one quantifier and moves it past another quantifier-- you don't get to see it in English, but you do get to see it, for example, in Hungarian. So in Hungarian, the Hungarian translations for "Most people ate two cakes" have two word orders, each of which means only a single thing. So you either say, in Hungarian-- I don't speak Hungarian and I won't try to say these sentences. Does anyone here speak Hungarian? OK, that's very freeing. I almost feel as though I should try to say these sentences. I know you don't know what Hungarian is supposed to sound like, but I won't. The first sentence only has one meaning, and the second sentence also has only one meaning. So you can take these expressions "most people" and "from two cakes" and put them in either order, and depending on what order you use, you only get one reading. So the first sentence means the first reading that was offered, the most reasonable reading, most of the people ate two kinds of cake. The second sentence means there were two particular cakes that were the most popular ones. So in Hungarian, the order in which you apply the operations to interpret the quantifiers is not optional. It's completely fixed. It's determined by the order of the phrases. That's what determines what's going on. And the claim is that all languages are Hungarian deep down. It's just that some languages are better at this than others. So there are languages like Hungarian, for example, that are really good at being Hungarian. You can just see it right there. And then there are other languages, like English, that are Hungarian but they're shy about it.
And so you don't get to see the relevant kinds of movement. OK, what are we doing? So I'm going to show you some more reasons to take seriously the idea that this operation, which is called quantifier raising, or QR, exists. So we'll talk about some of the properties of it in the next little while. Here's another sentence with two quantifiers in it. This is work that I'm taking from my colleague Danny Fox, who's done a lot of fantastic work on the properties of quantifiers. So here's an ambiguous sentence, "A guard is standing in front of every building." What does this sentence mean? Yes? AUDIENCE: Either that every building has a different guard standing in front of it, or there is one guard that is continuously standing in front of every building. NORVIN RICHARDS: Yeah, there we go. So it has a sensible reading-- every building is guarded. And it has a reading involving a very large guard, or maybe lots of really small buildings or something like that. Oh yeah, no, let's imagine that there's more than one building. So those are two things the sentence could mean. So yeah, here are a bunch of buildings, a bunch of guards, one guard per building, or a bunch of buildings and one guard who is kind of wide. So it could mean either of those things-- that's it. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Ah, yes, so if the buildings are all in a circle and they're all facing each other, there could be one guard in the middle. A guard is standing-- he managed to stand in front of every building by standing right in the center spot. Yes, I suppose it could mean that. I think that's right. Let's discount that reading. So here's an ambiguity, the kind of ambiguity we've been talking about.
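Before going on, here is one way to make the two readings concrete-- a toy model-checking sketch of my own, not anything from the lecture. The names `exists_forall` and `forall_exists` are just labels for the two scope orders, and the little models are invented for illustration.

```python
# Two scope readings of "A guard is standing in front of every building,"
# checked against toy models. `in_front` lists (guard, building) pairs.

def exists_forall(guards, buildings, in_front):
    # Surface scope: one single (very wide) guard is in front of every building.
    return any(all((g, b) in in_front for b in buildings) for g in guards)

def forall_exists(guards, buildings, in_front):
    # Inverse scope (QR of "every building"): each building has some guard,
    # possibly a different one for each.
    return all(any((g, b) in in_front for g in guards) for b in buildings)

guards = {"g1", "g2"}
buildings = {"b1", "b2"}

one_guard_per_building = {("g1", "b1"), ("g2", "b2")}
wide_guard = {("g1", "b1"), ("g1", "b2")}

print(exists_forall(guards, buildings, one_guard_per_building))  # False
print(forall_exists(guards, buildings, one_guard_per_building))  # True
print(exists_forall(guards, buildings, wide_guard))              # True
print(forall_exists(guards, buildings, wide_guard))              # True
```

Notice that any model making the exists-forall reading true also makes the forall-exists reading true, which is part of why the "wide guard" reading is the harder one to isolate.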
You've got these two quantifiers and you can interpret-- so the first pass on why we got this kind of ambiguity was there's this process of interpreting quantifiers where you figure out what the sets are that the quantifier is relating, and you relate them to each other. And if you have a sentence that has two quantifiers in it, you get to do that process twice-- you've got two quantifiers, quantifier number one and quantifier number two. You either apply that process to quantifier number one first or quantifier number two first. And then I asserted and offered you, so far, only evidence from Hungarian. I asserted that there is actually this operation of quantifier raising that has the option of moving one quantifier to a higher position than the other quantifier, and that the order in which you perform the operations is actually fixed. It's completely determined by the structure that you get after you've done quantifier raising, if you have. So what I'm going to do in the next few slides is show you some reasons to take that seriously, even for English. So here's a sentence that has an ambiguity, and the story was going to be that it's ambiguous because you have the option of leaving "every building" where it is or the option of moving "every building" so that it's higher than "a guard." And if you take the option of moving "every building" so that it's higher than "a guard," invisibly, without changing the word order-- very weird to try to understand what's going on-- then what you do-- so the generalization is that you must interpret the highest quantifier first. So if you do QR of "every building" higher than "a guard," what you're saying is you're going to start by relating the set of buildings to the set of things that a guard is standing in front of. And you're going to say that first set is completely contained in the second set. And then you will interpret the quantifier "a guard."
So you have this optional operation of QR, and there isn't actually any optionality in the order in which you interpret quantifiers. You always interpret the highest quantifier first. That's going to be the story. The optionality has to do with the optionality of the application of QR. So here's a sentence with two quantifiers in it, and it's ambiguous. And I'm asserting it's ambiguous because you can do QR. Here's another sentence with two quantifiers in it: "A guard said that I should stand in front of every building." Now is this ambiguous in the same way? No. It doesn't mean-- so what does it mean? Joseph? AUDIENCE: The guard thinks that I have to be particularly-- NORVIN RICHARDS: Large. AUDIENCE: --to be able to stand in front of all [INAUDIBLE]. NORVIN RICHARDS: Yeah, exactly. There is one guard who demands that I gain weight-- wants me to be wide enough to stand in front of every building. Yeah, Faith? AUDIENCE: I interpreted it instead as moving from one building to the next one. NORVIN RICHARDS: Yeah, that's a more reasonable thing for the guard to ask me to do, isn't it? And I guess it helps that we've switched here to "That I should stand in front of every building" rather than-- so what we had before was a guard is standing in front of every building. I guess we could interpret that as meaning that there's a single guard who is walking from building to building and standing briefly in front of them. These are all ways of getting a single guard to do all the work. There's only one guard. Yeah, you're absolutely right. But it doesn't have the other reading. It doesn't mean there was one guard who wanted me to stand in front of the Stata Center and another guard who wanted me to stand in front of the Student Center, and another guard who wanted me to stand in front of the MIT Museum. It doesn't mean that. All right, why not? Because we've got two quantifiers. Why can't we interpret these quantifiers in any order we want?
Well, if we believe in QR, we have an answer to that. We get to say what's special about this example and distinguishes it from the examples we had before is that "a guard" is in the matrix clause, and there's an embedded clause, an embedded CP, "that I should stand in front of every building." Yeah, that's an embedded clause. Is that clear? Should I draw a tree? So "said" is taking, as its sister, a CP, "that I should stand in front of every building." And what we're learning is that although there is this optional operation, QR, that can get one quantifier past another, it can't get you out of a clause. We're going to have to figure out what counts as a clause, but apparently this counts as a clause. There's some limitation on how far you can QR. How about this one? "A guard seems to be standing in front of every building." Is that ambiguous? I think so. So it means either "Looks like every building is guarded" or "Man, that guard is wide!" or "Man, that guard is running hard to be in front of every building at once!" or "Look how clever that guard is at geometry. He's managed to stand in front of every building at once!" (Lucky thing for him, they all face each other.) So it has two meanings, one with a very wide or clever or fast-moving guard, and then the other more normal reading where it's like every building has a guard. So either every building is such that there's a guard standing in front of it, or there is a guard who is somehow standing in front of every building. So yes, this is ambiguous. How about this one? "I seem to a guard to be standing in front of every building." Is that ambiguous? No. This is a sentence about one guard who needs rest, who needs to be taken off-duty and convinced to get some rest. This is a guard who thinks that, I don't know, maybe he goes to one building and I'm there and he goes to another building and I seem to be there too, and he's seeing me everywhere and this guard has problems. And besides-- yes?
AUDIENCE: Is it just because "a guard" could be referring to multiple guards whereas I-- you can't have more than one I? NORVIN RICHARDS: But the question still remains, the other reading-- you're absolutely right. But the other reading that this could have would be one with something like every building has this property: I seem to a guard to be standing in front of it. That would be a reading that would be, like, "For this building, there's a guard who thinks that I'm standing in front of it. And for this other building, there's another guard, maybe a different guard, who thinks that I'm standing in front of it." That's a story where there are lots of guards who don't necessarily believe that I am simultaneously standing in front of lots of buildings. They just all-- their hallucinations are more simple than that. They just all see me in front of one building, one particular building, maybe the building they're supposed to be guarding. All of them are taking the day off because they're like, oh yeah, that Richards guy, he's guarding that building. Yeah? AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Yeah? AUDIENCE: Somehow it didn't register to me that you were the one-- not the guard. NORVIN RICHARDS: Yeah, so that's what the second sentence means. But the point is that it only means one thing. It means there is one guard who thinks that I'm standing in front of every building. It doesn't mean every building is such that there is some guard who thinks I'm standing in front of it. Doesn't mean that. Yeah? AUDIENCE: Could it be that a guard [INAUDIBLE]?? It's a singular phrase, but it could refer to any number of people, whereas "I seem to a guard" means that there is one entity that "a guard" refers to and it can't be anyone else. There is a person who has this quality that saw you. NORVIN RICHARDS: Yeah, so that's-- I think there was a related comment earlier, and I think that's onto something, but it's describing the problem that we have. 
If it's "A guard is standing in front of every building," or "A guard seems to be standing in front of every building," then yeah, "a guard" can either be one guard that's in many places at once or it can be many guards, one for each building. That's the original ambiguity. "A guard" has that power that it can do that. But the question is, why can't it do it in the second sentence? Why can't the second sentence mean there are three guards-- let's call them A, B, and C, and guard A thinks that I'm standing in front of building number one, and guard B thinks that I'm standing in front of building number two, and guard C thinks that I'm standing in front of building number three. Why can't it mean that when the first sentence can mean A seems to be standing in front of building one, B seems to be standing in front of building two, C seems to be standing in front of building three. We can play that game, the alphabet and number game, with the first example, but not with the second example. We're trying to figure out why. Yep, I'm trying to figure out why. Here's a plausible story about why. We went through this very fast, so I don't blame anybody if you don't remember this. But when we were doing "A guard seems to be standing in front of every building," sentences like that-- what we said was that there is a movement operation going on in sentences involving "seems," this kind of sentence involving "seems," where "seem" is followed by an infinitive. What's going on is that "the guard," the thing that ends up in subject position, starts out in the embedded clause. Here is "a guard," and it's raising, moving up to here. There's NP movement-- so a movement. I'm sorry, I know it's dangerous taking 24.900 from a syntactician, because I keep trying to change the subject back to syntax. So now we're going to talk about syntax some more. This is a bit of syntax that we talked about.
I tried to convince you that in sentences like this, sentences with "seem" and an infinitive in them, the subject of the whole thing started out in the embedded clause. One of the kinds of arguments had to do with properties of idioms, that you could say things like "The shit seems to have hit the fan," where we think that the idiom is "The shit hit the fan," that "the shit" needs to be somewhere near "hit the fan," but it's way over there, and the story was, yeah, that's because it starts out in the embedded clause, and it undergoes NP movement into the matrix clause. So it moves, it raises. That sound familiar at all? Have you all expunged the terrible memory of syntax? That's the way we were talking before. So we said a bit ago that QR is clause-bound. There we go. That's us saying that QR is clause-bound. "A guard said that I should stand in front of every building." I said yeah. Why isn't that ambiguous? Well, it's not ambiguous because "every building" is inside an embedded clause that doesn't contain "a guard." So QR can't take "every building" and move it past "a guard." And then I think I also said, if we're saying things like that, we eventually will have to explain what kinds of things-- what we mean by clauses, exactly. So "clause" has to include things like "that I should stand in front of every building." What about a sentence like "A guard seems to be standing in front of every building"? Well, here we get ambiguity. And it could be that we're getting ambiguity here because, well, "every building" has the option of undergoing QR here. The chalk I'm using is very small. I'm blaming my illegible handwriting on the poor quality of my chalk. It's foolish of me to pick up a new piece of chalk, because that will blow that excuse out of the water in just a second. So here's "a guard" in the end.
It could be that there's ambiguity in "A guard seems to be standing in front of every building" because "every building" can undergo QR up to here, to a position above "a guard." If that's possible, then that would be telling us that, well, this TP down here, "to be standing in front of every building," doesn't count as the kind of clause that blocks QR. So when we say that QR is clause-bound, we don't mean this kind of thing. We mean tensed clauses. But another possibility is that we actually mean every TP-- that QR out of this TP is impossible, that it's not possible to QR up to here; it's only possible to QR to here. And the reason that you get ambiguity in "A guard seems to be standing in front of every building" goes like this-- "every building" can QR to a position in the periphery of that embedded clause, and "a guard" started out in that embedded clause. And that's enough. This would be a story where we're already-- if we're doing QR at all, we're slowly, hopefully, getting accustomed to the idea that part of interpreting sentences will be interpreting things in places where we cannot necessarily see them. So we can take things and move them invisibly to other places and interpret them there. So in "A guard seems to be standing in front of every building," there are two invisible movements going on. One is we are invisibly moving "every building," maybe not all the way up to here, but just to the edge of the embedded clause, the embedded TP. And we are invisibly taking "a guard," which started out in the embedded subject position and underwent NP movement up to here, and we're putting it back. So we end up with "every building" above "a guard." And the reason to take that-- you're stunned. I don't blame you for being stunned. So there are these two invisible things happening. A reason to take that idea seriously is the fact at the bottom of this.
So "I seem to a guard to be standing in front of every building"-- we can get a story about why we're not getting ambiguity here if we're willing to say no, QR can't, in fact, get out of an infinitival clause like "to be standing in front of every building." It can only get to the edge of such a clause. And what's special about this example is that "a guard" is not inside the embedded clause "to be standing in front of every building." It's an argument of "seem." So it's in the higher clause. I don't know why I did this with a blackboard. I think I have slides here. So let me show you the slides. Maybe they will help. So we think that "a guard seems to be standing in front of every building" involves a step of movement. "A guard" is moving out of the embedded clause and into the matrix subject position. And I see now I don't actually have slides that help with any of this. So the reason we're getting this ambiguity is not that "every building" can QR past the higher position of "a guard." It can only QR past the lower position of "a guard," and that's good enough. So we get to interpret the lower position of "a guard" as well as the higher invisible position of "every building." So we get a handle on all of these facts as long as we're willing to posit these particular invisible kinds of movements. We're learning something about QR-- when we say it's clause-bound, we mean it can't get any further than TP, any kind of TP, tensed or nontensed. Now Danny Fox also discovered something else that's cool about QR. He's discovered many cool things. And the next thing is quite long and complicated, so maybe I will pause here and take questions. Do people have questions about this? Would you like me to go through it more slowly or carefully? I've been talking so far as if QR were always optional. Danny Fox has discovered-- it was one of his early discoveries-- it's actually more complicated than that.
What we'll see, and we'll start doing this next time, is that you can only do QR if it's going to change the meaning of the sentence. So if you have two quantifiers, you can QR the lower quantifier past the higher quantifier if that's going to give you a different meaning, a meaning you wouldn't have had if the QR hadn't happened. If you have a sentence where QR wouldn't affect the meaning, it can't happen. And what we'll do next time-- we'll start with this next time-- we'll develop the tools to discover that, the tools that Danny Fox discovered, and then I'll show you that that's true. So any further questions about any of this? At this point, you should all be weirded out. Suddenly things are moving around in an invisible way, and this is part of interpreting them. Yeah? AUDIENCE: If QR can only happen if it will change the meaning of the sentence, does that indicate that English, and I imagine other languages that allow this, build in the ambiguity as opposed to trying to mitigate the ambiguity? NORVIN RICHARDS: Oh, I see what you mean. I think I see what you mean. I mean, the claim in the end is that the structures that are interpreted are never ambiguous. You either do QR or you don't. And if you do QR, then you get one reading. And if you don't do QR, then you get another reading. And English is one of the languages in which you don't get to see QR, and the result is that the sentence is ambiguous because you can't tell whether QR has happened or not. There are languages out there, like Hungarian, where you can see QR, and so the sentences are not ambiguous. But yes, that means there are certain kinds of ambiguities in English that don't exist in some other languages, like Hungarian. There are other languages where it's more complicated than that. So there are languages like Japanese, and German, actually, has a version of this too.
But in Japanese, if you have two quantifiers, their scope with respect to each other is unambiguous unless you move one of them past the other, and then it becomes ambiguous. So basically, it's as though the movement operation that moves one quantifier past the other could be QR or it could be something else. And so you can't tell whether you have done QR of that thing or not. But there is no invisible QR in Japanese-- something like that. Cool. All right, good. Thank you for coming in and braving my possible cold viruses. I apologize for exposing you to them. And I will see you on Thursday.
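Fox's condition previewed above-- QR applies only when it changes the meaning-- can be illustrated with a brute-force check over tiny models. This is my own sketch, not Fox's formulation: with two existentials ("some... some..."), the two scope orders are true in exactly the same models, so QR would be vacuous; with "a" over "every," the orders can come apart.

```python
from itertools import product

A = ["a1", "a2"]          # e.g. guards
B = ["b1", "b2"]          # e.g. buildings
pairs = list(product(A, B))

def all_models():
    # Every possible extension of a binary relation over A x B.
    for bits in product([False, True], repeat=len(pairs)):
        yield {p for p, keep in zip(pairs, bits) if keep}

# "some A ... some B": the two scope orders never differ in truth value,
# so QR of one existential past the other would not change the meaning.
vacuous = all(
    any(any((x, y) in R for y in B) for x in A)
    == any(any((x, y) in R for x in A) for y in B)
    for R in all_models()
)

# "some A ... every B": the two scope orders do differ on some model,
# so QR is meaning-changing here.
meaningful = any(
    any(all((x, y) in R for y in B) for x in A)     # some A over every B
    != all(any((x, y) in R for x in A) for y in B)  # every B over some A
    for R in all_models()
)

print(vacuous)     # True
print(meaningful)  # True
```

On this way of putting it, the condition predicts that a sentence with two existentials never shows the scope ambiguity, because the QR that would produce the second reading is blocked as vacuous.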
Lecture_26_Signed_Languages.txt

[SQUEAKING] [RUSTLING] [CLICKING] NORVIN RICHARDS: Signed languages-- there are a bunch of signed languages. I've listed three of them here, but there are many. And maybe the first thing to say about them-- I can remember a time when I believed, as a child, that signed languages were basically just codes-- coded versions of spoken languages. When I was a kid, you were sometimes taught ASL signs for things every so often in class, sort of the way you would be taught the occasional Spanish word in class or whatever. It's like, here's the word in a foreign language. Let's all learn this word. Unlike Spanish, the impression I was given by my teachers-- who may actually have believed this-- was that the way you spoke ASL was to speak English, but to use your hands. So like every English word had an ASL translation. And there is something like that. It's called Signed Exact English. But it is not the standard language used by the deaf. The deaf use something called American Sign Language in America. Another way to make this point, that signed languages are not just coded versions of the spoken languages that are around them, is to point out that American Sign Language is fairly closely related to French Sign Language-- which I won't try to pronounce in French because I don't speak French-- for historical reasons. The first schools for the deaf in America were established by people from France who had established schools for the deaf in France and began establishing them here. So American Sign Language is a descendant of French Sign Language, and it is unrelated to British Sign Language and quite different from it. So go ahead and make all the jokes you want about American English and British English being different, but they're more similar to each other than either is to French.
But American Sign Language is more similar to French Sign Language than it is to British Sign Language because American Sign Language is not English. It's another language, one which is spoken with your body, especially your hands. Yes, I just said that. Thank you. Another way to make the same point: there are big grammatical differences between ASL and English, some of which we'll be exploring. Here's one. English has overt wh-movement. Remember wh-movement? As in "Who did John see?" Lots of other languages, like Mandarin, don't. And in this regard, ASL is more like Mandarin than it is like English. It's actually a blend. You can do either one. But there's a very natural way to ask wh-questions in ASL, which looks something like this. Here's a guy who is about to say, I think, "Who did John see?" "John"- "saw"- "who?" Yeah, so I'll get him to sign that again. So "John saw who?" And "who" is signed like this. That's the ASL for "who." There's another thing he's doing, which we'll talk about. Part of signing a question in ASL is not just what you do with your hands, but what you do with your face. You're required to look puzzled as you ask these particular kinds of questions-- and so to tilt your head and furrow your brow. And there's some very careful work on where, exactly, you're required to tilt your head and furrow your brow. It turns out to be very interesting. It isn't just that you do that for the whole question. You do that for certain parts of the question. And linguists who work on signed languages do work on what's going on. I'll put a link to this. This is a corpus of American Sign Language utterances by the folks over there at BU who have a center for studying ASL. There's a lot of really interesting data in here, which is tagged, as you can see. Anyway, I'll put a link to this on our website. So this is Carol Neidle, who's done a lot of really interesting work on ASL. Back into full screen mode.
I'm sorry, I'm going to do a lot of this because we're going to be going back and forth between videos and my slides. And I am low tech. If I were higher tech, I would have figured out a way to incorporate the videos into the slides, but that's why this is the linguistics department and not some other department. OK, so ASL is not English. It's not English because it's not all that closely related to British Sign Language, and it's also not English because, well, it has wh-in-situ, too: the standard way to ask "Who did John see?" has the word order "John saw who?" That's not to say that sign languages aren't in some kind of contact with spoken languages. So typically, if you're a deaf person growing up in America, you have some kind of relationship with ASL. What kind of relationship depends on the circumstances. So it's not at all uncommon for deaf children to be the children of hearing parents, and then their parents have to do some hustling and be careful to make sure that their children have a chance to learn ASL. Some hearing parents will try to learn ASL themselves and raise their children in ASL. But, of course, it's quite difficult to learn a language like ASL quickly. I say "quickly" because if you didn't know ASL before you realized that you had a deaf child, well, then, you need to hurry up and learn it fairly fast. And that's a difficult position for parents to be in. So there are some barriers with respect to how many resources these people have, whether they can do that in time for their kids to be exposed to ASL the way you need to be exposed to your first language. And then, there are deaf kids who are lucky enough to be around native ASL signers from a very early age. Now, the upshot of all this is that if you're deaf and you're growing up in the United States, you probably have some kind of relation to ASL. Maybe you become a fluent speaker. That's one hope.
But you also have some kind of relationship with English. You're surrounded by people who are communicating with each other in English. And so there are what you could think of as borrowings between the gestures of hearing people around you and the gestures that are used in the signed language. Maybe one of the clearest examples of this: the pronoun for "I" in American Sign Language is this. You point at your own chest. The pronoun for "I" in Japanese Sign Language is this. You point at your own nose. That's borrowing from the standard way to point to yourself in these places. So in Japan, and a lot of Asia, I think, the way to point to yourself is to point at your own nose, whereas in America, the standard way to point to yourself is to point at your chest. So we could think of this as borrowing, if we wanted to-- some kind of contact between the spoken language and the sign language. From very early on-- one of the first days, I think-- we talked about Saussure and the idea of the arbitrariness of the sign, the observation that the fact that this is called a "table" is an arbitrary fact. There's nothing about it that demands that it be called a table. It could have been called "chair," and the fact that it's called a table is an accident. We call it a table in English, but in another language it could be called a "mesa," or a [NON-ENGLISH], or a bunch of other things. Yeah, languages get to arbitrarily decide what they're going to call things. Even in spoken languages-- I think we said this before-- there are places where the way a word is pronounced has some kind of connection with what it refers to. The classic example of this is names for birds, which are not always, but often, imitations of the call of the bird. So a little chickadee is named after the thing that a chickadee says. It sounds like it's saying "chickadee," if you listen to a chickadee.
Or in Passamaquoddy, the word for "Great Horned Owl" is [NON-ENGLISH], which is a pretty good imitation of an owl. So there are some things like that where people seem to be imitating the sounds made by something. In ASL, of course, your sounds are not relevant. In a sense, you have a lot of chances to have non-arbitrary signs. So there are signs that really do look like you're imitating the thing, trying to make an image of the thing. So here's "book." That's "book." You can see why that's the word for "book." Or this is "tree," and you can see why that's the word for "tree." It's a picture of a tree. This is the kind of thing that gets people to think that ASL is basically a very sophisticated version of charades, that people are just trying to imitate things, and then you're done. But of course, that's not it. Signs are just as arbitrary in ASL as they are in any spoken language. That is to say, they're mostly arbitrary, with some other cases like "chickadee" for English or "book" for ASL, cases where it's possible to imitate something. There's a fair amount of work on the phonology of signed languages-- which, when I say it that way, sounds like a bad joke. I mean, what does that mean, the phonology of sign languages? But actually, there's this really cool stuff that people have observed, if you think of phonology as being the study of how basic units can be combined to make slightly larger units-- so OK, for spoken languages, those units are sounds-- and we make all kinds of observations about the rules for how those things get to combine. And cast your mind back to when I was showing you some Lardil data, and I said, we convinced ourselves that it was useful to think that the accusative forms were a more reliable guide to the underlying forms than the nominative forms were. That's what we ended up saying for Lardil. So we wanted to say, -in is being added to the underlying form of the noun, which for "woman" is "bidngen" and for "fish" is "yak."
And if you ask a Lardil speaker, what's your word for fish, they're not going to say "yak." They're going to say "yaga." But that's because, although the Lardil word for fish is underlyingly "yak," Lardil also has a basic rule-- a bunch of rules-- that lengthen monosyllabic words. And we talked about various rules of this kind. They add different things depending on what the monosyllabic word ends in, what sound it ends in, with the upshot that "yak" is not a possible Lardil word because it's too short. And Lardil, therefore, lengthens it if you haven't added anything to it. So if you've made it accusative, you're fine. It's two syllables long, "yaga," and you're all done. But if you wanted it to be nominative, you weren't going to add any suffixes to it, well, then, you need to add another vowel because it's too short. That was the picture of Lardil that we ended up with. Does that sound even faintly familiar? That's what we were talking about. That's a story about Lardil that says Lardil is like many languages in having what's called a minimal word requirement. That is, Lardil words can't be just one syllable long. If you have a word that's going to end up being one syllable long, you do something to it to make it longer. ASL has also been argued to have a minimal word requirement, and the argument goes like this. There are signs that, if they're by themselves, involve a movement which is often short. But then, that movement goes away if they're made part of a larger word. So here's the ASL for "think." You take your finger, you move it towards your temple, and touch yourself on the temple briefly. Yeah, that's "think," and [INAUDIBLE] "I think." And here is "shocked." So "shocked" starts with your finger at your temple, where it would be for "think," and then it has a second component, "shocked." "I'm shocked." Yeah, so "think"-- one way to think about "think" is just having your hand in a position. That's too small.
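The Lardil half of the analogy can be put in code-- a cartoon of my own, not the real Lardil grammar. A vowel count stands in for syllable count, and a single dummy vowel "a" stands in for Lardil's actual, more varied augmentation rules (which is why the toy output for "yak" differs from the attested form).

```python
# Toy minimal word requirement: a bare (nominative) stem of one syllable
# gets augmented; a suffixed (accusative) form already meets the minimum.

VOWELS = set("aeiou")

def syllables(form):
    # Crude stand-in for syllable counting: count vowel letters.
    return sum(ch in VOWELS for ch in form)

def nominative(stem):
    # No suffix, so a one-syllable stem must be lengthened.
    return stem + "a" if syllables(stem) <= 1 else stem

def accusative(stem):
    # The suffix -in itself takes the word past the one-syllable minimum.
    return stem + "in"

print(nominative("bidngen"))  # bidngen (already long enough, no change)
print(accusative("bidngen"))  # bidngenin
print(nominative("yak"))      # yaka -- the lecture's form is "yaga",
                              # with further changes this toy ignores
print(accusative("yak"))      # yakin
```

The point of the sketch is just the conditional: the repair applies only when nothing else has already made the word big enough, which is the shape of the ASL "think"/"shocked" argument too.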
That's like a monosyllable in Lardil. And so you add something. You add a motion. Your hand moves towards your temple. But if you're adding, if you're making "shocked," which is "think" plus something else-- that's kind of like the accusative of "fish"-- then it's OK to just start with your hand at your temple and then do the second half. You don't have to add that motion to your hand. And the suggestion that's been made in the literature on ASL phonology is this is like a minimal word requirement. So Diane Brentari has suggested ASL signs have a minimal word requirement. She doesn't say "like Lardil." I forget what minimal word language she's talking about. So they have to contain at least one move, is her idea. "Think" is the ASL version of the Lardil word for "fish." Remember Polish obstruents? I like this class because I think I promised you that I was never, ever again going to talk about Polish, and so I apologize. It's the last day. You don't have to hear any more about Polish. Sorry, go ahead. AUDIENCE: I just have a question. NORVIN RICHARDS: Yeah. AUDIENCE: Are there other words that have "think" something? NORVIN RICHARDS: Oh, dear, now we get beyond my very modest knowledge about ASL. I don't know. Are you thinking that might be the only one? AUDIENCE: Well, I think, again, if you, I guess, look at compound words, there is definitely the ability to pair them with other things. So I was just wondering if that's a common way to just show surprise, maybe confusion. NORVIN RICHARDS: Oh, you mean having this as part of a larger sign? Yeah, so Brentari's discussion certainly makes it sound like there are a bunch of these. "Shocked" is the example she uses, and I'm afraid I don't know any other examples off the top of my head. I'll try to find the paper that I got this from, and I can put it on the website so you can take a look. So here I am signing things at you, yeah. Polish-- last time, I promise, we will talk about Polish.
Polish obstruents, so stops and fricatives-- we convinced ourselves that Polish has what's called final devoicing, cross-linguistically extraordinarily common, a phenomenon in which, if you have a voiced obstruent at the end of a word, it becomes voiceless. So if you look at the plurals of these nouns, then you can see that before the plural suffix "i," you can have both voiced and voiceless stops and fricatives-- in the last example, "nose" and "rubble," plural of "rubble," which, hopefully, none of us will meet any time soon. But if you don't have the plural, all of these obstruents are voiceless if they're at the end of a word because Polish has final devoicing. This was part of the argument that we need to posit an underlying representation for a word like "lye," let's say-- the second line there-- which ends in a voiced stop, because that gives us the best-- well, the best match on the data. So it allows us to account for the fact that the plurals of "bow" and "lye" look different from each other-- they're "wuki" and "wugi"-- even though the singulars look the same. So this is what we were doing back when we were doing Polish, yeah? Have I triggered your Polish flashbacks? OK, so it has voiced obstruents, but they're limited in where they can go. They can't go at the ends of words. Here's an observation about the distribution of finger wiggling in the ASL sign. So sometimes when you make an ASL sign, you wiggle your fingers. There are ASL signs in which there isn't much movement, or just one of Brentari's short movements, like the word for "color," which is this. So you put your hand to your chin, and you wiggle your fingers. There are also signs where your hands are moving and the wiggling is happening during the move, like "go up in flames," where your hands are rising and your fingers are wiggling. That's how you do "go up in flames." So ASL signs can have finger wiggling in them, but there are imaginable signs that you don't get.
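As an aside on the Polish review above: the final devoicing rule is simple enough to state mechanically. Here is a minimal sketch in Python, using the rough transliterations from the slides ("wuk" for "bow," "wug" for "lye"); the function name and the small devoicing table are illustrative assumptions, not a serious model of Polish spelling or its full obstruent inventory:

```python
def surface_form(underlying, suffix=""):
    """Attach a suffix (if any), then apply final devoicing:
    a voiced obstruent becomes voiceless at the end of a word."""
    # A few voiced-to-voiceless obstruent pairs, in a rough transliteration.
    devoice = {"b": "p", "d": "t", "g": "k", "z": "s", "v": "f"}
    word = underlying + suffix
    last = word[-1]
    if last in devoice:
        word = word[:-1] + devoice[last]
    return word

# "bow" is underlyingly wuk, "lye" is underlyingly wug:
print(surface_form("wuk"))       # wuk
print(surface_form("wug"))       # wuk  (singulars neutralize)
print(surface_form("wuk", "i"))  # wuki
print(surface_form("wug", "i"))  # wugi (plurals stay distinct)
```

The sketch makes the same point as the analysis: the singulars both surface as "wuk," but the plural suffix "i" protects the underlying voiced stop, so "wuki" and "wugi" come out different.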
So you could imagine a sign where your fingers would wiggle, and then you would move. It would be like "go up in flames," except you would wiggle first before you began moving-- or in which you did a long move-- not one of your short moves, but a long move-- and then, having reached your destination, wiggled your fingers. But there aren't any signs like that. So finger wiggling is either confined to the position at the end of one of Brentari's short movements, or if your hands are moving over a longer distance like in "go up in flames," well, you have to wiggle the whole time. Similar observation about hand shape change. So there are signs where your hands just do a short move and then change their hand shape, like "understand," where your hand moves into position and then you put up one finger, "understand," or "old," where your hand starts at your chin and changes shape as it moves away from your chin-- "old." Yeah? AUDIENCE: You're talking about other mental concepts being expressed in sign language and understand-- so it also starts at the temple? NORVIN RICHARDS: Yeah, so I think, right, I think there are a number of-- not surprising, maybe a number of mental ones. So "forget" is another one. This is "forget," where you wipe something off of your brain, yeah, using your middle finger. Your middle finger is also used for being sick. So yeah, there are a bunch of signs that use your head, that have to do with what's going on in your head. Yeah, that's right. That's right. OK, so ASL has finger wiggling, but there are limits on where finger wiggling can go. It goes during the motion, or it goes right after one of Brentari's short movements. Hand shape change is the same. It goes after one of Brentari's short movements, like in "understand," or it goes during the movement, like in "old." Yeah, but there aren't-- did I say this in the next slide?
Yeah, again, you can imagine a sign-- which would be like "old"-- where you would start with one hand shape, you change shapes, and then move, or where you would move a long distance and then change shapes, but there aren't any signs like that. And so people have suggested, yeah, just like a phonologist has to be able to say things like, this is a language that has voiced obstruents but not at the ends of words-- there are limits on where they can go-- ASL has finger wiggling, and it has hand shape change, but it has rules about where they can go in the sign. And then, of course, you want to know why-- like why, what constrains these to where they go. But when I say that there's work on ASL phonology, this is what I mean-- there's people trying to figure out what makes something a possible well-formed ASL sign. It turns out there are limits on this. One observation, actually, that people have made as they do this work: there are compounds or polymorphemic signs, which you can tell they're polymorphemic because they appear to violate these rules. So I just said, there aren't any signs where you first finger wiggle and then move. Here's an apparent counterexample to that. There's a word "hypnotist," which goes like this. But that's a two-morpheme sign. So it's "hypnotize"-- which is a Brentari short movement plus finger wiggling-- and then there's a suffix, which is the agentive suffix. You use it to make-- you attach it to verbs to make nouns for people who do that. So it's a hypnotist. So "hypnotist" by itself looks like you're finger wiggling and then moving. But that's because there's a morpheme boundary between those two things. So within a morpheme, you can think of it as a bisyllabic word if you want. So in the first syllable, you're doing finger wiggling the same way you're doing finger wiggling in "color." And then there's a second move, which involves moving your hands.
ASL has one-handed signs, like "understand," and it also has two-handed signs, like "go up in flames." There's an interesting restriction in two-handed signs on the second hand. So the second hand either has to stay still, like in "tree," where one hand moves and the other hand just stays there in the background for the first sign, or it does the same thing as the other hand. So "teach," both your hands are doing the same thing. I love the sign for "teach." I want to do this when I teach now. Or "bicycle," where your hands are doing the same thing, but they're out of phase with each other. So they're both going in circles. Which is kind of interesting because-- so and that's it, right? So there aren't ASL signs in which one hand is doing one thing and the other hand is doing a completely different thing. And it's not because-- does that make sense? So your hands can't both be moving and doing different things. Which is interesting, because it is actually physically possible to violate this, to have your two hands both moving and doing different things, in poetry. So there's a video which I'll try to show you now of an ASL poet, a guy named Clayton Valli, who wrote many poems in ASL. There are a lot of YouTube videos of him performing his poems. And this one is called "Snowflake." And what he does in it-- the reason I'm going to show you the video is that I can't physically do it very well. I'll show you what he does. But I'll tell you what he's about to do, just so you can see it. With one hand, he's doing the sign for "snowflake"-- which is a snowflake falling. And then, with the other hand, he's signing one-handed versions of-- I may not get the order right-- white, cold, beautiful. And he's doing this as this hand is moving. So this hand is moving, and with the other hand, he's going, white, cold, beautiful. And as you can see, I can't physically do this, but he can. Let me show you him doing it. Let me get it. Where is he? Yeah, so here's Clayton Valli.
And I think I paused it in more or less the right place, and we have to watch this for a little bit. So he's about to sign about looking out the window at a snowy day. The snow is coming down. And now I think it's about to come. Here's the snow piled up. Come on, Clayton. Yeah, here it is. There's the snowflake falling, and white, beautiful, cold, yeah, with his other hand. So it's physically possible to do that, at least if you're Clayton Valli, if you're a fluent ASL signer. But there aren't any ASL signs that do it. It's one of those rules that you break in order to perform poetry. So I'll put a link to that. I'll put a link to that poem on the website so you can see. Yeah? AUDIENCE: I feel like that might just be because most people aren't capable of doing that, like do a circle with one hand and a square with the other. Most people can't do that. NORVIN RICHARDS: So I think it's one of those things that probably requires practice, right? And question, is this something that any ASL signer could just do cold without any trouble? If they saw the Clayton Valli video, they'd be like, oh, yeah, and then they would do it, and I can't do it because I'm not a fluent ASL signer? Or whether it's because Clayton Valli practiced in his room hour after hour and eventually got to where he could get his hands to do that? Yeah, right. So I think you're right. It's not exactly that. It's not surprising that ASL isn't full of signs that are like this, with your two hands doing two different things. But it's interesting that it is physically possible to learn to do it. Yeah? AUDIENCE: I think that it's in contrast to the signing rule. It's one of the first things you learn in an ASL class. And I believe that is where you should focus all your attention, on the dominant hand. NORVIN RICHARDS: Yeah, yeah, yeah. So I mean, you're absolutely right. What happens is that the non-dominant hand either copies the dominant hand or just sits still, like in "tree."
And so you think this is something somebody deliberately did? They decided that ASL should work this way? It'd be interesting to see. I have no idea. I'm doing all this talk about ASL. I started by saying, there are many sign languages. I don't know whether this is true in every sign language. I haven't heard of a sign language in which your hands are allowed to move independently of each other. So it'd be interesting to see how universal this is. Yeah? AUDIENCE: Is there a left-handed versus right-handed version of ASL? NORVIN RICHARDS: No, I mean, if-- so I'm right-handed, and so this is my version of "tree." If I were left-handed, this would be my version of "tree." And they mean the same thing, as far as I can tell. Similarly, for the signs that use both hands-- I mean, this is a consequence of these facts about the signs that we've been talking about-- for the signs that use both hands, like "teach," if I'm-- somebody described this to me as the equivalent of talking with your mouth full-- if I'm busy with my right hand, if I'm holding something or whatever, then I can sign "teach" just with my other hand, and you don't lose anything, in a sense. You might if there's also a sign that's just like this. And Valli, actually, in this video, is doing something like that. The sign for "cold" uses both hands. They're both moving. But because his right hand is busy doing "snowflake," he does a one-handed version of "cold," yeah. Yeah, OK, and I alluded to this a second ago. There's a lot of work on the use of parts of the body other than your hands, so your non-manual component. There's facial expressions that are involved in doing certain kinds of things. So as I said, in wh-questions in ASL, wh-questions require you to furrow your brow and look puzzled. And there's all this interesting work on where, exactly, you're required to do that. As I said, ASL is a wh-in-situ language, or it has the option of wh in situ. And I do have chalk today, which is nice.
So apparently, what that means is that if you're doing the ASL for "You gave what to the teacher?" where you just pretend that all of this is ASL, you're required to look puzzled here-- that is, from the beginning of the question until you get to the wh- word. And then you're allowed to relax your face. And if this is an embedded question like, I don't know-- so if I'm saying the ASL for, "I don't know what you gave to the teacher"-- well, then, this is still true. You sign "I don't know" without doing a wh- facial expression, but you do one for this part. So from the beginning of the question to the wh- word, you're required to look puzzled. Just kind of cool there's this work on this. Or similarly, there's a negation facial expression in ASL. And so negation-- ASL has a word for "not." It's this. And so if I want to sign-- oh, sorry. AUDIENCE: So does that mean that the facial expressions we make when [INAUDIBLE] are related to [INAUDIBLE] face and interaction with others? Because I would think it's something that's unique, like when you're confused, to make an expression with your face. Is it just related to the fact that you're talking with someone you want to understand that you're confused? Or is it [INAUDIBLE]? NORVIN RICHARDS: These are all really interesting questions to which I don't know the answers. One possibility is that the puzzled facial expression, the wh- facial expression in ASL, it's a little bit like the two pronouns for "I" that I was talking about earlier-- that this is "I" in ASL, and this is "I" in Japanese Sign Language, and that maybe that's connected to gestures that hearing people make that speakers of sign languages are using in a different way. So you're right, I might, if I were asking you a wh- question, look puzzled. I don't have to look puzzled to ask you a wh-question, and there are no particular restrictions on when I should look puzzled-- where during my question I should look puzzled. But in ASL, there are.
You actually have to do this, and you have to do it at the right points, in the right points of the wh- question. So their use of facial expressions is a little more sophisticated, I guess, or rule-governed, than they have been shown to be, anyway, for speaking people. Now, maybe all this shows is that there hasn't been enough work on facial expressions for spoken languages. We're all obsessed with the sounds, but for ASL, there's all this clever work on what you do with the rest of your body. Yeah, good question. Similarly, there's a negation facial expression. So if I want to say, "I'm not deaf," I can sign, "I," "not," and "deaf." Or I can point at myself and shake my head as I sign "deaf." So I don't have to use the sign for "not" at all. By shaking my head, I convey negation without using the negation sign. So there are facial expressions for both of these things. I don't know what happens when they combine because, presumably, you can ask negative questions. I guess you can do all of these things with your head at once. One of the other properties of signed languages, which is cool and which there's a lot of work on, is the use of loci in space to refer. So an English sentence like "Trump told Biden that he would win" is ambiguous. So "he" could refer to Trump, or to Biden, or to anybody else, in principle. It has to refer to some male person. It could refer to either of these people. If you were going to sign this in ASL-- and I'm not going to. I don't know enough ASL, but I can tell you the basic ideas about how this would work-- first of all, you would need what are called "name signs." So you would need to sign the names "Trump" and "Biden." The simple way to do that is by finger spelling. So there are correspondents to all of the letters. You can sign all of the letters of the English alphabet. So you could sign T-R-U-M-P, only you'd do it faster if you knew ASL.
But there are also what are called "name signs," which are signs that are just signs for particular people-- typically famous people, or people who are deaf, I think, generally end up with name signs. Name signs are often based-- they often contain the first letter of the person's name. I had a classmate in grad school who was deaf. He's now a professor at Gallaudet University. His name was Gaurav in English, and his ASL sign involved the letter G, which is the first letter of his name, put on the chest. Sorry, so that's Gaurav in ASL. The name sign for "Noam Chomsky," which is one of the few other signs I know, is this. So it's a C. When I first saw it, I was like, what, they think he's an alcoholic? What is the deal? It turns out to be based on the sign for "God." So it's "God" with a C. [LAUGHTER] It's Noam Chomsky. It was invented by someone with the proper level of respect for Noam Chomsky. The name sign for Trump, there are apparently various name signs for Trump. One of them is this-- [LAUGHTER] Or this, that's another one. For Biden, I was looking, and apparently, there isn't general agreement on what the name sign for Biden ought to be. One of them is this, which is apparently supposed to be a gesture that indicates that he likes to wear sunglasses, I guess. It's connected to the fact that he wears sunglasses a lot. By now, there must be a name sign for Biden because, of course, he's surrounded by interpreters when he's talking, you hope. But anyway, if you wanted to say-- bless you-- in ASL, "Trump told Biden that he would win," you'd sign Trump, and you'd sign Biden, and you would put them in places. So you'd say "Trump," and you'd put Trump here, and Biden, put Biden here. And then you would say, he told him that he-- and then you would either point at Trump, or you would point at Biden-- would win. So the ASL versions of the sentence are not ambiguous. So you make it clear. You've either pointed at Trump, or you've pointed at Biden.
So he always refers to someone in particular. Or, if what I want to convey to you is "Trump told Biden that Obama would win," well, then, I need to sign "Biden," "Obama," and put "Obama" somewhere else. You can have many, many people in front of you in space. I believe people who have studied this have not found limits on how much space, how much subdividing of space you can do in front of you. I don't know how hard they've tried, but yeah, it's certainly not just two. You can have lots of pronouns in front of you. There are even fancier things you can do. I was just reading a paper about this. This is kind of astonishing. So if I want to say something in ASL like "Most of my students came to class," what I do is I start by signing "my students," and then I put my students in a circle. So I have my students, and then I establish a circle that all of my students are in. And then I'm going to make the sign for "most," which I think is something like this. And then I describe a smaller circle that's part of the circle that I've just described for you. And then I would sign, "went to class." So I make my circle that has all the students, and then I have a subset of that that's-- I draw the kinds of Venn diagram-y things we were talking about before. Here are the students. And the cool thing about this is now, what I've got in front of me is a space that's got my students, and it's subdivided into a larger space that's got most of my students, the ones who came to class, and a smaller space that's got the students who didn't come to class. So the next sentence, apparently, can be "They stayed home," pointing at the ones who didn't come to class. In English, if I say "Most of my students came to class, they stayed home," I'm contradicting myself. And there's something interesting going on here. When we were talking about pronouns, we just breezed by them, right? Pronouns, well, they refer to people who are salient or whatever.
Certainly, if I pass a bunch of students on the Infinite Corridor, I could remark to you, "They're on their way to class." We don't have to have been talking about them before. They're just there in the world. I get to assume that you're thinking about them. But if I say "Most of my students came to class," well, that-- I mean, I'm telling you that they're my students, and that most of them came to class, and then there were the ones who didn't, right? I mean, that's what that means. But I still don't get to refer to the ones who didn't come to class with "they." "They" has to refer to the ones who did come to class. But in ASL, I get to point at them. And so I get to refer to them right away. Joseph? AUDIENCE: So the "Most of my students came to class." NORVIN RICHARDS: Yeah. AUDIENCE: Because [INAUDIBLE] the quantifier "most," you're describing a subset. But doesn't it conservatively mean you get to ignore the [INAUDIBLE]? NORVIN RICHARDS: So if we didn't know about ASL, I might have said, why don't I get to refer to that other set? And you might have said, ah, it's because of the conservativity of "most." Yeah, that would have been a pretty plausible thing to say, and it's maybe even right. But the fact that in ASL you can refer to them with a pronoun suggests that maybe it's actually something else going on. There's all kinds of stuff like this in spoken languages, by the way. It's not just about modifiers. And so like-- old observation-- if I say, John has a child; she's five-- that's fine. And "she," you probably figure it's the child I'm talking about. If I say, "John is a parent; she's five," that's a weird thing for me to say, even though by saying, "John is a parent," I'm telling you that John has a child, right? That's what that means, but it doesn't help. [LAUGHS] So we're limited in how we can use pronouns. I'm sorry. You had a point you wanted to make. 
AUDIENCE: Is there any finer distinction, like with [INAUDIBLE] say, I get most of my students who don't come to class because-- NORVIN RICHARDS: I believe you can, then, draw subparts of these. I'll put that paper on the website, too. You can-- AUDIENCE: [INAUDIBLE]. NORVIN RICHARDS: Yeah, there's all kinds of stuff that you can do with space. Again, I don't know what the limits of it are, but it would be really interesting to find out. Yeah, they can do more referring to sets and subsets than we can do because they have the advantage that they've got a blackboard. They get to point out where everything is. Yeah? AUDIENCE: OK, so that seems pretty different from spoken English. NORVIN RICHARDS: Yeah. AUDIENCE: [INAUDIBLE] shapes that individual's perception of the world into certain [INAUDIBLE]. But is their written form of communication very different from a non-deaf individual's? Or are they similar? NORVIN RICHARDS: Oh, OK. So there have been various attempts to come up with writing systems that are indigenous to ASL, and I don't think any of them have gained a lot of popularity. So when linguists are writing papers about ASL, what they typically do is-- I mean, I did a parody of it over here-- that if you want to write an example sentence that's like, "I saw a tree," for "tree," you're going to write-- for this sign, you're going to write the English word "tree" in all capital letters. So you use all capital letters to indicate that you're talking about signs. And then, they'll annotate these, if you're interested in indicating things about facial position, and facial expression, and things like that, there are ways of annotating that to show you're doing the wh- facial expression or whatever. As far as actual ASL speakers go, my understanding is that they typically don't write in ASL. So dictionaries, you can find dictionaries of ASL online.
They have lots of videos, or they'll have pictures of one frame after another of-- so there isn't-- literacy among the deaf is apparently a really serious problem, as you might imagine, right? So somebody who's profoundly deaf, who hasn't heard a sound-- learning to spell if you can hear is hard enough. Learning to spell if all of the symbols that you're being asked to write just don't represent anything for you at all, it's going to be a nightmare. And so although ASL native signers typically write in English, it's apparently very hard, as you might imagine. Yeah. Yeah? Points about this? OK. Oh, OK, so yeah, this has all been about space and other things to do with your body. I wanted to show you one other video. So there's another phenomenon that people use when they are telling stories or talking about multiple people, which is what's called "role shift"-- that's what people who work on it call it-- where you'll shift your body. People do this in spoken language, too, but again, in ASL, it's apparently more or less obligatory. You'll shift your body to represent the different characters that you're talking about. And I'll show you a story where somebody does this. It is a story about throwing a stick for a dog, and one of the nice things about it is that there's no need at all to explain what the guy is talking about. Maybe at the beginning a little bit. He's going for a walk with his dog. No, stop that! OK, so there's his dog, right? [LAUGHS] And there's him. [LAUGHS] Yeah, so he's doing role shift. He's going to get a stick. Yeah, and there's the dog again. The dog's all excited. There's the guy. He throws the stick. Phew, clunk. And here comes the dog. And so-- [LAUGHTER] I'm showing this to you partly because it's a cute story-- "Thank you," he says. I'm showing that to you partly because it's a cute story, but also because it's a good example of role shift. So he's-- oh, there's Clayton Valli again-- a good example of role shift. 
So he's using his body to go back and forth between being the dog and being the person. And apparently, this is a standard thing that you do if you do ASL, if you're a native speaker of ASL. It's part of being a fluent signer as opposed to being someone who knows a bit of ASL. Full screen mode, that's what I want. OK. So this was what I wanted to try to show you about ASL. So just to review, ASL, American Sign Language, it's a language. It's a language spoken here in America by many deaf people. It is not English. It's not any form of English. It's not charades, and it's not English with all of the words replaced by signs, which is what I believed when I was a kid. It's a language that's quite different from English in various ways, as I've tried to show you. There's a lot of interesting work on ASL. A lot of it is on ASL phonology, where by "phonology," we mean what we always meant by phonology, I guess, which is the rules for how the articulations work. So phonology up until now has all been about what you're doing with your vocal tract and what the rules are for what your vocal tract is allowed to do here or there in a word. There is an ASL version of this where people try to understand what are the various articulations your fingers, your hands, your wrists, the rest of your body, what are those parts of you allowed to do at different points in the sign? And then, as I said, there's a lot of interest in the use of parts of your body other than your hands. Lots of work on facial expression and on role shift, shifting your body back and forth. And that's what I wanted to tell you about ASL. Any questions about any of that? Then we are done. You officially know everything about 24.900. So 24.900, you know everything about linguistics. [LAUGHTER] No, actually, that's false.
But one of the main goals of this class-- this class has had several goals, but one of the main goals of this class has been to convince you that there are some interesting mysteries having to do with how language works and that, if you take more linguistics classes, you might get to learn more about how those mysteries work. If you take even more linguistics classes, then you could be the person who solves these mysteries for the rest of us, which would be great. One of the nice things about linguistics-- I think I've said this before-- it's a field that's been around for a while. We have some results. But it's not all that hard, even in an intro class, to bring you right up to the edge of what we understand and invite you to look down into the abyss and ask yourself how you could build a bridge-- this metaphor is getting away from me-- how you could move further, how you could get further from where we are. So I hope that at least some of you will be inspired to want to do that because we need people to go out there and solve the mysteries that haven't been solved. Questions about any of that? All right, then, we are done. Go forth. Do your class evaluations. So we're ending early, so you'll have a good half hour to think hard about what you want to write in your class evaluations. And thanks very much. [APPLAUSE] Thank you. Thank you, thank you. Go out and enjoy the day.
Lecture_6_Phonetics_Part_2.txt

[SQUEAKING] [RUSTLING] [CLICKING] NORVIN RICHARDS: Here, I'm going to try to answer that. That's a really good question. And in some cases, we just kind of have to guess. But in some cases, we have-- especially for languages like Greek and Latin, we have descriptions by people who attempted to describe their own sounds. That's a kind of thing that happens. We can infer things from mistakes that people made at various points. So people who were learning how to spell, or, say, children, or people who weren't highly educated, when they misspelled words in, say, Greek or Latin, you can infer things. So if you're making-- if a certain kind of misspelling is common, you get to infer that those sounds were kind of common. So in the case of ancient Greek, we think that there was a "y" [high front rounded vowel] partly because there was a period where people were-- when people misspelled it. They either misspelled it as "u" or as "i". And then later, there was a period where people reliably misspelled it as "i," leading us to think that it had shifted to what it is in modern Greek, which is an "i." That's one of the kinds of things people do. You also get to look at what happens to these words when they get borrowed into other languages, so languages that have richer or poorer systems. There's one classic example from Greek and from Latin. We know that Greek-- I see, I'm sorry, Raquel, you were right. This is going to be a long-- a long answer. We know that the Greek letter, this Greek letter, used to be pronounced as a "p" with an extra puff of air. It used to be pi or pea [pronounced with aspiration], so just an especially emphatic "p." They also had another letter, which was a "p" without the extra puff of air, "p." And then this one shifted to-- well, actually, the modern-- this is the symbol for it in IPA, the bilabial fricative, a "fuh," a "fuh."
And the way that we know-- we can actually date pretty precisely when this shift happened. And that's because Greek was loaning lots of words into Latin. Latin was borrowing words from Greek, including a word that they spelled like this. And the fact that they spelled it like that, first of all, suggests that it was a "p" with an "h" sound and not something more like an "f" because Latin had an "F." And they would have used an F if they had thought that this was an "f." Also, because people who were bad at spelling-- children, and people writing graffiti, and stuff like that-- when they misspelled this word, when they were writing graffiti about philosophy, they would misspell it by leaving out the H. So they would spell it just with a P. And then later, there was a period where people who were misspelling this word would misspell it by writing it with an F. And that's how you know that the word had changed its pronunciation right then at that point from an aspirated "p," a "p" with an "h" sound, to an "f." AUDIENCE: So when I misspell things, that's actually benefiting the future. NORVIN RICHARDS: You're being-- providing valuable data for future historical linguists, absolutely. That's absolutely true. Now, please engrave your misspellings in stone. That will make them a lot easier for people to handle. Any other questions about this? OK, where was I going with all that? Oh, right, so I was saying-- I was telling you some things about typology. There are languages that have poorer and richer vowel systems. English has a particularly rich system of vowels. We have a fairly large number of vowels. There are plenty of languages out there that have fewer, that have five. There are plenty of languages that have three. If you have three vowels, your three vowels are pretty invariably "a," "i," and "u." That is, they're the vowels that are kind of in the corners of this system.
They're at the bottom right, and the top right, and the top left, yeah. There aren't languages out there that have three vowels and the three vowels are "eh," "ih," and "ah." That's not a kind of language that we find. And there's some work on this, including classic work by my colleague Edward Flemming, talking about optimal distributions of vowels in vowel space, where the idea is you have this space in your mouth. You want your vowels to be maximally contrastive. So you want people to always be able to hear the difference between different vowels. And the best way to do that is to have your vowels as far apart from each other as possible, as a theory about why we get the kinds of vowel systems that we get and not others. OK. Oh, right, and then there are languages, like French, that have nasalized vowels. So French has words like "main," [with nasalized vowel] which is hand, and "met," which is dish. So that's another option that some languages take. You can lower your velum and allow some air to flow through your nasal cavity as you're making sounds. French, Portuguese, lots of languages do this. OK, so I said we're classifying vowels. Why are we classifying vowels? Partly because it's fun, but also because it enables us to realize that we wonder about kinds of vowel spaces. Why are all the English vowels in the upper right-hand corner? Do they have to be? If you had a rounded vowel that was somewhere else, what would it sound like? All that good stuff. Another thing that it does is help us to develop theories of phonologically natural sound changes. So here are some Turkish nouns. I'm sorry, I'm going to use your native language. And I'll just ask you to be quiet for the next little while. Here are some Turkish nouns. Some of you, if you've ever read CS Lewis, you might recognize the first one. And here are the plurals of these nouns. What's the plural suffix in Turkish? Yeah? AUDIENCE: L-A-R.
NORVIN RICHARDS: So it's L-A-R, [TURKISH] like in the word for lions, and the word for arms, and the word for slaves, and the word for daughters. Oh, but not in the words for winds, teeth, and roses, where we seem to get a [TURKISH]. So we get L-E-R for those three. I'll write them down-- [TURKISH] And we get [TURKISH] for these-- [TURKISH] Having done that, I'm now going to go back to the last slide. What do you think? Why are we getting [TURKISH] with these and [TURKISH] with those? Joseph, what do you think is happening? AUDIENCE: I think this is an example of vowel harmony. NORVIN RICHARDS: I think this is an example of what's called vowel harmony. What do you mean? AUDIENCE: So a lot of languages will have a certain suffix or a prefix, an affix, a bound morpheme. And when you attach it to another word, the vowel, the primary vowel in that bound morpheme will change to-- will harmonize with whatever-- with some vowel in the other [? word ?] [INAUDIBLE]. NORVIN RICHARDS: So that's well said. So what does this vowel have in common with these vowels? And what does this vowel have in common with these vowels? Bear in mind that "oo" is a front high rounded vowel and "uh" is a back high unrounded vowel. What do these vowels, "ay," "ee," and "oo" all have in common? Yes? AUDIENCE: They're front. NORVIN RICHARDS: They're all front. And what do these vowels-- "ah," "oh," "oo," and "uh"-- have in common? Well, they're all back, right? And the vowel in "lar" is back and the vowel in "ler" is front. So just as Joseph said, this is vowel harmony. The vowel of this suffix is having its properties determined by the vowel of the thing it's being attached to, the noun it's being attached to. Whether it's front or back is determined by the vowel of the noun it's attached to. So this is front-back harmony. There's also rounding harmony-- so cases where you have a suffix whose roundedness is determined by the roundedness of the vowel that it's attaching to.
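The front-back harmony rule just described can be sketched as a toy function. This is illustrative only -- real Turkish harmony also involves rounding harmony and some exceptional roots -- but the core rule is exactly what the lecture says: look at the noun's last vowel, and pick -ler if it is front, -lar if it is back.

```python
# Toy sketch of Turkish front/back vowel harmony for the plural suffix,
# following the lecture's description. Vowel classes use standard Turkish
# orthography; rounding harmony and exceptional roots are ignored.
FRONT = set("eiöü")   # front vowels
BACK = set("aıou")    # back vowels

def plural(noun):
    """Attach -ler or -lar based on the noun's last vowel (front/back harmony)."""
    for ch in reversed(noun.lower()):
        if ch in FRONT:
            return noun + "ler"
        if ch in BACK:
            return noun + "lar"
    raise ValueError("no vowel found in " + noun)

print(plural("aslan"))  # "lion" (the CS Lewis one) -> aslanlar
print(plural("diş"))    # "tooth" -> dişler
print(plural("gül"))    # "rose" -> güller
```

Note that only the last vowel of the stem matters for the suffix; the harmony spreads rightward onto whatever is attached.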
There are other kinds of harmony. But those are things that happen. And categorizing vowels, looking at vowels and thinking, ah yes, some of these are front and others are back, it's not just a fun thing to do that allows us to think about the insides of our mouths. It allows us to understand Turkish better, right? What the heck is going on in Turkish? Well, it's harmony for front versus back. OK, I think when I was first introducing you to consonants, I said, consonants, they involve disturbing the flow of air. And the flow of air is usually going outward from your lungs. But there are some other options, I said, and chuckled mysteriously. And so I want to just show you why I was doing that. There are other kinds of sounds out there. So the first kind of sound that I wanted to talk about are what are called ejectives. And I have some sound files that I'll play for you. These were all compiled and put online by the great phoneticist Peter Ladefoged, who used to teach at UCLA. He's no longer with us. I'll put a link on the website both to the particular classes of sounds that I'm going to show you and to the larger UCLA website, which has all kinds of cool sound files on it that you can listen to and amaze and annoy your friends with. So the first type of sound that I want to talk about are what are called ejectives. There aren't languages of Europe that have ejectives. But they're quite popular in the languages of North and South America. And so it's worth learning how to do them. Let's start by making an ejective "t." This is how you make an ejective "t." So first, we'll practice like this, you'll hold your breath. Don't do it now because I have other things to talk about. But what you're going to do is you're going to hold your breath and try to make an audible "t." sound. So go t, t, t. AUDIENCE: t, t, t. NORVIN RICHARDS: OK, good. And then if you were actually doing this in speech, you're not allowed to stop and hold your breath. 
The idea is to get-- well, so hold your breath, make your ejective "t," and then release into a vowel, so go ah, ta, ah, ta. AUDIENCE: Ah, ta, ah, ta. NORVIN RICHARDS: Cool, you guys sound very ejective. And similarly, you can do this with bilabial. There are bilabial ejective-- ah, pa, ah, pa-- or velar ejective stops like ah, ca, ah, ca. Cool, excellent. So we're all amateurs at this, but the Lakota are professionals. So let's listen to a Lakota speaker do this. [AUDIO PLAYBACK] NORVIN RICHARDS: That's an ejective velar. [END PLAYBACK] NORVIN RICHARDS: As opposed to, here's her regular velar stop. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: Here's a bilabial one. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: So that's what ejectives sound like in the wild. This is Lakota. It's a language still spoken, well, near North and South Dakota. So there are the Lakotas, and the Nakotas, and the Dakotas. They all live in that area. They're Siouan languages. Yeah? Oh, Raquel? AUDIENCE: What's the difference between that and just the normal "k" sound followed by the glottal stop? NORVIN RICHARDS: So it's very similar. You're engaging your-- so what you're doing when you make these sounds is you are making a velar closure. And then, you are also making a glottal stop. And then, you are pushing with your glottis to make the velar closure open. So you make your-- you move your larynx in order to get the closure to open. And that's what gives it this unique burst. So you're right. There is a glottal stop involved when you're doing that. We're at-- yeah, so when I said that the flow of air is usually from your lungs, here's the place where it isn't exactly. The air originally came from your lungs. Well, it originally came from the world. It spent some time in your lungs. And then, it got trapped between this glottal stop at your larynx and the closure that you're making in your mouth. 
And the flow of air that causes-- that's present as you're opening the stop-- is actually from your larynx. It's not from your lungs. It's not originally from your lungs, if that makes sense. So those are ejectives. Clicks. I have some recordings of clicks in here somewhere. Yeah, there they are. These are Zulu clicks. There are three main kinds of clicks to think about. And clicks, again, involve a strange and cross-- typologically unusual, kind of, airflow. So clicks are found in-- so the languages with the richest click inventories are the Khoisan languages of places like Namibia. And then, there are a bunch of Bantu languages that also have clicks, also in Africa, in theory because they've been in contact with the Khoisan languages. So clicks are really cross-linguistically not terribly popular. They're only found in that area and in one culturally restricted language spoken by the Lardil. So the Lardil have-- it's almost a language game. It's called Damin, which is used after a certain initiation ceremony. So there's a period where you're not supposed to speak normal Lardil. And so you speak this other thing instead, which has all kinds of strange sounds in it, including clicks. It's called Damin. We may get a chance to talk more about it later. But for languages that are not spoken under special cultural circumstances, these African languages, the Khoisan languages and the Bantu languages that have been in contact with them, are the only ones that have clicks. And I'm talking about clicks now because they do involve an unusual airflow. They involve making a stop, a velar stop, and then using your tongue to sharply draw air into your mouth for a moment. So what's a click? One click is the dental click. It's the, kind of, sound you make to criticize someone, to [TSK-ING] thing that you sometimes-- [LAUGHTER] --see spelled tsk. T-S-K. T-S-K. Yeah, there is a lateral click. 
Whenever I read about lateral clicks, the description of them is always that it's the sound you make to encourage a horse. I have never had to encourage a horse-- [LAUGHTER] --or discourage a-- I've never-- I haven't interacted much with horses. So I'm not-- that description doesn't help me a whole lot. I gather it involves sucking the sides of your tongue in from the sides of your mouth. So you're going [CLICKING]. AUDIENCE: [CLICKING] NORVIN RICHARDS: Yes, I think I hear some lateral clicks. [LAUGHTER] And then, there's another click, which is sometimes called a palatal click, which goes [CLUCKING]. AUDIENCE: [CLUCKING] NORVIN RICHARDS: So you're-- oh, you guys are really good at that one. [LAUGHING] Yeah, so all three of these clicks are present, for example, in Zulu and in Xhosa, which is the native language of Nelson Mandela. And here are some Zulu ones that you can listen to. So we've got the dental one first. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: And here's the lateral one. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: And the alveopalatal one. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: And as you can see from this, Zulu draws distinctions depending on, basically, how you release the stop. So there are clicks-- those clicks, the ones I just played for you, involve a voiceless stop that's not aspirated, that doesn't have this puff of air that I was talking about when I was talking about "philosophy" and Greek, and that we will talk about more. But you can also build them on aspirated stops, or on voiced stops, or even on nasals. So maybe I'll just run through all of the dental clicks. Here's the voiceless one again. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: But here, it is aspirated. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: Ta ga. So there's air coming out as you do this. And here it is voiced. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: And here, it is nasal.
[AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: Whoa, that hardly sounded like a click at all. Here, let me try to find another nasal one. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: Yep, that was clicky. [LAUGHTER] And here it is as a lateral. [AUDIO PLAYBACK] NORVIN RICHARDS: [CHUCKLING] [END PLAYBACK] NORVIN RICHARDS: Zulu really does sound like that. [LAUGHING] We did a field methods class on Zulu. And it has this beautiful-- because it's tonal. So it has all of this-- the intonation is just really pretty to listen to. So clicks is another kind of thing, not all that popular cross-linguistically. But it happens. And they're magnificent. The Khoisan languages have even richer arrays of click inventories than this. In particular, they also have bilabial clicks. So they have [CLICKING] as a sound. [CLICKING], that kind of a sound. And they make all of these distinctions among their clicks. The result ends up being that the Khoisan languages, if you count all of these clicks as different consonants, the Khoisan languages have just huge inventories of consonants because they have all these different clicks. Yes? AUDIENCE: Are the accents above the vowels indicative of what direction the tone is? NORVIN RICHARDS: Yeah, those are tone marks. Yeah, so the accents that point down to the right are low tones. And the accents that point up to the right are high tones. And that's all Zulu's got. It's got high tones and low tones. And then, you can have long vowels that transition from a high tone to a low tone or vice versa. Yeah, yep. Clicks. Implosives. Let me find you some implosives. They're around here somewhere. Where were they? These are implosives. Implosives, what implosives are, are sounds where-- so what you should do if you want to make an implosive, ah... "b," for example, is you should make your bilabial closure. And then, you should try to inhale. So it's like, "bah, bah, bah, bah" [pronounced with implosive "b"]. And similarly, so don't hurt yourself.
But that's what you're trying to do. [LAUGHTER] And there are also velar implosives, which you should really be careful with. They're "ag ba, ag ba" [pronounced with implosive "g"]. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Or "ah dah, ah dah" [pronounced with implosive "d"]. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Yeah, [INAUDIBLE]. So here's a native speaker. So these are not all that uncommon in languages of India. There are a bunch of them in Indonesia. Here are some. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: That's a bilabial one. Here's a retroflex one. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: "Dinu" [pronounced with voiced retroflex implosive "d"]. All right. So we can compare his implosive one with the regular voiced one. So here's the implosive one again. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: Yeah, and then, here it is not implosive. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: Yeah, and here's what it sounds like when it's velar. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: As opposed to the regular one. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: Yeah, so again, I'll put these files up. You can play with them at home, amaze and annoy your friends. No questions about any of that? So again, this has all been about interesting air flows. So here's one where the flow of air is not outward at all. It's going inward for just a millisecond, yeah, trying to suck air into your body instead of getting it out as you speak. There are-- so I don't know of any languages that have, for example, a class of vowels that you must make while inhaling. So these are consonants that you make while attempting to inhale. That's the sense in which the flow of air is different. You could imagine a language that had the vowel "a" and also the vowel [GASP]. I've never heard of a language like that. There are languages that will inhale for particular words or classes of words.
So I'm told that in Finland, there's an expression that basically means "yes." But it's used only by middle-aged women. And it involves inhaling. You say, [GASP]. And that marks you as a middle-aged Finnish woman. [LAUGHTER] So if you're ever attempting to sound like a middle-aged Finnish woman, you should use that a lot, yeah. But I've never heard of a language that had-- there are languages that have a series of nasal vowels, for example. But there aren't languages that have a series of inhaled vowels. That doesn't happen, no, as far as I know. And then trills, these don't involve a special airflow. But I wanted to talk about them partly because they came up in class. AUDIENCE: [TRILLING] NORVIN RICHARDS: Yeah, like that. So trills involve getting some type of-- some part of your vocal tract to flutter. And there are ones that are comparatively famous, like the so-called rolled "r." These are some examples that are going to sound like that. This is a language. This is a Dravidian language called Toda that has dental trills and also postalveolar trills. So you can listen to those. Sure we can. Come on. Wake up, website. I take it back. We can't listen to those. Why can't we listen to those? I was listening to them just now. What is your deal? Never mind. We'll go to this other one. So the rolled "r" the [PURRING] sound, that's the alveolar trill, where you're holding your tongue fairly close to your alveolar ridge, and getting the air to flow, and holding your tongue in such a way that your tongue flutters in the breeze as it goes by. English doesn't have that. But if you want to learn to speak Italian, or Spanish, or a variety of languages of Australia, you need to learn how to roll your "r"s. It's possible to be physically incapable of doing that. In fact, so for example, there are Italians who cannot roll their "r"s. But it's not all that common. Yeah, it's the kind of thing that is a speech impediment if your native language has rolled "r"s. 
If your native language is English then you're off the hook. There are also languages that have uvular trills. That's one possible pronunciation of the French "r" or the German "r." [PURRING IN BACK OF MOUTH] I can only do a uvular trill with my head tilted back slightly. So yeah, that's, again, getting your uvula to move in the breeze. AUDIENCE: Gargling. NORVIN RICHARDS: I'm sorry. AUDIENCE: Like gargling. NORVIN RICHARDS: Like gargling. And then, there are languages that have bilabial trills, which was what Raquel was doing for us. Let me see if I can get these to come out. Ah, come on. All right. The UCLA website. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: There we go. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: So that's the sound that begins with a bilabial trill. [BLOWING] AUDIENCE: [BLOWING] NORVIN RICHARDS: [BLOWING] Yeah, you're better at it than me. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: Oh, here's another one. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: Yeah, so this is Kele, it's a language spoken in Gabon, and Titan, which is, I believe, a Polynesian language. These are languages that have bilabial trills. Is that a question? AUDIENCE: Question. Remind me why rolled "r"s are called "rolled 'r's" again. NORVIN RICHARDS: That's the-- so in linguistics, they aren't usually called that. They're called alveolar trills. We call them rolled "r"s, I think, because they sound a little bit like a drum roll. It's like there's a repetitive sound. [PURRING] You're repeatedly striking the roof of your mouth. That's what I always thought that term referred to. But as I say, it's not a linguistic term. AUDIENCE: Remind me why drum rolls [INAUDIBLE]. NORVIN RICHARDS: Oh, that's a good question. Well, because they sound like rolled "r"s. [LAUGHTER] I guess they sound like something is rolling and repeatedly-- so they involve a repeated strike of something.
So if you have something rolling, bumping down a staircase or something, then it might sound a little bit like that. I don't know. I am now making things up. I will try to flag it when I'm making things up. But yes, I don't know. That's a good question. You should go look it up in the OED or something. Maybe they'll have a theory. Yeah, other questions? All right. So enough of the bilabial trills. And so yeah, as I say, I'll put the UCLA sound files up there so that you can play around with them. Oh, sorry. I spoke too soon. So we've been talking about sounds for a little while. And I've been encouraging you to try to make the strange sounds. You've all been clicking at me and trilling at me. It's been great. But there is another way to study speech sounds. And I just wanted to show you how it works. I hope this works. We'll see. It's possible to make what are called spectrograms. And maybe what I'll do is just show you one. And that'll make it easier to talk about what they are. Let's see if I can get this to work. You. Let's see if we can see that. There. That's a spectrogram. Let me Zoom in on it. Yeah, there we go. And I can zoom in on it a little further. It isn't all vowels, is it? So this is a program called Praat. It's a freeware program. I'll put it up online. It's something that allows you to make spectrograms. As you can see, it's very easy. It's a pretty easy program to use. I'll put up-- it's a program that's been around for a long time. It was developed by linguists and not by software engineers. And you can tell. So it's a little bit-- the interface is a little bit clunky. But I'll put up some suggestions about how to use it. And I encourage you to download it and play around with it. It's fun. What's going on here, the thing on the bottom of this window is the spectrogram. And it's a spectrogram showing me making those vowel sounds, "a," "i," and "u," transitioning between them. I think I might be able to play what we're looking at here. 
[AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: Yeah, so I didn't keep very much of the "u," so that we can think just about the "a" and the "i." What you're doing when you are making vowels especially, but actually when you're speaking generally, is your vocal cords, in this case, my vocal cords were vibrating. They were making the air vibrate. There's this column of air that comes up through my vocal tract. They're making that air vibrate. And then, those vibrations are interacting with other things I'm doing with my vocal tract. They're bouncing off the sides of my vocal tract. And as I change the shape of my vocal tract, I change what those vibrations are doing. Yeah, it's a complicated phenomenon. You end up creating this complicated waveform, which you can do Fourier analysis of and analyze it as a waveform that has a bunch of regular sine waves of various frequencies and amplitudes that are contributing to it. Yeah, and this is what you do when you do a Fourier analysis, yeah. And what the spectrogram does is mark for you the frequencies that are making the greatest contribution to the general complicated air waveform. They're the darker parts. So these dark bands that you're seeing are what are called formants. And they are the things that are contributing the most to the sound. As you can see, they're shifting as I shift from "ah" to "ee." So this formant down-- no, let's see. Can we see my cursor on this? Yes, this formant right here, you can see as I shift from "ah" to "ee," it goes up. Yeah? And this formant up here, as I shift from "ah" to "ee," it dips. So how far apart the formants are from each other is what distinguishes the vowels from each other. You're hearing this difference in formants. You can do this for yourself. Here, I'm going to do another one. Hang on. "Wah." Yeah, so here's another one. I didn't get a very good recording of that one. Try that again. "Wah." Well. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: Hmm. Hmm.
That's not doing what I'm expecting it to. Oh. Oh, I think I see what it's doing. It has to do with the fact that the microphone in my computer is lousy. You can see the formant shifting for this one. Actually, you can't see them shifting all that much. But you can hear a formant shifting. If you do the shift-- oh, no, wait. I think I can think of something that will show this better. Hang on just a sec. One more thing. Stop. Stop. Stop. Let's try "ee-oo." There we go. That's better. So here, you can clearly see a formant going from up here to down here as I transition from "ee" to "oo." Is everybody seeing that? I'll play it again. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: I sound like a dolphin or something. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: Yeah, but as I switch from the "ee" to the "oo," you can see a formant descending, yeah. And you can hear that. If you do the-- so this is something that we were doing back when I was trying to convince you that vowels were worth categorizing, that the switch from "ee" to "oo" involves your tongue going from being front to being back, and also your lips going from being not rounded to being rounded, yeah. And if you whisper that to yourself, so go [SLURPING], you might be hearing something descending in pitch. Are you hearing that? Just go [SLURPING]. And that's what you're hearing. It's that formant. Yeah, by whispering, you're getting that to leap out at you. And you can hear it descending in pitch. Why is it descending in pitch? If you think about what you're doing, think about the inside of your vocal tract. As you go from "ee" to "oo," let's talk first about "oo." So "ooh" ["u"] is a high, back rounded vowel. Was any of that surprising? Yeah? "Oo." So your tongue is high and bunched up in the back of your mouth. And your lips are rounded. Ooh. Yeah, that means-- so what you're doing when you're making vowels is you're taking this column of air that's in your vocal tract.
And you're, depending on the vowel, making a partition somewhere that isn't complete. You're not blocking off part of the tract. But you're putting your tongue in the way. So you're making part of that column of air narrower. Does that make sense? So for a high back rounded vowel, by bunching your tongue up toward the back of your mouth, you're making not a complete obstruction of the flow of air. There's still space for the air to flow past it. But you're making it narrower back there, yeah. And when you make an "oo" by making a high back gesture with your tongue and by rounding your lips, you're creating a compartment for your vocal tract, which is as large as you can make it. Your lips are out there in front. And your tongue is as far back as it goes. Yeah, does that make sense? You're compartmentalizing off that top space of your vocal tract. And you're making that space as large as it can be. As opposed to "ee" ["i"], where your lips are not rounded and your tongue is toward the front of your mouth. So that makes that space that you created for "oo" collapse. It makes that space as small as it can possibly be because now your tongue is shoved up toward the front of your mouth. And your lips are not rounded. The space between your tongue and your lips is now as small as it gets. Yeah? And that's why the transition from "ee" to "ooh" involves something descending in pitch. It's because you're going from a very small compartment with a little bit of vibration inside it to a large compartment with the same amount of vibration inside it because your vocal cords are vibrating at a constant rate. We're not-- you're not fooling around with pitch. Does that make sense?
So as you switch from a small compartment to a large one, one component of the very complicated things that make up this complicated waveform is descending in pitch because you're creating this little space that's getting larger, and by putting the same amount of energy into it, the molecules of air are not moving as fast. That's what we hear as a lower pitch. Does that make sense? It's a way of talking about this, anyway. So I encourage you to fool around with Praat. Let me just show you some consonants so that you can see what else you can do with Praat. We'll try another one here. "Bab. Dad. Gag." View that one. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: So here, I got the same vowel all three times. And you can see these nice clear bars in the middle. The difference has to do with what the consonants are on either side of the vowel. One of the things that this representation, this spectrogram, allows us to concentrate on, if you think about it, how do you tell the difference between "bab" and "gag"? So we know articulatorily what the difference is. The difference is that, for "bab," you're making a bilabial stop. And for "gag," you're making a velar stop. But when you're hearing people make those sounds, how are you hearing what the difference is between a B and a G? Bear in mind that B and G are both stops. The flow of air is stopped, yeah, while you're making the sound. So while you're making the sound, you're not hearing things. Yeah? The way you distinguish "bab" from "gag" is by looking at what it does to the vowel that's in between. You can see that these formants for the "ah" vowel, and remember that the vowel was the same in all three cases, it's especially clear for this last one. Here's "gag." And you can see that the formants, this formant is smile-shaped. And this formant is frown-shaped. This one goes down as it goes close to the stop. And this one goes up as it goes close to the stop.
As opposed to the bilabial, which is, if anything, slightly down on both sides. This is one of the things that makes speech perception so hard and that makes it so hard to get machines to understand speech. It's a problem that's more or less solved. But it's a hard problem because your speech is not just a sequence of sounds. It's a sequence of sounds that all influence each other. It's a mess, in other words, yeah? And the best cues for what sound you made at a particular point might not be at that point at all. They might be on neighboring points. That's how you find out what sound you're looking at. And that's what makes speech perception hard. It's also part of why-- so there are generalizations about the kinds of syllables that we find in the world. There are plenty of languages out there in which it's OK for a syllable to start with a stop. Let's say you can have a syllable like "pa." That's an English word more or less. It's a word for father, "pa," but in which it's not OK to have a P followed by a fricative and then a vowel. So you don't have syllables like "psah" in English, or "pfah," yeah, or "pkah." You can imagine that there are languages that allow syllables like that. But if a language is going to restrict its syllables, if it's only going to have one of these kinds, it's only going to have this kind. Or to put it another way, there are languages that have both of these. And there are languages that only have this. There aren't any languages that only have this, yeah? And we can see why. The crucial cue to what kind of stop you're hearing comes from the vowel. So if you put a fricative in between the stop and the vowel, you deprive yourself of one of the big cues for figuring out what kind of stop you're listening to. And so it makes sense that we get this type of syllable everywhere. And the other type of syllable has a higher degree of difficulty, basically, as far as perception goes. Yeah, Joseph.
AUDIENCE: What effect does that have on [INAUDIBLE] especially. NORVIN RICHARDS: Yeah. AUDIENCE: For one example, where it's perfectly OK to string three stops in a row. NORVIN RICHARDS: Yep. AUDIENCE: Or at least two. NORVIN RICHARDS: Yeah. AUDIENCE: [INAUDIBLE] then you can just-- NORVIN RICHARDS: Right. So I've shown you the easiest way to figure out what a stop is, which is by looking at its effect on the nearby vowel. But you're absolutely right. There are languages out there that have strings of stops, strings of stops and fricatives. It absolutely happens. Russian has words like [RUSSIAN]-- it's got a whole string of stops and fricatives at the beginning of a word, at the beginning of a syllable. The most famous examples of this are languages-- there are two kinds of languages that are especially famous for this. There's a Salish language called Bella Coola. The Salish languages are spoken in the Pacific Northwest. Bella Coola-- there's only one word of Bella Coola that I know. And it's this. So this is a uvular fricative, a voiceless uvular fricative. Anybody want to try to pronounce that? It's something like sauce. [LAUGHTER] It's the only thing I know how to say in Bella Coola. If I ever meet a Bella Coola speaker, we'll have a very simple conversation. It means seal fat. And so here's a place where you hope there are other cues besides effects on vowels, because there are no vowels in this word. Or the Berber languages of Northern Africa are famous for having sounds like this. Here's the only word I know, the only word I know in [INAUDIBLE] Berber, which is spoken in Morocco. It means "you (feminine) sprained it." And it's something like [CLICKING]. Phonologists just get really interested in these languages for lots of reasons. One is the kind of thing Joseph is asking about: how do these people figure out what sounds they are hearing? And the answer has to be something like, yeah, there are lots of good cues on vowels. But there are also cues on other kinds of things.
And we have to figure out what they are-- what people are cueing in on. There are other questions too, like I was just talking about. Syllables, which is something we'll talk about as we get further into phonology. And the question is, are there any syllables in this word? To which the answer turns out to be yes. So if you look carefully at these languages, you can convince yourself that they have syllables, yeah. Sorry, very long answer to Joseph's question, to which the short version of the answer would be: the best cues are on vowels. But there must be cues on other kinds of things as well, which are not as clear. All right? And it's a question of your level of tolerance for unclarity, yeah, or how good you are at picking up on those cues. Cool. So Praat-- was I going to do anything else with Praat? No, those were the things I was going to do with Praat. I will leave Praat up. I think-- actually, we may be seeing Praat again pretty shortly. But I'll leave Praat on the website so you can play around with it at home. Going to full screen mode. Oh, sorry. Go back out of screen mode. So we've just been talking about how you know what sounds it is that you are hearing. And some of the cues come from what's happening during the sound. And some of the cues come from neighboring sounds. And the study of this is one of the central studies of phonology and of phonetics. I want to show you another kind of information that we have, though. It's something called the McGurk effect. And I'll just demonstrate it to you. And then, we'll talk about it. This guy is going to repeat a syllable a couple of times. And then, when he's done, I'm going to ask you what syllable you heard. So here's the guy. [AUDIO PLAYBACK] - Bah. Bah. Bah. Bah. Bah. Bah. [END PLAYBACK] NORVIN RICHARDS: I'm going to turn up the volume and do that again so you can hear it better. [AUDIO PLAYBACK] - Bah. Bah. Bah. Bah. Bah. Bah. Bah. Bah. Bah. Bah. Bah. Bah.
[END PLAYBACK] NORVIN RICHARDS: How many of you heard a "bah"? How many of you heard a "da"? How many of you heard a "gah"? Several of you are raising your hand several times. [LAUGHTER] You're just not sure what you heard? Or you thought you heard several things at once? Yeah, you thought you heard several things at once. So there's something very confusing going on. I'm going to play it again. And I want you to close your eyes. AUDIENCE: [INTERPOSING VOICES] NORVIN RICHARDS: Everybody got your eyes closed? [AUDIO PLAYBACK] - Bah. Bah. Bah. Bah. Bah. Bah. [END PLAYBACK] NORVIN RICHARDS: I'll put the link up so that you can play with this at home. But here's what's happening. You're hearing a recording of "bah." That's what he's doing. He's saying "bah." But the "bah" is being carefully timed together with a video of him saying "gah." [LAUGHTER] And so you can only tell that it's "bah" if you don't look at him. [LAUGHTER] So if your eyes are closed, you hear it as "bah," yeah? But if your eyes are not closed, if you look at him-- you should try this at home, maybe on a better quality screen. The illusion becomes fairly clear. People generally report that they're hearing "dah," which is neither what they are seeing nor what they are hearing, yeah? So what this is teaching us is that although there's all this complicated phonology going on, all of these things that you do to try to figure out the speech signal, you are also paying attention to what you see. So apparently, what's going on here-- again, the recording is of "bah." But you're looking at this guy. And he's clearly not saying "bah" because his lips are not closing. And apparently, the evidence of your eyes is enough to convince your brain to ignore your ears when it's trying to figure out what you're hearing, yeah.
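The way vision overrides audition here can be sketched as a toy cue-combination model, in the spirit of probabilistic models of speech perception. This is not the lecture's own analysis, and every number below is an invented illustration: each candidate syllable gets a score for how well it explains the auditory signal and a score for how well it explains the visual signal, and the percept is the syllable that best explains both at once.

```python
# Toy cue-combination model of the McGurk effect.
# All likelihood numbers are illustrative assumptions.
auditory = {"bah": 0.70, "dah": 0.20, "gah": 0.10}  # the audio really is "bah"
visual   = {"bah": 0.05, "dah": 0.35, "gah": 0.60}  # the lips are clearly not closing

def percept(aud, vis):
    # Multiply the two cues and pick the best-scoring syllable.
    scores = {s: aud[s] * vis[s] for s in aud}
    return max(scores, key=scores.get)

eyes_open = percept(auditory, visual)        # fused percept: "dah"
no_visual = {s: 1 / 3 for s in auditory}     # eyes closed: vision is uninformative
eyes_closed = percept(auditory, no_visual)   # now the audio wins: "bah"
```

With eyes open, the model lands on "dah," which is neither the audio ("bah") nor the video ("gah"), mirroring the fusion the class reported; with the visual cue flattened, it reverts to "bah."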
So the cause of the illusion is that because you can see that he's not making a bilabial closure, your brain is rejecting the hypothesis that it's a bilabial closure and trying to come up with the closest sound that it can. And you end up with confusion. Yeah? AUDIENCE: So then, what would language acquisition look like in individuals who are born blind? NORVIN RICHARDS: Yeah, so that's a really good question. I don't know whether there's any work on that-- that question of whether the McGurk effect has anything to say about that. We obviously can get along just fine without seeing the people that we're talking to. We can talk to people on the phone and all kinds of things. But we are apparently geared to pay so much attention to visual input that it actually overrides the evidence of our ears. And it's a very powerful effect. I know what's going on. And it happens to me. You know what's going on now. And it will happen to you if you watch this again. Again, I'll put the link up. And you can amaze your friends with it. It's a very robust phenomenon. It's been demonstrated for a bunch of different sounds and in a bunch of different languages. It's not just an English thing. It's all over the place. Now, am I done with other things? Well, now I am about to start talking about something which is pretty complicated. And so I'll just give you-- yeah, so I'll tell you briefly what I'm going to talk about next time. And we'll just start with this next time because I don't think we have time to talk about this in any detail. We've been talking-- well, here, let me show you something. And then we'll talk more about it next time. Back to Praat. "Pie. Bye. Spy." Ah, darn. I failed to save that. Hang on just a second. Pie. Bye. Spy. So that's me saying "pie, bye, spy." [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: I'm sorry, the-- everything is up again. So here's-- let's just listen to "pie." Actually, let's just look at "pie." Can I do that? Come on, you.
[AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: So that's the spectrogram of me saying "pie." And you can see-- here, I'll use my cursor instead. What's going on here: here's the stop. And then, this little burst is my lips opening. And then, there's a little burst of white noise. And then, we start in on the vowel. And you can see the formants, the black bars that indicate that we're doing a vowel. This little burst of white noise is the period that elapses in between my lips opening and my vocal cords beginning to engage. So there's this period where there's nothing coming out of my lungs but air. There's always air coming out of my lungs. Let me try that again. There's nothing coming out of my vocal tract but air. Yeah? My vocal cords aren't vibrating yet. And it's not long. It lasts some fraction of a second. How long does it last? It lasts-- who cares how long it lasts? Do I care? Do I care long enough to find out? It lasts less time than that. It lasts 0.06 seconds. Yeah, there we go. So that's what's going on in "pie." Now let's look at "spy." [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: And you can see-- and actually, allow me to let you hear. So here's the "s." This is what fricatives look like, these bursts of white noise. Lots more energy up here for the "s." Here's the stop. You can see what stops look like. It's blank. Here are my lips opening for the end of the stop. And let's just play it starting with the stop. [AUDIO PLAYBACK] [END PLAYBACK] NORVIN RICHARDS: How many of you hear that as "pie"? How many of you hear it as "bye"? Yeah, so this is a non-aspirated voiceless stop. So my lips are opening. And then, my vocal cords get busy right away. You can see there isn't a big gap in between the "p" and what's happening after it. So we will talk more about this next time. But this is what's called aspiration. English has aspiration in some places, but not in others. So we'll do this demonstration again next time.
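The 0.06-second lag just measured is what phoneticians call voice onset time, and the waveform structure it rests on (silence, then a noisy release burst, then periodic voicing) can be sketched computationally. Everything below is a toy sketch, not anything Praat does internally: the signal is synthetic, and the frame size and thresholds are invented assumptions. The idea is just that the burst is the first loud stretch, and voicing begins at the first loud stretch that is also periodic (few zero crossings, unlike aspiration noise).

```python
import math, random

def frames(signal, size):
    # Chop the signal into non-overlapping frames of `size` samples.
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, size)]

def rms(frame):
    # Root-mean-square energy of one frame.
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def zcr(frame):
    # Zero-crossing rate: noise crosses zero often, a low tone rarely.
    return sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / len(frame)

def estimate_vot(signal, rate, frame_ms=5, energy_thr=0.05, zcr_thr=0.2):
    """Rough voice onset time: first loud frame (the release burst) to
    first loud frame that is also periodic (the onset of voicing)."""
    size = int(rate * frame_ms / 1000)
    fs = frames(signal, size)
    burst = next(i for i, f in enumerate(fs) if rms(f) > energy_thr)
    voice = next(i for i, f in enumerate(fs)
                 if rms(f) > energy_thr and zcr(f) < zcr_thr)
    return (voice - burst) * frame_ms / 1000

# Synthetic "pie"-like token: 50 ms silence, 60 ms aspiration noise,
# then voicing (a 200 Hz tone standing in for vocal-fold vibration).
rate = 10000
random.seed(0)
signal = ([0.0] * int(0.05 * rate)
          + [random.uniform(-0.3, 0.3) for _ in range(int(0.06 * rate))]
          + [0.8 * math.sin(2 * math.pi * 200 * t / rate)
             for t in range(int(0.1 * rate))])
vot = estimate_vot(signal, rate)  # about 0.06 s, like the "pie" measurement
```

On this synthetic token the estimate comes out at about 0.06 s of aspiration; for a "spy"-like token, with voicing starting right at the release, the same procedure would return a value near zero.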
And then, we'll talk about what it means. So for now, I think we're done unless anybody has any questions about any of this. All right. See you not on Tuesday, but on Thursday. So we don't have class on Tuesday.
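One way to make concrete the earlier point about cavity size and descending pitch is an idealized resonator model. A vocal-tract cavity is not literally a Helmholtz resonator, so this is only a back-of-the-envelope sketch; the dimensions below are made up, and only the qualitative direction (bigger cavity, lower resonance) matters.

```python
import math

def helmholtz_frequency(neck_area, cavity_volume, neck_length, c=343.0):
    """Resonant frequency (Hz) of an idealized Helmholtz resonator:
    f = (c / 2*pi) * sqrt(A / (V * L)), with c the speed of sound (m/s),
    A the neck area (m^2), V the cavity volume (m^3), L the neck length (m)."""
    return (c / (2 * math.pi)) * math.sqrt(neck_area / (cavity_volume * neck_length))

# Enlarging the cavity (50 cm^3 -> 150 cm^3) with the same "neck"
# drives the resonance down -- the descending pitch seen in the spectrogram.
f_small = helmholtz_frequency(neck_area=3e-4, cavity_volume=50e-6, neck_length=0.02)
f_large = helmholtz_frequency(neck_area=3e-4, cavity_volume=150e-6, neck_length=0.02)
```

Tripling the cavity volume divides the resonant frequency by the square root of three: the same energy in a larger space resonates lower.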
Lecture_17_Syntax_Part_7_and_Semantics_Part_1.txt
[SQUEAKING] [RUSTLING] [CLICKING] NORVIN RICHARDS: Today, we are going to finish syntax-- not in the sense that you will now know everything there is to know about syntax. People spend their lives studying syntax. In fact, that's what I'm doing. But you'll know enough for 24.900. And if you're interested in these things, there is 24.902, which is all about syntax-- more of an introduction to syntax. So I was in the middle of talking, last time, about shortest move, and so I'm going to back up a little bit and we'll talk through the cases that I was talking about last time. Before I do that, though, I wanted to say, I got a really interesting question after class last time from someone who basically wanted to know whether it could be shown that the kinds of universals that I was showing you-- observations like there's wh- movement to the left, but there isn't wh- movement to the right, or there's V2, but there isn't anti-V2-- somebody wanted to know whether it could be shown that the absence of these things was statistically significant. That is, if you look at the languages of the world, would we expect to find anti-V2? Is V2 common enough that it's surprising that anti-V2 is absent? So I am working on working up some stats, and I will probably send out an announcement about that stuff later. So keep your eyes peeled for that. In working up some stats, I'm having a look at a website, which I can recommend. It's called WALS-- oh dear, I was about to write the URL, and then I realized I don't know how. If you Google WALS-- WALS stands for the World Atlas of Language Structures. And I think it's something dumb like WALS.com or something like that. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: WALS dot what? AUDIENCE: Dot info. NORVIN RICHARDS: Info, huh? No wonder I don't know the URL. Language Structures-- Thank you.
It is a typological database, gathering typological information about a bunch of different languages. So it has thousands of languages-- observations by various linguists who work on them, categorized by various phenomena. So there are all these parameters that you can look up. You can find out how many of these languages have the verb before the object. How many of them have the object before the verb. You can get it to make you a map of where these languages are and all that cute stuff. And they take a stab at having a typologically balanced, genealogically balanced survey, I guess. So the languages are not all Indo-European. They're from all over the world. There's just all kinds of stuff there. Some of the generalizations that I was showing you-- it's trivial to show that they are statistically significant. So, for example, according to WALS, roughly a third of the world's languages have wh- movement to the left. That is, they're like English. And the other 2/3 have wh- in situ. That is, the wh- phrase goes wherever a wh- would normally go; it doesn't move. So when I told you that there is wh- movement to the left and there isn't wh- movement to the right-- I'm not a mathematician, but that looks statistically significant, given that they're studying thousands and thousands of languages. It's surprising if you thought that wh- movement to the right should be just as common as the movement to the left. The absence of that in any of the languages we've studied is surprising and significant. But as I say, I'll write up some stats and send them to you. Questions about that or anything else that we did last time? OK. This-- as I said, we had gotten started on this, and I'll just back up and go through this again. We were talking last time about restrictions on movement.
So I've given you the beginnings of some reasons to believe that there are cases where forming a tree for a sentence, forming a syntactic structure, isn't just a matter of merging things to each other, but that, sometimes, you take something which has already been merged and you move it someplace else. And we've talked about different kinds of movement. And last time, towards the end of last time, I was starting to show you examples of kinds of movement that you couldn't do. So I think one of the examples we talked about: you can say things like "She ordered a hamburger and fries," but if you were to take "fries," replace it with a wh- phrase, and try to wh- move it, giving you "What did she order a hamburger and?"-- even if you spell "did" correctly, which is surprisingly challenging!-- the result is very bad. So I showed you some examples like that, where if you arrange to put a wh- phrase in a certain kind of structure, it somehow can't escape, calling those structures islands. There are some kinds of wh- movement that you just can't do. Does this sound familiar at all? We were doing this last time. And then I said, so there's a big literature on syntax identifying these islands and trying to figure out what are the structures that you can't move out of, and then there are attempts to develop theories of why certain things are islands. Why are certain kinds of things opaque to extraction? And I wanted to show you one theory of one particular set of examples like this, examples of places that you cannot move out of, because, well, the explanation for why these particular things are opaque is particularly straightforward. So, old observation, actually: a number of the restrictions on movement that people have found can be unified into a single condition that has been called various things and formalized various ways. But today, I'll call it shortest move.
It says, basically, if you have a choice between two movement operations, you should pick the shorter one. So pick short moves over long moves. And I gave you this definition of "short." So one of the things people do in this literature is try to figure out exactly how we ought to define "short." One way to define it would be to say: one movement is shorter than another movement if it crosses a smaller number of nodes. So you start off dominated by a certain number of nodes, and you move to a position where you are not dominated by some of the nodes that used to dominate you. Let's call that your path. So the path is the set of nodes that dominate the position the moved item came from and don't dominate the position where it lands. And what we're saying is you want your path to be as short as possible. There are various ways to formalize this, but one is: if you're comparing two moves, count the nodes in the paths, and pick the one that has the smaller number. Another way to formalize it is to say, this only works if you're comparing two moves that have overlapping paths, and you are supposed to pick the one whose path is a subset of the other's. So that's a story to tell here. I'll show you some examples. The first example, which I showed you last time, was the head movement constraint. So you can move heads, but only over very short distances. So English forms yes/no questions by moving T into C. So you can ask questions like "Will Mary type novels?" where you take what's in T, which is "will," and move it into C. That's a way to form that kind of yes/no question. But I said, you do not, in English or in any language, form yes/no questions by moving the verb into C past the auxiliary. So English can't ask questions like "Type Mary will novels?" This is out. I'm showing you this with yes/no questions in English. I could have shown it to you with V2.
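The path definition just given can be written out directly for "Will Mary type novels?" The tree encoding below is a simplifying assumption (each node mapped to its parent, with ad hoc labels like DP-subject for the specifier of TP), but the two paths it computes are the ones under discussion: T-to-C versus the illicit V-to-C.

```python
# Simplified tree for "Will Mary type novels?", as a child -> parent map.
parent = {
    "C": "C'", "TP": "C'", "C'": "CP",
    "DP-subject": "TP", "T'": "TP",
    "T": "T'", "VP": "T'",
    "V": "VP", "DP-object": "VP",
}

def dominators(node):
    """All nodes properly dominating `node` (its ancestors)."""
    out = set()
    while node in parent:
        node = parent[node]
        out.add(node)
    return out

def path(source, landing):
    """Shortest-move path: nodes that dominate the source position
    but do not dominate the landing site."""
    return dominators(source) - dominators(landing)

t_to_c = path("T", "C")  # moving "will" from T into C
v_to_c = path("V", "C")  # moving the verb into C past the auxiliary
```

Here `t_to_c` comes out as {T', TP} and `v_to_c` as {VP, T', TP}: the first path is both smaller and a proper subset of the second, so T-to-C counts as the shorter move on either way of formalizing "short."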
So the same thing holds in V2-- verb second, the German phenomenon, found in many other languages-- in which something has to be in C. And what happens is, whatever head is highest moves into C. You do the shortest move. In German, you move the auxiliary, and if there is no auxiliary, you move the main verb. If there are multiple auxiliaries, you move the highest one. So this is one example of shortest move. Oh, I did it again. Sorry, I haven't fixed this yet. The path from I to C, which is really the path from T to C-- sorry, the path from where "will" is to where C is-- consists of T bar and TP. That is, those are the nodes that dominate T and don't dominate C. But the path from V to C consists of VP, as well as T bar and TP. Do people see that, even though I wrote the wrong things on the slide? So please mentally search-and-replace I with T on this slide. I'm using an old term, IP, which I meant to go back and fix. There are no IPs anymore. So this is an example of how you use paths to explain to yourself what you mean when you say that one move is shorter than another. As you can see, this is also a case in which one of the paths is a subset of the other. And then the second kind of case, which we talked about last time-- and this is one of the reasons that this is only as far as we got, because this case is a little shakier than the first one. The head movement constraint is extremely strong. If you violate the head movement constraint, then your sentences are almost uninterpretable. Superiority-- there are all kinds of complicated semantic things going on that allow you to sometimes violate superiority. But, at least in some cases, out of the blue, if I ask you "Who bought what?" that's better than asking "What did who buy?" And I was getting general agreement with that last time. One of the things I said last time, and I think this is true, is that superiority judgments are particularly strong if you are talking about-- if there are only two answers.
So wh- questions-- multiple wh- questions, what you're asking for is a list of pairs, usually. So if I ask you "Who bought what?" the answer is going to be, well, you know, "Sally bought chocolate, and Bill novels, and Mary bought a computer." So it'll be a list of people and things, such that the people bought the things. Superiority-- we talked about this last time-- but superiority judgments are strongest when there's only one pair. So if you see two people fighting, you might ask "Who hit who first?" Compare that with "Who did who hit first?" For me, at least, that's a particularly strong judgment. Are there people who prefer one-- let's see. Let's see a show of hands, who prefers the first of these to the second? Who prefers the second to the first? Who finds them both fine and is not sure why some of you raised your hands the first time? So there are a few of you who don't care whether there is only a single pair. But for many of you, this one is better than this one. There's something wrong with this. So this is a phenomenon which is not as strong as the head movement constraint but resembles the head movement constraint, in that, there's a preference when you've got a choice between two things which of them to move. There's a preference for moving the higher one, the first one. And then here's the case number three, which we didn't get to last time. These are what are called wh- islands. "Island," again, is a name syntacticians use for these positions that you can't move out of. There's this metaphor, I guess, that involves wh- phrases not being able to swim. If they're stuck on an island, then they're stuck. So here's a sentence with an embedded wh- question. "I wonder what Bill gave to Fred." So this whole sentence is not a question really. It's a statement. I'm telling you something about my mental state. My mental state is one in which I don't know the answer to a particular question. And that question is, "What did Bill give to Fred?" 
So there's wh- movements happening in the embedded clause. What has moved to the edge of the embedded clause. Is that clear? Does that make sense? That's what's going on in this sentence. Now observation about embedded questions. If I ask you, "Who do you wonder what Bill gave to?" you are upset. So what did I do. Well, I turned Fred into a wh- question, a wh- word, zap. Now Fred is who, and I moved "who" out of the embedded clause into the matrix clause. "Who do you wonder what Bill gave to?" where the answer is, well, I'm wondering what Bill gave to Fred. It ought to be able to mean that, but it really, really can't. There's something badly wrong with this sentence. So this is called a wh- island. A wh- question, like "what Bill gave to Fred," is an island. You can form that. That's what the first sentence says. "I wonder what Bill gave to Fred." But you can't wh- extract out of it. The proposal people have made-- well, look, there are at least two possible things that could be wrong with the green arrow there, the one that's going from the embedded clause into the matrix clause. One is, here I am in the matrix CP, and I'm trying to decide what to move. And there are two wh- phrases. There's "what" and there's "who." I should really have a tree here. Let me draw a quasi tree. To "wonder"... Here we have "what." Then, I'm out of words, so I'll just use a triangle here, to who. So this is "You wonder what John gave to who." I know my handwriting is terrible, but can people see that that's what that tree is meant to be? So this is the tree for what you-- for the second sentence, this is going to be my attempt to draw the second sentence. So "Who do you wonder what Bill gave to?" So we've got-- we can fill in some of the gaps here. "Bill gave (blank) to who." So we first moved "what" into the specifier of the embedded CP. And now, this is one of the few times in my life I wish I had colored chalk. Now we're going to "who," and we're going to move it up to here. 
And there's something wrong with this. Try to figure out what's wrong with it. The shortest move story about what's wrong with it goes like this. Well, actually, there are two things that you could blame this on. One is, when we're here and we're trying to decide what to move into this position, well, at this point, there are two wh- phrases, "what" and "who." And "what" is higher. So maybe one thing that's wrong with it is, basically, superiority over again. You've got two wh- phrases. You have to pick which one to move. You should move the higher one, if you didn't. So that you didn't pick the shorter move. And the other objection you might have to this green arrow goes like this. The wh- movement is to the spec of CP. Here's "who" sitting here. If you're going to move "who," well, you shouldn't move it here. You should move it here. Sorry. Let me do that without putting my body in the way. You shouldn't move it here. Here's a spec of CP to which it's going to land. But really what you should do is move it here. And you can't, because there's a wh- phrase here. This phrase is stuck. So the ill-formedness of the sentence, the star, is this feeling you're having trying to satisfy multiple conflicting requirements. There's a Greek tragedy going on here. "Who" absolutely needs to land here, but there's "what" in the way. And so, "who" wants to kill its father and marry its mother, and then, bad things happen. You can't do everything that you want to do. So this wants to make the shortest possible move and it can't. And it's unhappy and it pouts. And that's got the shape of a star. That's a story, anyway, about why wh- islands are islands. Do people see the problem? That, that sentence is bad. And it's not like it's unclear what it means, but you just can't say that. Yeah? AUDIENCE: How would you say it? NORVIN RICHARDS: You can't. "Who do you wonder what Bill gave to?" 
The closest you can come to saying it in English is to say things like, "Who is it that you're wondering what Bill gave to him or her." So sometimes, for some people, putting in a pronoun improves these sentences. For me, the effect is not so strong. There isn't a very good way to say this. There are languages in which this is OK, which is an interesting-- there are questions about why. But in English, not so much. Yes? AUDIENCE: What about "What do you wonder Bill gave to who?" NORVIN RICHARDS: "What do you wonder Bill gave to who?" So the point-- so one of the things I just said was, when we're asking what went wrong with the sentence that I've got up there, "Who do you wonder what Bill gave to?"-- one thing that you could blame it on is, you moved "who," and "what" is closer. What you should really have done is moved "what." I don't know about you guys, but, for me, "What do you wonder Bill gave to who?" I can't say. So this is another one of these Greek tragedy things. Shortest move really wants this to be the thing that moves. But the result is that, let's think about what wh- movement does. It makes questions, right? So when you move "what" here, you're making this into a question. You wonder what Bill gave to Bill-- to Fred or whatever. If you move it up here, you're making this into a question. And maybe what we're seeing is that you can't do both. And we could demonstrate that without doing an wh- island. So it's also bad, at least for me. I'm going to star this. It's also bad, at least for me, to say things like what do you wonder Bill gave to Fred. This is also out. So it's just independently impossible for "what" to start here, move here, and then move here. It's as though "what" can't be-- you're attempting to form two questions with a single wh- word, and you can't. AUDIENCE: I thought the last one is OK. NORVIN RICHARDS: "What do you wonder Bill gave to Fred?" AUDIENCE: No. 
You could say "What do you wonder whether Bill gave to Fred"-- that's more normal to me, but also [INAUDIBLE]. NORVIN RICHARDS: Yeah. For me, at least, I can't say this. But I agree with you, that if I were to make it "What do you wonder if Bill gave to Fred"-- that's still bad, but it's better than this. That's my feeling about it. Joseph? AUDIENCE: "Wonder" replaced with "think" seems perfectly fine. So [INAUDIBLE]. NORVIN RICHARDS: So notice-- good point. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Right, that's it. So "wonder" selects for an interrogative CP. You can say "I wonder who left," "why they left," "where they went," or "whether they left." There are various things you can say, but they're all questions. So "wonder" selects for a question. And "that she left" is no good. "Think" is the opposite. You can say "I think that she left," but not "I think whether she left," or "I think who left"-- things like that. Yeah, so you're absolutely right. If I make that "think"-- "What do you think Bill gave to Fred?"-- then "what" is going to be able to move up here, because "think" doesn't care; in fact, it doesn't want this to be a question. But "wonder" does want this to be a question, with the consequence that if "what" goes here, it's as though it's stuck. It's got a job. It needs to make this a question. If it moves, then it isn't a question anymore. Something has gone wrong. So we have two conflicting things. You have shortest move, which says, in this example, the thing that moves up to here should really be "what." And then, you have "wonder," which says, "No, 'what,' you have to stay here. You have a job." And these two things can't be reconciled. Notice that we are out of the domain of optimality theory here. So if you remember optimality theory, back when we were doing phonology, we were talking as though, sometimes, when you have two things and they're fighting, you just try to find some middle ground.
You find the thing that's best or you decide who's more important. That's not the way I usually settle fights between people, but apparently, in optimality theory, that's what you do. You decide which of these requirements is the most important one. The way I'm talking here is not that. This is more like, here you have two conflicting constraints and so you die. You can't do anything. There's no way to make everybody happy. So this has been an exercise in, well, first, showing you some other places where movement is barred, and then, also, giving you just a hint of a very large and interesting literature in which syntacticians look at a variety of phenomena and try to unify them. So head movement constraint, and superiority, and wh- islands are all things that were discovered independently, at different times, by different people. And then another generation of syntacticians came along and said, no, these all belong in the same box. And that's where we are at this point. Lots more work like this to talk about. But this is 24.900, and although I am a syntactician-- I looked at the syllabus, by the way, and we're actually, more or less, on track. I was astonished. We're supposed to stop talking about syntax today, and so, we will. One more thing for me to show you about syntax. It's actually kind of connected to this, to Joseph's point. Here's a sentence. "She will say that we should buy kumquats." Questions about this sentence? Does this make sense? Did I put anything in here that's surprising or disturbing? I think I have a new thing in T, which is "should," but maybe that's not so alarming. And now what happens if we change "kumquats" into a wh- word, stop, and we wh- move it to the beginning of the sentence. Well, so that's me turning "kumquats" into "what, and also turning the TP into a triangle, because I was running out of room. And then we're going to move what into the matrix clause, so we'll get what will she say that we should buy. 
So we've done wh- movement of "what," and we've also done head movement of "will" from T into C. I haven't drawn an arrow for that. And that's how we form that wh- question. But given everything I just said about shortest move, you might worry. You might say to yourself, but wait, isn't that CP a closer place to land-- that lower CP, "that we should buy"-- isn't that a closer place to land? After all, wh- words move to the specifier of CP. Here is an example of where there are two CPs. So what should we do? And there are various theories you can have about that. One would be to say, yeah, a wh- phrase moves to the specifier of CP, but it's specifically interrogative CP-- they form questions-- so not that kind of CP. But another kind of theory we could have would be to say, no, shortest move is correct. The fact that "what" can, in principle, land in the specifier of CP means that it must. So it's not possible to do the arrow-- bless you-- the longer arrow that I showed you on the previous slide, where "what" just starts down there and moves in one hop up to there. Movement is always-- the name for this is successive cyclic. It's always in little hops. You can't move from one CP, out of that CP, into another CP, without stopping at the edge of the CP that you started out in. This is something that was proposed by Noam Chomsky decades ago-- and when he first proposed it, it was met with widespread derision. People were like, "Just because of the shortest move constraint, you're going to say that we're doing all these little hops?" And then, there was immediately a flood of really good evidence that this is true. It really does work this way. I'm going to show you one piece of evidence that this is how wh- movement works. It's going to be kind of involved, so I'll try to go through it slowly. Did you have a question? AUDIENCE: Yeah. So if we moved it up to where the CP is, then it would be like, "She will say what that we should buy." I feel like that sounds weird.
I would say like, "She will say what we should buy," or something-- I don't know. NORVIN RICHARDS: Yes. No, you're right. So "say"-- and we talked about this-- there are verbs, like "wonder," that need a question. There are verbs, like "think," that need to not have a question. "Say" can have either one. So you can say, "She will say that it is raining," or "She will say who left." So "say" has both options. It can have either a question or a statement. But actually, the evidence that I'm going to show you about successive cyclic movement shows that it actually doesn't matter whether the higher verb can select for a question or not. That even for verbs like "think," that have to select for statements, you get successive cyclic movement. So you're raising a very good point, which is-- I'm sorry, I'm going to hijack your point. This movement, this first movement, it's not driven by the need to form a question. It's not that this wh- phrase is moving here to make that embedded clause a question. It's just moving there. And then there's this question about why, like why does successive cyclic movement happen. But what I'll try to show you now is that it does happen. And so, we need to get used to it and try to understand why. But we need a theory of why, because it's true. It does happen. I haven't yet shown you any evidence that this is true. So far, all I'm doing is asserting through the force of my personality that this is how wh- movement works. Let me now show you some evidence. The evidence involves-- do we have evidence that this is right? Yes. The evidence comes-- there's lots of evidence for this. But I want to show you one of my favorite pieces of evidence. It comes from a language called Dinka, which is a Nilotic language. It's spoken in South Sudan and in lots of diaspora communities, including a big community here in Boston, by about 2 million people. Its phonology is lots of fun. We did a field methods class on Dinka a few years ago.
It has all kinds of shenanigans it gets up to with vowels. Actually, let me see, I can play you these shenanigans in a way that you can hear. And I will see whether this works or not. So it has long and short vowels, which, itself, is not all that unusual. But it has three different levels of vowel length. So there are short vowels and long vowels and really long vowels. So here is a minimal triple. You've got "mouse" and "charcoal" and "pieces of charcoal"-- charcoal both singular and plural. So here's the word for "mouse." Let's see if I can play it so you can hear it. No, I can't. [SPEECH SOUND] Oh, I think I know what happened. I will put these-- [SPEECH SOUND] if people will be quiet, they may be able to hear this. [SPEECH SOUND] And here is the longer version. [SPEECH SOUND] That's a longer vowel. And then we also, somewhere around here, have a really long vowel. [SPEECH SOUND] There. I'll put those sound files on the website so that you can hear them. Basically, the word for "mouse" is [SPEECH SOUND]. The word for "charcoal" is [SPEECH SOUND]. And the word for "pieces of charcoal" is [SPEECH SOUND]. It also has tone. And it has a contrast between what are called creaky and breathy voice. And I'll play those sound clips for you as well. Actually, here, let's see if I can get this to work. No. No. No. No. No. No. I'm sorry. My theory is that this was not working because I didn't have it plugged in when I opened the program. Let's see whether that's true. Doo. Doo. Doo. Doo. Doo. Ah. [SPEECH SOUND] There we go. [SPEECH SOUND] So that's creaky voice. That's the word for "berries." Here's the word for "women." [SPEECH SOUND] [SPEECH SOUND]. So you do fancy things with your vocal cords when you speak this language. Different dialects of Dinka have different numbers of voice qualities, but in this dialect, there are two. There's creaky voice, where your vocal cords are under a lot of tension, I guess, as you're speaking.
[SPEECH SOUND] [VOWEL SOUND]. So that's the word for "berries." Here's the word for "women." [SPEECH SOUND] [SPEECH SOUND] where the voice is breathy. There's actually a more minimal pair in here somewhere, but I'm not finding it. Never mind. So that's what Dinka sounds like. It is lots of fun to try to study, partly because a lot of its morphology is tonal-- it involves changes to tone, or to creaky or breathy voice, or to vowel length. So you can see, for example, that the word for "charcoal" forms its plural by lengthening the vowel, which is already long, and also making it low tone, so it changes from [SPEECH SOUND] to [SPEECH SOUND]. We lived in fear that we were missing half of the morphemes. [LAUGHING] So that's enough about Dinka phonology. Let me tell you some things about Dinka syntax. Dinka is a V2 language. So you can see some examples here. If you want to say things like "Cán will buy Bol some clothes in the town," there's a position at the beginning of the sentence that has to get filled by some phrase. It can be filled by "Cán" or by "town," or by "clothes." You can pick any of these things and move them into the first position. That phrase is then followed by an auxiliary in all of these examples. If there isn't an auxiliary, then you get the verb in that position. And then you have the rest of the sentence. Dinka is unlike German in one or two ways. German doesn't have creaky and breathy voice, for one thing. But also, the verb in German, we saw, goes at the end of the verb phrase. The verb in Dinka goes earlier in the verb phrase. It's head-initial. It's followed by its complement, with some complications, as we'll see. So you can say "Cán will Bol buy clothes in the town," or you can say "Clothes will Cán Bol buy in the town," or you can say "town"-- you can take any of these phrases and move them into first position.
Dinka is also different from German in that there's a morpheme in Dinka that tells you the number of the thing you've put in first position. So you can see that the auxiliary in the first and the third examples starts with one version of "ah," and in the second example starts with a different "ah." The "ah" in the second example is indicating that the clothes are plural-- that the thing in first position is plural. So German doesn't have that. There are other alternations that you can see there, which I won't try to talk about. Oh, well. You can also see there's a difference: the auxiliary in the first example is [SPEECH SOUND]. That tells you that the thing in first position is the subject, whereas the auxiliary in the second and third examples starts with b, telling you that the thing in first position is not the subject. So there's all this morphology, and as you can see, it's tricky to keep track of. So it's V2. The last example demonstrates this. So it's ungrammatical to just have nothing in first position. You have to put something in first position. V2 language. Now there is another position in the Dinka clause that has to get occupied. So if you want to say in Dinka, "I saw a giraffe," you can say, [SPEAKING DINKA]. No, sorry, [SPEAKING DINKA], so the C is breathy. You've got "I," the word for "I," in first position here; it is satisfying the V2 requirement. And throughout, I'm going to make the things satisfying the V2 requirement-- things in the specifier of CP, things satisfying that need for there to be something in first position-- blue. There is another position that has to get occupied, though. So you can say "I (auxiliary) giraffe saw." That's how you say "I saw a giraffe." You can't leave "giraffe" after the verb. Basically, the generalization is that if there is an NP object, the NP object has to go before the verb. Everything else goes after the verb. But the NP object goes before the verb. So there's another position down there.
I won't try to talk about how we figure out where that is in the tree. That won't matter for what I'm about to show you. But I want you to know about it, partly because it will make things slightly less confusing, I hope. If you have two objects-- like you say things like, "I gave Ayen a book"-- you can choose either of them to go in that preverbal position. So you can say either "I have a book given Ayen," or "I have Ayen given the book." One of those two objects has to go before "give." You can't leave them both after "give." I don't think I have this anywhere on the slide, but you also can't put them both before "give." So it's a V2 language. Something has to be in first position. And there's another position right before the verb that also has to get occupied, if there is a noun-phrase object. If there is no noun-phrase object, then nothing goes in that second position-- that lower position, the red position. OK so far? This is all set up. So Dinka has two positions in the clause that absolutely have to be filled. There's the specifier of CP-- so it's a V2 language. The blue position, the position at the beginning of every clause, has to get filled by some phrase. And Dinka is V2, so it can be filled by a variety of different phrases. And then there is another position that's right before the verb that also absolutely has to get filled. So if you say in Dinka, "Yaar told Ayen that Bol sent Deng to the cattle camp"-- the cattle camp is where the cattle are. Young people go to the cattle camp. Our consultant told us that the cattle camp is where young people get to know each other. So men and women mostly don't mix, but at the cattle camp, they do. So Yaar and Ayen-- I'm trying to remember-- "Yaar" and "Ayen" are women's names. And "Bol" and "Deng" are men's names. I think "Bol" means "a person who is born after a set of twins." It's apparently a really common name. Twins, I guess, are really common among the Dinka.
"Ayen" is a name for a color, just like a-- yeah, anyway. So here's a sentence. The main clause and the embedded clause both have their subjects in the first position. So you've got Yaar before the auxiliary in the main clause and you've got Bol before the auxiliary in the embedded clause. And in the main clause, you've got Ayen, the person who's being told, going before the verb "told," filling that red position. And in the embedded clause, you've got Deng, the person who's being sent, preceding the verb, filling that red position. OK so far? So now a little bit more Dinka. Here's the thing: if you wh- extract-- so suppose you want to ask, "Who did Yaar tell Ayen that Bol sent to the cattle camp?" So here we've changed Deng into a wh- word, and we're going to move it into the main clause. So the object of "send" is going to be a wh- word and it's going to go in the main clause. The whole thing is going to be a question. Here's the observation. If you do that, all of those positions that normally absolutely have to be filled have to be empty. So "who" started off as the object of "send." And so that rightmost red position, the one that "Deng" used to be in, is empty. That's not so surprising. That's where "who" came from. But what's interesting is that the blue position right above that, the one before the auxiliary of the embedded clause-- the one that was filled by "Bol" in the first example-- can't be filled by "Bol." "Bol" has to not be there anymore. And similarly, the red position before "tell," the first red position in the second example there, the one that's an empty box, has to be empty. Normally it would be filled by "Ayen." That's what it's being filled by in the first example. But if you wh- move across it, it has to be empty. So these positions that ordinarily absolutely, absolutely have to be filled-- you cannot leave them empty-- suddenly have to be empty if you're moving a wh- phrase across them.
Well, we can understand why if we're willing to believe in successive cyclic movement. We just have to be willing to believe that wh- movement is obliged to move not in one mighty leap, but in a series of little hops. It has to land in all of these open positions. And that's why they can't be filled the way they normally would be if you're doing long wh- movement across them. So there's an explanation for why these positions are emptied out. I'll show you one other cool thing about Dinka and then I think we're done with Dinka. Dinka has a distinction between singular and plural versions of wh- words. So there's "yeŋà," which is the word for "who," and then there's "yèyíŋà," which is the word for "who" plural-- "who all." English doesn't have a word for "who" plural, but plenty of languages do, and Dinka is one. If you ask a plural question, you put that "who" at the beginning, in the spec of CP of the clause that you're in. But you're required to leave behind the plural morpheme in the red positions, the positions before the verb. So "Who all did Bol see?" You've got this "who" plural in first position. But you've also got this "ké" that goes in the red position, the position that's before the verb. If you wh- extract something from the verb phrase, like an object, you have to leave this plural morpheme behind. So interesting fact: if you do a longer question than that, like "Who all do you think that Bol saw?"-- so you've got "who" plural at the beginning, and you've got "ké" in both of the red positions along the way. As you wh- move up the clause, it's like you leave behind this little plural thing as you move along. So you get multiple "ké"s as you go up. "Ké" sort of resembles the third-person plural pronoun, which is [SPEAKING DINKA]. So that's the word for "they." So we figure it's somehow related to this.
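The pattern described here-- a wh- phrase hopping up one clause at a time, and a plural wh- phrase leaving "ké" in each preverbal position it passes through-- can be sketched as a toy model. The clause structure and the marker come from the Dinka facts above; the function itself is purely illustrative, not an analysis anyone has implemented.

```python
# Toy model of successive cyclic wh- movement: the wh- phrase moves up
# one clause at a time, and a plural wh- phrase leaves "ké" behind in
# each preverbal (red) position it vacates along the way.

def extract_wh(clause_boundaries, plural=False):
    """clause_boundaries: how many clauses the wh- phrase moves through.
    Returns the marker left in each red position along the way."""
    markers = []
    for _ in range(clause_boundaries):
        # one "little hop": land at the edge of this clause, then move on
        markers.append("ké" if plural else "")
    return markers

# "Who all do you think that Bol saw?" crosses two clauses,
# so "ké" shows up in both red positions along the path:
assert extract_wh(2, plural=True) == ["ké", "ké"]
# A singular wh- word leaves those positions empty instead:
assert extract_wh(2, plural=False) == ["", ""]
```

The point the sketch makes is the same one the "ké" data make: the markers appear once per clause crossed, which only follows if movement actually lands in each clause along the way.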
So Dinka gives us some evidence for this idea, this wacky idea of Chomsky's from the '70s, that wh- movement has to be successive cyclic-- that it's not possible to just move in one big jump. You have to move in a bunch of little hops. So Dinka has a detector for landing in positions-- two detectors, actually. One is this "ké" thing. And the other is just whether certain positions are filled or empty. So these positions that normally have to be filled have to empty themselves out if you're going to wh- move through them-- by hypothesis, because the wh- word is landing in them and moving on the way up. One piece of evidence. There are zillions of pieces of evidence for this. This is one of my favorites. Now we are done with syntax. Let us have a moment of silence for syntax. Does anybody have any questions about syntax before we leave syntax behind and begin talking about something else? So if you are looking at the syllabus, there was going to be a day on dialect. I will move that day to the end of semantics. So we're now, more or less, caught up with the syllabus. We're going to spend some time doing semantics. And then we will have a grab bag of topics after we're done with semantics, on various things. Any questions about syntax before we leave syntax behind? All right. Let us start in on semantics then. So we've done morphology, which is the building up of words out of morphemes. We've done syntax, which is the building up of sentences out of words. We're now going to do semantics, which is the study of how you make meanings out of smaller meanings. That's what semantics is all about. There's a going hypothesis, which is something like this: if you completely understand the meanings of the various parts of the sentence-- the various morphemes that combine-- then there ought to be a simple set of rules for how to combine meanings that would give you the meaning of the sentence. This idea is called compositionality.
It's the idea that we should pay a lot of attention to specifying very completely what particular morphemes mean-- that if we do that, we can have simple rules to generate the meanings of sentences. So we're going to do some semantics. I'm going to introduce you to semantics. And semantics is going to be like the other topics we've talked about in this class, in that I hope to tell you enough about it to make it sound interesting. And then I will abandon you and let you study it on your own. There is an undergrad intro semantics class that I encourage you to take, if you find the stuff that we're going to talk about interesting. There's lots of interesting work in semantics. So let's talk about some meaning relations-- some classics, like meaning relations involving words. So we can say that words are synonyms if they seem to mean the same thing-- words like "purchase" and "buy." And words are antonyms if they seem to mean opposite things. This is an antiquated example. I should fix this. So "male" and "female." But there are fancier things to say about meaning. So let's consider the various ways someone might want to refer to me. People usually say at the beginning of classes-- "It doesn't matter to me what you call me." Some people call me Professor Richards or call me Professor. Some people call me Norvin, which is my first name. If you can't remember my name, there are other things you might want to call me. [LAUGHING] All right. There's a variety of things that you might say about me, depending on the circumstances. And to say that these are all things that you can use to refer to me is to say that, usually, if you have a sentence with one of those expressions in it, and you change just that expression to another of them, the meaning of the sentence won't change. In particular, if it was true before, it'll still be true. If it was false before, it'll still be false. So here's a sentence.
"Professor Richards is from Alabama." That sentence is true. I am from Alabama. And if you were to switch out the red expression for any of the other red expressions, the sentence would still be true. So to say that these all mean the same thing-- to say that "Professor Richards" and "Norvin" and "that guy with the beard" can all refer to the same person-- is to say that if the first sentence is true, then changing from one of these to another one is not going to change that. If it's false, it's not going to change that. Sound right? We have to be careful, because it's possible to get a little fancier with the meanings of words and phrases. People sometimes talk about what's called the "intension" and the "extension" of the meaning of a phrase or a word. Maybe the easiest way to think about this, at least for me, goes like this. If you ask what the intension of a word is, what you're asking me for is the procedure that you will use-- I'm going to give you examples of this in just a second-- the procedure that you will use to determine what that word or that phrase refers to. Whereas the extension of a word or phrase is the value, the thing you will get if you apply that procedure-- the value of that function. Let me show you what the heck I'm talking about. So the phrase "the President of the United States" has an intension and an extension. The intension of that phrase is the procedure that you should use to find out who that phrase refers to. So what you should do is find out who won the most recent election for president. That's the intension of that phrase. The extension of that phrase is, well, what you'll get if you apply that procedure, which, right now, is Joe Biden. Similarly, the phrase "the current temperature" has an intension and an extension.
The intension is the procedure we will use to find out what the current temperature is, like go look at the thermometer, go on to weather.com, or whatever. And then it has an extension, which is, well, whatever it is-- 45 degrees, let's say. So I said before, when we say that these phrases mean the same thing, to say that is to say that you could take a sentence that's true and switch one of the phrases and still have a sentence that's true. We have to be careful, though, about things like intensions and extensions. So when we say "The temperature is 45 degrees," or "That guy with the beard is Professor Richards," it's true that "that guy with the beard" and "Professor Richards" are mostly substitutable for each other. But we have to be careful with phrases like "the temperature" and "45 degrees." So "the temperature" has an intension and an extension. And its extension might be 45 degrees, but its intension isn't necessarily, which means that, for example, even if it's true that the temperature is rising, it doesn't follow that 45 degrees is rising-- even if it's true that the temperature is 45 degrees and that the temperature is rising. So here's a place where it seems as though you can't substitute in one phrase for another. That's because "rising" is interacting, specifically, with the intension. It's telling you something about what you will see if you look at the thermometer. It's not telling you something about the extension of the phrase "the temperature." So there are places where you can confuse yourself with intensions and extensions. Let's extend the focus a little bit and talk about sentences. So that's the beginnings of some things we'll have to watch out for when we look at the meanings of phrases. When we start talking seriously about the meanings of sentences, there are some relations between sentences that it's going to be useful to keep track of. So one of them is entailment.
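The intension/extension distinction just described can be modeled as the difference between a function from situations to values and the value that function returns in one particular situation. Here is a minimal Python sketch; the situations and names are hypothetical toy data, not anything from a real semantics library.

```python
# A "situation" is just a dict recording some facts about the world.
situation_now = {"election_winner": "Joe Biden", "thermometer": 45}
situation_1945 = {"election_winner": "Franklin Roosevelt", "thermometer": 45}

# Intensions: procedures for finding the referent of a phrase.
def the_president(situation):
    return situation["election_winner"]

def the_temperature(situation):
    return situation["thermometer"]

# Extensions: the values those procedures return in a given situation.
assert the_president(situation_now) == "Joe Biden"
assert the_president(situation_1945) == "Franklin Roosevelt"

# "The temperature is 45 degrees" is true now, but a predicate like
# "is rising" looks at the intension (how the value changes across
# situations), not at the extension 45 itself:
def is_rising(intension, earlier, later):
    return intension(later) > intension(earlier)

earlier = {"thermometer": 40}
later = {"thermometer": 45}
assert is_rising(the_temperature, earlier, later)
# ...but it makes no sense to ask whether the number 45 "is rising."
```

On this model, "is rising" takes the whole function as its argument, which is why substituting the extension 45 for "the temperature" fails even when "the temperature is 45 degrees" is true.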
So we say that a sentence A "entails" a sentence B if whenever A is true, B must be true. So if John killed the ant, it has to be true that the ant is dead. That's what "kill" means. Yes? AUDIENCE: For this temperature example, we have the intensions and the extensions, and the intension is not fixed, right? NORVIN RICHARDS: Yeah. AUDIENCE: It is 45 degrees now, up to 50 degrees. NORVIN RICHARDS: Sure. AUDIENCE: Next day. NORVIN RICHARDS: Yeah. AUDIENCE: If we said "the winner of the previous election," would that always be substitutable for "Joe Biden"? NORVIN RICHARDS: So-- oh, "the winner"-- so let's see now. Let's go back to Joe Biden. Here he is. It's not always going to be true that "the winner of the previous election" is substitutable for "Joe Biden," because I can say things like "In 1945, the President of the United States declared war on Japan." And that phrase-- it's probably the wrong year-- even if that sentence is true, I don't get to substitute "Joe Biden" in for it. That's a case where it refers to a different person, to the person who won the election previous to that. Am I getting at your-- AUDIENCE: I feel like it could be stickier than that? NORVIN RICHARDS: Probably. This is day one of semantics. But yes, you're right. This is mainly meant to make you wary about expressions like "the meaning of a phrase." So the meaning of a phrase has more than one dimension. You have to be careful about what kinds of predicates it's interacting with. I mean, similarly, if I say something like, "The President of the United States has always been male," that's true. It's also true of Joe Biden, as far as I know. But those aren't the same claim. Rising-- ants-- dead ants. Yes, so entailment-- one sentence entails another sentence if whenever the first sentence is true, the second sentence has to be true. That's what entailment means. Similarly, to say "Norman is Don's nephew" is to say that "Don is Norman's uncle."
The first of those sentences entails the second-- no, it doesn't. Actually, this is a lie! Neither of these sentences entails the other, unless you know that Norman and Don are both male. If you know that Norman and Don are both male, then they entail each other. Otherwise, not. I should fix that. I think this one does work. "If John is here, then Mary is here, and John is here." That first sentence, that long sentence, entails that "Mary is here." So if that first sentence is true, the second sentence is true. So these are entailment relations. Entailment relations are not about whether their sentences actually are true. So to say that sentence A entails sentence B is just to say, if you imagine that A is true, then B is also true. So "Joe Biden is a bachelor" entails that "Joe Biden is unmarried." They're both false, but if the first sentence were true, the second sentence would be true. Does the second sentence entail the first? AUDIENCE: What is a bachelor? NORVIN RICHARDS: An unmarried man. AUDIENCE: If Joe Biden is a man and unmarried he may be a bachelor, but Joe Biden could be a woman for all we know. NORVIN RICHARDS: Right. Yeah. That's an example. And actually, I think I misspoke when I said that a bachelor was an unmarried man. If his wife had died, he would be unmarried, I guess, but not a bachelor. I think a bachelor is someone who has never married. Yeah. Also false. So he's actually married. So entailment relations are not necessarily symmetrical. It's possible for A to entail B, and for B to not entail A. And entailment relations are about what life would be like if the first sentence were true. It's not about whether it is true. And we say that A and B are in an equivalence relation, if they entail each other. We can also say that they're synonymous. So "Mary ate the bagel" and "The bagel was eaten by Mary." The first sentence entails the second. And the second entails the first. These are equivalent sentences. 
And we say that they contradict each other if each entails that the other is false. So as long as "Noam" always refers to the same person, if Noam is here, that entails that it is not true that Noam is not here. And if Noam is not here, that entails that it is not true that Noam is here. I'm waiting for somebody to work hard on the meaning of "here," which I guess we could suppose he's halfway into the room or something. But forget about that stuff for a second. Let's pretend that people are either here or not here. So we say that A and B contradict each other if they each entail the falsehood of the other. Yeah? AUDIENCE: Is it possible for A to entail that B is false, but B not entail that A is false? NORVIN RICHARDS: So for A to entail that B is false, but for B to not entail that A is false? Yeah. So if you go back to this slide, because I feel as though if we arrange things correctly-- yeah. So "If Joe Biden is a bachelor"... nope. What's a better example? So we're looking for a case of a subset relation where all-- AUDIENCE: "If the rectangle is a square..." NORVIN RICHARDS: Is a rectangle and square-- yeah, that's better, isn't it? Yeah, go ahead. Give us the rectangle and square example. AUDIENCE: If you say that something is not a rectangle, that doesn't necessarily mean that it isn't a square. NORVIN RICHARDS: Yeah. AUDIENCE: If it isn't a square, that means it's definitely not a rectangle. NORVIN RICHARDS: Yeah. OK. That's-- AUDIENCE: Other way around. NORVIN RICHARDS: OK, other way around. Good. Yes. AUDIENCE: I think that the logic rules are like, if you have something like "if p, then q," then it follows that "not q implies not p." But you have no idea whether q implies p or not. NORVIN RICHARDS: Yes. Yes. Yes. Yes, that's true. So that is what people say about [INAUDIBLE]. Yeah, you're right. Yes. OK, good. I'm going to move away from entailment and start talking about presupposition.
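The relations in this exchange-- entailment as a subset relation, its asymmetry, contradiction, and the contraposition point the students raise-- can be sketched by treating a sentence's meaning as the set of possible situations in which it is true. A minimal sketch, with a hypothetical three-situation toy world:

```python
# A entails B          iff every A-situation is a B-situation (subset)
# A and B contradict   iff no situation makes both true

all_situations = {"square", "non_square_rectangle", "circle"}

is_square = {"square"}
is_rectangle = {"square", "non_square_rectangle"}
is_circle = {"circle"}

def entails(a, b):
    return a <= b  # subset test: every a-situation is a b-situation

def contradict(a, b):
    return not (a & b)  # no shared situation

# "X is a square" entails "X is a rectangle"...
assert entails(is_square, is_rectangle)
# ...but not vice versa -- entailment is asymmetric:
assert not entails(is_rectangle, is_square)

# Contraposition: if p entails q, then not-q entails not-p.
not_square = all_situations - is_square
not_rectangle = all_situations - is_rectangle
assert entails(not_rectangle, not_square)
assert not entails(not_square, not_rectangle)

# Contradiction: nothing is both a square and a circle.
assert contradict(is_square, is_circle)
assert not contradict(is_square, is_rectangle)
```

Equivalence falls out too: two sentences are equivalent exactly when their situation sets are subsets of each other, i.e. identical.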
Presupposition is another kind of relation between sentences. Again, it doesn't matter whether the sentences are true or not; you're being invited to imagine what life would be like if they were true. So a sentence like "The present King of France is bald" presupposes that there is a present King of France. So this is different from entailment, in ways that I think I illustrate on the next slides. Yes-- crucially different from entailment. So if I say "The present King of France is bald," or if I say "The present King of France is not bald," or if I ask you, "Is the present King of France bald?"-- all of those have this relation of presupposition to "There is a present King of France." That is-- we can stick to the first two-- if I say "The present King of France is bald," or if I say "The present King of France is not bald," for either of those to be true, there has to be a present King of France. That's different from entailment. So for entailment, when we did things like "John killed an ant" and "The ant died," that's an entailment relation. If I say "John didn't kill an ant," well, then it no longer follows that the ant died. It might be alive. So to say that John killed an ant entails that the ant died. And if I negate the first sentence, I no longer entail what I used to entail. But if you negate a sentence that presupposes something-- so if you add "not," if you go from "The present King of France is bald" to "The present King of France is not bald"-- you still have the presuppositions that you used to have. You still presuppose-- it still has to be true that there is a present King of France. Yes? AUDIENCE: Let's say there is no current King of France. NORVIN RICHARDS: Yes, which is true, by the way, if anybody is learning about the world from this class. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Oh. So if there is no present King of France, ah.
If there is no present King of France-- good point-- their truth value becomes difficult. This is sometimes called presupposition failure. So if I say a sentence that presupposes something false-- we're actually going to talk about this, I think, on the next slide-- your cognitive state becomes difficult. So this is a classic of-- it's used in politics and also in comedy. An unfortunate example from the Marx brothers-- the Marx brothers asked at some point, "Have you stopped beating your wife?" The point of the question is that there's no good answer. If you say yes or no, you're accepting the presupposition that you used to beat your wife. That's what this question does, right? The corresponding statement has the same presupposition. So "He stopped smoking" presupposes that he used to smoke. And that means that if you presuppose something which is false-- so if you ask me, "Have you stopped smoking?" you're presupposing that I used to smoke. And if that's false, it's not enough for me to either say yes or no. I inherit your presupposition. I have to say something. I have to say something like, "Wait, I never smoked in the first place. Your presupposition is false"-- something like that. In the literature on Zen Buddhism, which I'm not an expert on, there's a kind of exchange between teachers and students that happens a lot, where a student will ask the teacher a question-- something like, "Does the dog have the Buddha nature?" And the teacher, by way of answer, will haul off and slap the student, or pour cold water on the student, or shout "mu!" and the student is enlightened. This is a kind of story you see a lot in the Zen Buddhism literature. I don't know a whole lot about Zen Buddhism, but I've always wondered whether "mu!" means you are making a false presupposition.
Because it's an interesting fact about languages that we have words like "yes" or "no" that say things like "The statement that underlies your question is true or false." If I ask you "Is it raining?" I'm asking you to evaluate the truth or falsehood of "It is raining." You're supposed to say either "Yes, it's true that it is raining," or "No, it's false that it's raining." If you ask me "Have you stopped smoking?" the statement that you're asking me to evaluate is "You have stopped smoking." And I have "yes" and "no" as my standard options. But I don't have an option that says there is a false presupposition. And most languages don't. It's not common for languages to have a word that means "you have made a false presupposition." And so it's interesting that languages work this way. AUDIENCE: That's interesting, because I think in math, like in formal logic, if you have "if this, then that," and this is just false, then the whole thing is true just by default. And the thing I'm wondering is, does this mean that when you ask a question like "Have you stopped beating your wife?" you're actually asking, "Are all the presuppositions in this true, and is this also true?" NORVIN RICHARDS: So that's just it, though, I'm not asking you whether the presuppositions are true. That's why, if I ask you this question-- let's change it to "Have you stopped smoking?"-- if I ask you "Have you stopped smoking?" you can't say no and have me interpret that as meaning you never smoked in the first place. You're absolutely right that-- this is the kind of thing that robots and aliens get wrong, standardly, in science fiction stories-- that false presuppositions lead to ungrammaticality, or something like that. There's something weird going on. Did I get it, the answer to your question?
So presupposition failure, this case where I say something that has a false presupposition, doesn't make the sentence false; it makes the sentence weird. Yeah, that's the expression, or the feeling that we have. I skipped some slides to get to this one. So let me go back a bit. So "The present King of France is bald" presupposes that there is a present King of France. It entails that the present King of France has no hair, because that's what it means. And so, if I say "The present King of France is not bald," that no longer entails that he has no hair. That's the difference between entailments and presuppositions. Entailments don't survive negation. If you add "not," then you lose the entailments that you used to have. But you still have the presuppositions that you used to have. Yeah? AUDIENCE: Did that say that entails the opposite? NORVIN RICHARDS: Say it again. AUDIENCE: Did you then just say that entailment is the opposite? Like it entails that he has hair? NORVIN RICHARDS: Oh, in this particular case? Yes. I think we're going to find examples where A entails B, but not A does not entail not B. So yeah, if something is a rectangle, then it-- no. If something is a square, then it is a rectangle. Did I get that right? But if something is not a square, that doesn't mean that it's not a rectangle. So yes, that's a case where A entails B, but not A doesn't entail not B. Yeah, good. Phew. And then presuppositions-- yeah, presuppositions are funny. So depending on the presupposition-- so if we hear scratching at the door, I could say, "The cat is at the door," out of the blue. But there's a presupposition there, which is that there is a cat. It's kind of like "The present King of France is bald" presupposes that there is a King of France. There's a process called accommodating presuppositions, and this is why presuppositions are so handy in comedy and in politics.
You make-- you say things that presuppose things, and then people accommodate your presuppositions, sometimes without realizing it. So if I, out of the blue, say, "The cat is at the door," you didn't know before that I had a cat. I hadn't mentioned a cat before. But my sentence presupposes that there is a cat. And so you learn from my sentence that there is a cat, and you add that to your knowledge base. If I were to instead say, "The giraffe is at the door," you'd probably be less peaceful about it. So if I make you presuppose something that's easy to presuppose, you do it without noticing it. It's not like you consciously go through a stage where, if I say "The cat is at the door," you think to yourself, "Ah, he must have a cat-- write down in your mental notebook that he must have a cat." Maybe that's happening on some very fast level, but you're not conscious of it. On the other hand, if I say "The giraffe is at the door," then you have the sensation of having learned something significant. Right, and similarly, "I regret having been born in 1857." If I say that and somebody objects, I say, OK, "I don't regret having been born in 1857." Either of these presupposes that I was born in 1857. And in order for somebody to object, they have to say "No, wait, surely you were not born in 1857." So that's entailments and presuppositions. There are also implicatures, which I think we have time for. The implicatures are things that you might tend to infer from hearing a sentence but that might not be true at all. So, for example, if I ask you a question like "Can you open the window?" there's an implicature that I would like you to open the window. It's not entailed or presupposed. But when you hear me say that, you're likely to conclude that I wish that you would open the window. Or if I ask you, "Where's the salt?" there is a minimally cooperative answer to that in which you point, you know. But what I really want you to do is give me some salt.
Or if I say I am 21, there is an implicature that I am exactly 21-- that 21 is my age. Or if I say "Mary ate some of the cookies," there's an implicature that there are some cookies left. But all of these things could be false, under the right circumstances. So if I ask you, "Can you open the window?" probably what I-- under most circumstances, what I want you to do is open the window. But maybe what I'm doing is making a study of physical fitness in MIT students and I want to know whether you're capable of opening the window. Or "Where's the salt?"-- maybe, in most circumstances, what I want you to do is hand me the salt, but maybe I'm making a map of the kitchen. Or there are circumstances in which I could say, I, personally, could say, I'm 21, namely, when I'm trying to go into a bar or something, and they tell you "You can't come in here unless you're 21." I could say "I'm 21." In fact, I'm 50. But what I mean is, I satisfy the requirement. Do people agree with that? There's another kind of example like this that people give. Imagine that the government is going to have a tax break for people with two children, two or more. And so, I claim this tax benefit on my taxes. And then someone from the government comes by and says, you claimed this benefit. You're claiming to have two children. I say, I do have two children. In fact, I have six. This is something that people seem to think about numbers: that numbers imply exactly the number, but they don't entail exactly the number. They just imply it very strongly. And under the right circumstances, you can convince yourself that what they really mean, what they actually entail, is that number or more. These kinds of circumstances lead people to think that. Yeah? AUDIENCE: I wonder if in languages where instead of saying things like "I am 21," you're saying "I have 21 years," whether it would be more likely that you accept, like, oh yeah, like if you're 50, you have 50, then you probably have 25.
Like maybe, because we think of it as an identity thing [INAUDIBLE].. NORVIN RICHARDS: It would be interesting to study that. So people, when I said that I could say to the guy at the bar "I'm 21"-- yeah. I'm sorry, I'm stepping on what you're saying. You're absolutely right. There are languages out there, like Spanish, in which to say I'm 21, you say "I have 21 years," or something like that. Were people just being polite when I said to-- that I could say to the guy at the bar I'm 21? Did people want to object that would be a weird thing for me to say? Some people think that would be a weird thing for me to say. What should I say? I'm 50. Yeah? AUDIENCE: [INAUDIBLE] like, "I'm over 21." NORVIN RICHARDS: Oh, "I'm over 21." Suppose the guy has just said to me, you have to be 21 to get inside. First of all, notice, he can say that. He can say you have to be 21 to get inside. He doesn't mean we don't allow 22-year-olds, right? He means you have to be 21 or higher. And what I'm saying is I satisfy your requirement. But I take your point. All of you go to bars and try this out. No-- [LAUGHS] Oh, darn, I've been recorded saying that. This is bad. Oh yeah, and "Mary ate some of the cookies" implies that she didn't eat all of them, but maybe she did. So presuppositions are different from this. So I can say things like "I'm 21. In fact, I'm 50." Or "Mary ate some of the cookies. In fact, to be perfectly frank, she ate them all, just so you know." And that's, perhaps, a slightly odd way of telling you that, but I can tell you that. There's nothing too surprising about it. On the other hand, if I say "The King of France is bald, oh, and by the way, there is no King of France," then I am very weird. And I know. I have spent a lifetime researching ways to be very weird. This is one of the best ones. So saying something that presupposes-- saying A that presupposes B, and then, saying oh, and B is false, that's just like a very peculiar thing to do. 
So to summarize, we've talked about three kinds of relations between sentences. So a sentence can have entailments. The entailments of a sentence A have to be true if A is true. Presuppositions have to be true for A to be either true or false. So "The King of France is bald" presupposes that there is a King of France. So there has to be a King of France for it to be either true or false that "The King of France is bald." And then implicatures are things that, yeah, could be true. You might be inclined to think that they're true if A is true, but they might be false. And if I say A and then say oh, and by the way, this implicature is false, you're not distraught. Or to put it another way, if you're asking about one sentence P, is P an entailment, or a presupposition, or an implicature of A, the thing to ask yourself is, "Does P have to be true for A to be either true or false?" If so, then it's a presupposition. "Does P have to be true just if A is true?" If so, it's an entailment-- so it's not a presupposition, it's an entailment. And then "Does P have to be true at all, or is it just something you might be inclined to think?" If the latter, then it's an implicature. I want to practice this for like two minutes. So if I say, "Bill isn't aware that Susan is pregnant," that's a sentence. And now here are two sentences that we can talk about the relationship between-- the first sentence and the second, and the first sentence and the third. So "Bill isn't aware that Susan is pregnant." What's the relationship between that and the sentence "Susan is pregnant"? Is that presupposition or entailment or implicature? AUDIENCE: Presupposition. NORVIN RICHARDS: Presupposition, why? AUDIENCE: Well, because whether Bill is aware or not aware that Susan is pregnant, they both mean Susan is pregnant versus-- NORVIN RICHARDS: Right. That's the way to demonstrate that.
So both the sentence-- for either "Bill isn't aware that Susan is pregnant" or "Bill is aware that Susan is pregnant" to be true, Susan has to be pregnant. It has to be true that Susan is pregnant. What about the second sentence, or last sentence, "You should tell Bill that Susan is pregnant"? That's an implicature. So there could be circumstances under which I would say that first sentence in order to communicate to you that last sentence, but it doesn't have to be true. It's just the kind of thing you might conclude. So that's a presupposition. That's an implicature. We will do much more of this next time. Have a good weekend.
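The decision procedure summarized near the end of the lecture can be caricatured as a tiny function. This is purely illustrative: it assumes we already know, for a sentence pair (A, P), the answers to the two truth questions, and just encodes the order in which the lecture applies the tests; the function and argument names are invented for this sketch.

```python
def classify_relation(p_needed_for_truth_value, p_true_whenever_a_true):
    """Toy version of the lecture's tests for a sentence pair (A, P):
    - If P must be true for A to be either true or false -> presupposition.
    - Else, if P must be true whenever A is true -> entailment.
    - Otherwise P is at most an implicature (something you might infer).
    """
    if p_needed_for_truth_value:
        return "presupposition"
    if p_true_whenever_a_true:
        return "entailment"
    return "implicature"

# "Bill isn't aware that Susan is pregnant" vs. "Susan is pregnant":
# Susan must be pregnant for the first sentence to be true OR false.
print(classify_relation(True, True))    # presupposition
# "The present King of France is bald" vs. "he has no hair":
# required only when the sentence is true; lost under negation.
print(classify_relation(False, True))   # entailment
# "Mary ate some of the cookies" vs. "she didn't eat all of them":
print(classify_relation(False, False))  # implicature
```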
MIT_2201_Introduction_to_Nuclear_Engineering_and_Ionizing_Radiation_Fall_2016
14_Photon_Interactions_with_Matter_I_Interaction_Methods_and_Gamma_Spectral_Identification.txt

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: And to try something out a little real, I took a detector that you all have, as well: my cell phone. And this morning I went down with EHS to one of the very radioactive cobalt-60 sources, a 10 millicurie source. If you note, the source that we were playing around with here was one microcurie, so this is a 10,000 times stronger source. It was actually able to show the difference between a background count of my phone. You shouldn't see much going on, except for that one malfunctioning pixel, because not much is going on. And when I put the phone over the source itself, things look a little different. You guys see all that digital noise, or snow, in the video? Every one of those white flashes that you see is a gamma interaction with the semiconductor in the cell phone camera-- with one or more pixels in your CCD, or charge-coupled device, or your CMOS detector, whichever one it happens to be. So I thought this was pretty cool. You can actually use your cell phone as a radiation detector. We're going to understand why, and what sort of radiation it could detect by virtue of its size and its composition, today. Anyone ever try this before? You have? AUDIENCE: Yeah. MICHAEL SHORT: Cool. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Probably more intense than this if you were making neutrons, right? AUDIENCE: Yeah. MICHAEL SHORT: Awesome. OK, cool. So let's just first figure out, well, where is this radiation coming from?
This is the link between the first part of the course and what we're going to be doing over the next month. As we've seen from the decay diagrams-- and I think I've harped on potassium-40 as an example for a reason-- it can undergo electron capture or positron release. And if it undergoes electron capture by this likely route it gives off a 1.461 MeV gamma ray as the only possible transition here. It also undergoes beta decay, which you don't want to forget about if you're calculating the activity of potassium-40. But today we're going to be focusing on what does this gamma ray actually do when it encounters matter? Or what are the possible things that can happen? I'm going to introduce them conceptually today, and we're going to go through the math of the cross sections and the energetics more tomorrow. So I'm doing a context first, theory second kind of approach. There's three main things that gamma rays will do in matter depending on their energy and the actual matter itself. There is one called the photoelectric effect, where a gamma ray simply ejects an electron from the atom. So let's say we've got our potassium-40 atom, have a bunch of electron shells-- I'm not going to draw all the electrons, but I'll draw a few inner and outer ones here. One of the things that the gamma ray can do is just eject that electron, have it come firing out. And the energy balance for that isn't that hard, because this gamma has some energy E-gamma. This electron had some binding energy, E-binding. And the kinetic energy, let's call it T, of the electron is simply the gamma ray in minus the binding energy back out. It's just however much energy it takes to remove that electron, that's what it takes. And so you end up with, if we go back to our banana spectrum, what we call a photo peak, or a photoelectric emission peak, right here. If you trace down this line, it's awfully close to 1,461 keV, or-- what was it? 1.461 MeV.
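The energy balance above, T = E-gamma minus E-binding, is simple enough to sketch in a few lines. The 3 eV binding energy used below is only an illustrative order-of-magnitude value for a loosely bound outer electron, and the function name is invented for this sketch.

```python
# Photoelectric energy balance from the lecture: T = E_gamma - E_binding.
def photoelectron_kinetic_energy_keV(e_gamma_keV, e_binding_keV):
    if e_gamma_keV < e_binding_keV:
        return None  # the photon can't eject this electron at all
    return e_gamma_keV - e_binding_keV

# K-40 gamma on a loosely bound electron (binding ~ a few eV; 0.003 keV
# here is an illustrative value): the photopeak sits just below 1461 keV.
t = photoelectron_kinetic_energy_keV(1461.0, 0.003)
print(t)  # close to 1460.997 keV
```

This is why the photopeak lands extremely close to, but not exactly at, the gamma energy: the missing few eV is far below the width of one detector channel.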
It won't be exactly at that energy, because it does take a little bit of energy to remove that electron. Anyone have a guess on what order of magnitude that might be? Yeah. keV all the way down to eV. So this photo peak will typically be extremely close, but not exactly equal, to the energy of the gamma ray coming out. For most detectors that don't have that good resolution, you can pretty much assume they'll be in the same channel, or the same energy bin, because your detector will have some sort of resolution. It may have 1,024 or 2,048 channels that span the full energy range. And you might not be able to tell the difference between 1,460 keV-- or 1.46 MeV-- and that minus a few eV. Potassium, in particular, has quite a small work function. And we'll get into why that is in a second. The next thing you can do is what's called Compton scattering, in the general case, which means that there is an electron here. A gamma ray comes in with E-gamma. And then it bounces off with some energy, E-prime-gamma. And then the electron goes off with some other kinetic energy. Then the last one is what's called pair production. Just like in the Q equation, if you have anything related to positrons you have to first create them. So pair production doesn't happen below about 1.022 MeV. And it happens with increasing probability as the energy of the photon goes up, kind of like in radioactive decay. There's a lot of parallels here: you can make an electron-positron pair at 1.022 MeV. It's just not very likely. And what we're going to find out tomorrow is why the most likely photon effect is shown in these different regions. Anyone have any idea? Why do you think the photoelectric effect would be most likely at low energies and high Z? You just have to give an intuitive guess. Yeah, Luke? AUDIENCE: Because the binding energy isn't very large for the outer electron shells. MICHAEL SHORT: That's right. So that explains the low energy idea. So it doesn't take very much.
In fact, does anyone know what the minimum energy you need to make the photoelectric effect happen is? Well, what's a typical order of magnitude for the binding energy of the lowest, or the outermost, electron shell, or the lowest bound electron? AUDIENCE: Work function? MICHAEL SHORT: Yeah. That's called the work function. Anyone know an order of magnitude, guess what it is? It's in the single eV range. In some cases, it can even be slightly lower. And that, we're talking about visible light. So green light, even yellow light, can eject electrons via the photoelectric effect. And then the reason that goes more likely with higher and higher Z, we'll get into that when we look at the different cross sections of interaction. Pair production is much more likely at higher energies, because at higher energy you're more likely to create a positron. And in addition, pair production happens when a photon interacts with either the electron cloud, or the nucleus there. And that gets more and more likely, let's say, the denser the electron cloud is or the higher the charge there is on the nucleus. So first, the simplest one, the photoelectric effect. This is actually what Einstein won the Nobel Prize for. Not E equals mc squared, which has been the bane of our existence for the last month. That's not what he got the Nobel Prize for. It was demonstration of the photoelectric effect, where if you start firing photons of energy h times nu-- Planck's constant times frequency; on the next page I'll give you a quick photon math primer, in case you don't know what those quantities are-- there will be no photoelectric emission until you hit that work function. Yeah, like Julio was saying, that lowest bound electron energy. And then, emission will simply go up. And so this was demonstrated by applying a voltage to two different plates, two different metal plates, and then sending in light via this window and seeing when the current actually became non-zero.
So the way you detect photoelectric emission is, if you've got electrons boiling off of one surface to the other, that's the movement of charge, and that's a current. And so you can measure a current with an ammeter. That's-- it actually is that simple. But a very elegant experiment, back from the 1910s or 1920s. And as a quick primer on photon quantities, so you know what all of these different symbols mean, the photon energy we give as Planck's constant times its frequency. And I gave you Planck's constant right here, for reference. I do recommend that you guys try checking out all the units to make sure that they work out. Because if you ever forget, is it h times nu, or is it hc over lambda, you can always check the units of your expression to make sure they come out to an energy. Which in SI units is what? AUDIENCE: Joules? MICHAEL SHORT: Joules. And then how about in these sorts of things, the most reduced SI units? AUDIENCE: eV? MICHAEL SHORT: Is what? AUDIENCE: eV? MICHAEL SHORT: eV is another unit of energy similar to the joule, like 1.6 times 10 to the minus 19 joules. But what about in meters, kg, seconds, other SI units? [INTERPOSING VOICES] MICHAEL SHORT: Yep. Kilogram meters squared per second squared. Indeed. Yeah. So just make sure you remember that, because if you're just looking for joules and you don't remember what a joule is, it's going to make unit balance kind of hard. And also, we can describe the momentum, or p, of the photon as Planck's constant over lambda, its wavelength. This is going to get real important when I ask you guys to do a derivation, much like the Q equation one that we were doing before. But instead of me just doing it at the board and you copying it down, I want you guys to try working through it. And it's going to be another energy and momentum conservation thing, just like before. And this way you know what the energy is and you'll know what the momentum is.
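The photon formulas in this primer, E = h nu = hc/lambda and p = h/lambda, can be checked numerically. A minimal sketch, with constants rounded to four figures; the green-light example echoes the earlier claim that visible light is enough to drive the photoelectric effect in low-work-function metals.

```python
# Photon quantities from the slide: E = h*nu = h*c/lambda, p = h/lambda.
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_eV(wavelength_m):
    return H * C / wavelength_m / EV

def photon_momentum(wavelength_m):
    return H / wavelength_m  # kg*m/s -- check: J*s / m = kg*m/s

# Green light at 532 nm carries about 2.3 eV -- enough, as mentioned in
# class, to beat the ~2 eV work functions of the alkali metals.
print(round(photon_energy_eV(532e-9), 2))  # 2.33
```

Running the units through, J·s times m/s divided by m gives J, confirming the "kilogram meters squared per second squared" check from the lecture.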
So now onto this work function. What is it, actually? There's a great paper by Michaelson-- I did not look up whether this is the Michaelson of Michaelson interferometry, but I wouldn't be surprised. I'm going to check into that. But I did dig out this paper that actually shows the different patterns in the work functions of different elements. So what do you guys notice in terms of patterns here? First of all, which elements are all the way to the left, or have the lowest work function? AUDIENCE: Group one. MICHAEL SHORT: The what? AUDIENCE: The group one metals? MICHAEL SHORT: The group one metals, like sodium, lithium, potassium. Why do you think that is? AUDIENCE: I mean, they have-- I don't know. MICHAEL SHORT: They've got one electron in their outermost shell. So it looks like my potassium picture is not quite accurate. I'm going to draw another shell, and put one lone electron in that for accuracy. And so that electron is extremely unbound. That's the same reason that these elements are so chemically reactive. They want to ditch that electron to have a filled outer shell. So you may also expect the work function of noble gases to be extremely high. I don't know if any are plotted here, but you do see the next row over, like barium, strontium, calcium, magnesium, has a slightly higher work function. And as you move this way through the periodic table, until you hit the transition metal craziness, it follows a pretty regular pattern. And so you can have a good guess of what the work function of something will be depending on its Z, and depending on which-- what is it? Which column it's in in the periodic table. Now, can anyone tell me, why do you think that work functions tend to increase with decreasing Z? Yeah. AUDIENCE: For smaller Z, the outermost electron is closer to the nucleus, so it's more tightly bound. MICHAEL SHORT: Indeed. Yep, exactly. For the smaller Z, that first or second shell is a hell of a lot closer to the nucleus.
Even though it has a lower total charge in the nucleus, it's much more tightly bound, being much closer. So, like, the outermost electron in caesium is quite far away; it does not feel as much Coulomb attraction. Yeah, good point. So now onto Compton scattering. I'd say it's the most difficult, conceptually, to understand the energetics. But the kinematics, or what actually physically happens, should look strikingly similar to what we've spent the last month on. Instead of two particles colliding, it's a photon colliding with an electron. Does anyone remember what we read in that first day of class, with the Chadwick paper? When he said, hey, maybe this quantum of energy is done in a process analogous to Compton scattering. Well, this is Compton scattering. His analogous process was maybe an electron hits a proton and something happens, which is not actually what happens. And in this case, you have a photon with energy h nu, and momentum h nu over c, striking an electron with rest mass m of electron c squared, or 0.511 MeV. And afterwards, the photon leaves at some angle theta, and the electron leaves at some angle phi. So I'm going to show you guys some of the Compton scattering energetics relations, like what is the wavelength shift. Which means that if this photon comes in with a certain wavelength, lambda, and it gives some of its energy to the electron, it comes out at a different wavelength. Is it going to be lower or higher wavelength, do you think? [INTERPOSING VOICES] MICHAEL SHORT: I heard a bit of both. So who says lower? AUDIENCE: Me. It can be wavelength. MICHAEL SHORT: So lower wavelength. Let's go back to the photon formula. Would a lower wavelength result in a lower or a higher photon energy? AUDIENCE: Higher. MICHAEL SHORT: OK. So in a Compton scatter, you start off with an electron kind of at rest. They're definitely not actually at rest. But compared to the energy of the photon, they're at rest enough.
And then you give some of that energy to the electron. That energy has got to go down. And because these two quantities here are constants, the wavelength has got to increase. And hopefully this makes intuitive sense. The photon does what we call a redshift. It shifts closer to the red end of the visible spectrum than the blue end. And as you guys know, the high energy light in the visible spectrum sits towards the ultraviolet. That's what tans you, or gives you skin cancer. Red light, or infrared light, doesn't do much of anything at all. And so this is that on the extreme scale, where when we say redshift, we don't necessarily mean the photon is visible. But we do mean that it's shifting to a lower energy, or a higher wavelength. And so this wavelength shift is always going to be, well, is it going to be positive or negative? AUDIENCE: Positive, right? MICHAEL SHORT: Is what? AUDIENCE: Positive? MICHAEL SHORT: So you say the wavelength shift is going to be positive, which would mean an increase in wavelength? There you go. Yep. Because it's got to be-- it's got to lose energy. So I'm not going to go through the derivation of these, because I want you guys to go through the derivation. But we're going to do it in the exact same way, and I'll help kind of kick you off. Where, in this case, what are the three quantities we can conserve in all of physics, everywhere? AUDIENCE: Mass, energy, and momentum? MICHAEL SHORT: Mass, energy, and momentum. The trick here is, what is the mass of the photon? Massless. So we've got energy and momentum. And we've got, let's say, some wavelength shift to determine, which is some change in energy. And we've got two angles to deal with. That's three unknowns. We need three equations. So we know that our initial energy coming in is going to be h nu plus approximately 0 becomes h nu prime, plus the kinetic energy of the electron. So that's our energy conservation relation. And then what do we do about the momentum? What did we do last time?
AUDIENCE: Split it into x and y. MICHAEL SHORT: Exactly. Split it up into x and y momentum. So the x momentum of the photon-- so I'll just label this as energy-- the x momentum of the photon is h nu over c. And there was no x momentum of the electron to begin with. So then we're going to say this has outgoing momentum h nu prime over c times cosine theta plus whatever the electron momentum is, let's say m electron v, or root 2 m electron T electron, cosine phi. And then how about the y momentum? What's the y momentum of the system at the beginning? AUDIENCE: Zero? MICHAEL SHORT: Yep. Nothing for the photon, nothing for the electron. And at the end we've got h nu prime over c sine theta minus, because it's in the negative y direction, momentum of the electron sine phi. I'm going to stop my part of the derivation there, because I don't want to steal away your whole homework problem. But you're going to start it out exactly in the same way as we were doing kinematics of two particle collisions. Because what is a particle, but a wave? They're all the same thing. It's modern physics. And then-- here's an interesting bit, here-- this maximum wavelength shift, if you want to figure out what is the-- well, look. Let's say we call it the Compton wavelength. So if you were to decide what is the maximum wavelength shift, where would that be? At what angle? Did you have a question, or did you say what was the question? AUDIENCE: Yeah. MICHAEL SHORT: Oh, OK. Is what? AUDIENCE: Pi over 2. MICHAEL SHORT: Is that angle pi over 2? Because at that point cosine of pi over 2 equals zero. Yep. And so then you get this interesting result. No matter what the incoming energy of the photon is, you get this 0.238 MeV shift. And that's actually going to help explain, to jump back to our banana spectrum, what the distance is between our photo peak-- which is our photoelectric peak, which is pretty close to the energy of the photon-- and this part right here, which we call the Compton edge.
Which would mean the maximum scattered energy of that photon, in this case. Or no, I'm sorry. That would be the maximum energy imparted to the electron. Almost misspoke there. And no matter what this energy of the photon is, that distance right there, that's the Compton wavelength. Interesting quirk of physics, huh? Because in the end, all that matters is if the angle's all the same, everything else cancels out and you just get a bunch of constants. Let me jump back to there. So now we'll take another look at our detector spectrum and start identifying some of these peaks. If you notice, this 0.238 MeV looks just like what it does on the graph. This is the kind of cool thing-- like, you guys threw some bananas in a detector last week. We got a spectrum yesterday morning, and how well-timed it was. We're actually going to start explaining it today. There's a whole lot more going on in this banana spectrum. Part of what we'll be explaining tomorrow is, why do you get this kind of bowl shape, this Compton bowl? And it turns out that there's a different probability of scattering at every different angle, or what we call a differential cross-section: a d sigma over d omega. Because the probability of that photon scattering off in any direction is not equal. But if you know what direction the photon scatters off in, you know what energy it has, or you know what sort of energy it gives to the electron, because that's a one to one relation. And that's why you end up with this very smooth, almost cosine-ish looking kind of curve. You guys will actually get to derive that yourselves. So then onto the wavelength and energy shift. By looking at the electron recoil energy and this wavelength shift, you can actually get some sort of an energy shift. You can arrive at what is the recoil energy of that electron. And so here's one of the topics that's usually hard for folks to understand, but I want to stress it right now.
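The scattering relations discussed here can be put into numbers. This is only a sketch: the E-prime formula is the standard Compton result (the one the homework derivation leads to), and the angular probability is the standard Klein-Nishina expression, which the lecture has not derived yet; function names and the choice of MeV units are my own.

```python
import math

M_E = 0.511      # electron rest energy, MeV
R_E = 2.818e-15  # classical electron radius, m

def scattered_photon_energy(e_gamma, theta):
    """Compton result E' = E / (1 + (E / m_e c^2)(1 - cos theta)),
    which falls out of the energy/momentum equations set up on the board."""
    return e_gamma / (1.0 + (e_gamma / M_E) * (1.0 - math.cos(theta)))

def compton_edge(e_gamma):
    """Maximum energy handed to the electron: full backscatter, theta = pi."""
    return e_gamma - scattered_photon_energy(e_gamma, math.pi)

def klein_nishina(e_gamma, theta):
    """Klein-Nishina differential cross-section dsigma/dOmega (m^2/sr):
    the angle-by-angle scattering probability behind the Compton 'bowl'."""
    r = scattered_photon_energy(e_gamma, theta) / e_gamma  # ratio E'/E
    return 0.5 * R_E**2 * r**2 * (r + 1.0 / r - math.sin(theta)**2)

# For the K-40 photopeak at 1.461 MeV:
print(round(compton_edge(1.461), 3))  # Compton edge near 1.244 MeV
# Forward scattering dominates at MeV energies, so the continuum is not flat:
print(klein_nishina(1.461, 0.2) > klein_nishina(1.461, math.pi / 2))  # True
```

Note that the gap between the photopeak and the Compton edge equals the backscattered photon energy, which stays roughly constant for high-energy gammas, matching the nearly fixed spacing pointed out on the banana spectrum.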
When you send gamma rays into a detector-- let's draw an imaginary detector. In fact, let's draw the real one that we used in our banana counting experiment. So we had these copper walls, we had our bag of bananas, and we had our high purity germanium detector. Let's say we had a good shield on top, and then a good shield on the bottom. That right there is our active detector, and this banana is sending off gamma rays into that detector. The way a detector works is not by counting the energy of the gamma ray directly. It can't actually do that. In this germanium detector you've got a huge voltage applied across it. I think what Mike Ames actually said was the one we used was about 2000 volts. What happens here is, let's say-- I'll actually need three colors for this-- let's say a gamma ray comes in-- that's our gamma ray-- and interacts in the detector. That gamma ray will redshift-- let me get a redder color, because that'll be like physically accurate-- that gamma ray is going to hit an electron, go off at a different angle, and redshift, or get longer in wavelength. Meanwhile, that electron that it hit actually goes flying off, and in the other direction we get what we're going to call a hole-- a defect missing one electron of some sort in this semiconductor. Normally if there was no voltage applied here these two would just find each other and annihilate, and you would have nothing. So what's to count? But by applying a gigantic voltage-- let's say this voltage was really plus and this voltage was really minus-- this electron keeps on moving and this hole keeps on moving to the electrode. Instead of recombining in the detector, they're actually then sent through where they're counted in some sort of ammeter or some sort of energy pulse counter. And what we're actually measuring is the recoil spectrum of the electrons that the photons make. You're not directly measuring photon energy, you're measuring the electron effects.
Part of that is because chances are photons just go through everything. This is why I wasn't so worried this morning standing with my face over a 10 millicurie cobalt source. Because while I was getting billions of gammas per second flying through my brain, most of those billions just flew out the other side. It's literally in one ear, out the other. And so most of these gammas, if they interact at all, will escape again. The electrons, however, because they're charged and very low mass, have a very short range in the detector. So chances are the electrons that are made are going to stay there, unless you happen to make one right at the surface, in those last few atoms, and it escapes. That almost never happens. So forget that. This was, last year, a huge source of confusion to say, why are we seeing some of the other peaks that I'll be explaining in five or 10 minutes? Or, why aren't we seeing a 0.238 MeV peak? Because what you're seeing here is a photon losing its energy minus 0.238 MeV in its maximum energy transfer, which is given to that electron. Then what actually happens next is this electron slams into a bunch of other ones, and that slams into a bunch of other ones until all the energy is lost in the detector. And all of those electrons get sucked into the electrode by this very high voltage. And then, the way you count the energy of an interaction is by how many electrons you get in a certain little amount of time. And so that's why, for example, for the photo peak, that's the kind of simplest reaction. A gamma goes in, a really high-energy electron comes out, it smashes into tons of other electrons imparting all of its kinetic energy in the detector, which is all summed up in a nanosecond, or however long we collect for. And then we say that we saw an energy blip containing about 1,460 keV of energy. It all came from that first gamma. And then it was all given to that first photo peak electron, which then slammed into a whole bunch of others.
And they slammed into a bunch of others. And there's this what's called this ionization cascade, where a whole bunch of electrons make a whole bunch more until all of them have too little energy to ionize anything else. And then they're just collected. So that's what we mean by a pulse in a detector. It's not exactly an intuitive concept, because it's not like the gamma goes in and we just count its energy. There's more things that physically happen in here. But it's important for you guys to know, especially when we start to look at pair production. You guys remember some of this stuff from the positron annihilation spectroscopy? Well, the way we actually know that positron annihilation spectroscopy, or PAS, works is by measuring photons, or their eventual electron recoil, that can only be possible from this process. So as a quick review, let's say you had a positron source, like sodium-22, which naturally undergoes radioactive decay, and forms a positron along with a gamma ray from a very short isomeric transition, or IT. Then that positron bounces around in the material until it reaches an electron. And once it hits that electron, because the positron-- let's see, the rest mass of the positron is the same as the rest mass of the electron, which is 0.511 MeV-- once the two of these combine, they annihilate, producing two 511 KeV or 0.511 MeV photons. And it's those photons at this exact energy all the time that really give it away. Because there's not many other processes that produce a huge amount of exactly that photon. Now that we've talked a little bit about momentum and energy conservation, does anybody know why you get what's called a blueshift or a redshift in positron annihilation spectroscopy? I'll give you a hint. It comes down to conserving the same thing that we're doing all the time. Yeah, Kristen? AUDIENCE: I was going to say, is that something to do with wavelengths? MICHAEL SHORT: You're close.
I mean, technically you're close, if you treat electrons as waves, which you totally can. The electrons themselves do have a non-zero momentum as they're flying about in the atom or around the nucleus. And when an electron collides with a positron, if that electron already has some momentum associated with it, then the center of mass of the system is not at rest. It's moving at some small speed. So this little minus delta energy and plus delta energy accounts for the initial momentum of the electron, which means not only can you tell from the lifetime how many electron-lacking defects there are, but you can probe electron momentum by looking at the slight energy changes as positrons collide with electrons. It's a really cool and powerful technique that uses only 22.01 concepts to probe matter at its deepest level. So what's happening on the atomic scale is, let's say a photon made a positron, and the positron bounces about and thermalizes-- just slows down via collisions, other types of collisions that we'll go into soon-- and then gets trapped in a defect, which is a relatively electron-poor place. But it doesn't mean there's no electrons. In every space everywhere, there's a probability that there's an electron there. In a defect not containing an atom, that probability is lower, but not zero. And so by figuring out how long they last, and when those 511 KeV gammas are emitted, you can tell, let's say, what size defect that was. But now let's talk about what happens to these 511 KeV gammas. What evidence do we have that positron pair production actually exists? So before I reveal the labels, can anyone tell me what on this graph suggests that positrons are happening? And there's actually two things. What do you think? Yeah. AUDIENCE: There's a peak at 511 KeV. MICHAEL SHORT: That's right. That's exactly right. There's a peak at 511 KeV that, if I trace that up, I went one over. Yeah, right there. 511 KeV. Is it exactly 511 KeV? What do you guys think?
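One reason it's not exactly 511 KeV: the electron's initial momentum described above. If the electron has a longitudinal momentum component p_L when it annihilates, momentum conservation shifts one photon up and the other down by roughly p_L times c over 2. A rough sketch, with an illustrative (not measured) momentum value:

```python
C = 299792458.0                        # speed of light, m/s
KEV_PER_JOULE = 1.0 / 1.602176634e-16  # keV per joule

def doppler_shift_kev(p_l):
    """Shift of each 511 keV annihilation photon for longitudinal
    electron momentum p_l in kg*m/s (one photon shifts up, the other down)."""
    return 0.5 * p_l * C * KEV_PER_JOULE

# An electron momentum of ~1e-24 kg*m/s shifts the line by a bit under 1 keV:
# tiny compared to 511 keV, but resolvable with a good germanium detector.
print(doppler_shift_kev(1e-24))
```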
So forget the fact that it came from a positron, let's say a 511 KeV gamma came in somewhere. How would it then release electrons to be counted? It then undergoes photoelectric emission. So the actual energy of this would be 511 KeV minus the work function of the material. This is one of those tricky questions that you might not even see it on the spectrum, but I want you to physically understand what happens here. It's not like 511 KeV annihilation photons magically get counted at 511 KeV. They then have to eject an electron, somehow. And for those electrons to be counted, they have to interact in exactly the same way as all the other electrons. There's no difference. What else? Oh, yeah. Luke, you have a question? LUKE: So, from the banana. Is a gamma ray coming from a banana, and then that gamma undergoes pair production? And then the gamma from the pair production-- I guess, where are the gammas coming from? MICHAEL SHORT: That was my next question to you. So let's think about this a little bit. We'll start off with gammas being emitted in all directions from our bag of banana ashes. Now the question is, where do these 511 KeV photons come from? If the gamma ray interacts with the detector by any mechanism including pair production, what are the possible things that could happen? There's three different scenarios. Let's pick a 511 KeV color. Well first of all, it might just undergo pair production, and the annihilation will release two 511 KeV gammas. Let's see, those are our 511 KeV gammas. And because they're gammas, and they interact with almost nothing, they can get out. So you might end up-- the energy that you detect in the detector might be the energy of your gamma ray minus 2 times 511 KeV. This is what we refer to as double escape. Close the quotes like that.
So if this gamma ray right here came in at 1,460 KeV, and the double escape peak-- if it undergoes pair production in the detector and both of those 511s escape, because a lot of them do-- where would you expect there to be a double escape peak on this spectrum? AUDIENCE: That minus 1.022. MICHAEL SHORT: Yeah. Let's say, that minus-- so we're at 1,460 minus 1,022. That comes out to 438 KeV. 438 KeV right here, not much going on, is there? You're not going to see it in every detector. Especially the larger the detector is, the less likely both of those photons are going to escape. So this is where the concept of detector size can tell you whether or not you're going to see every peak that's physically happening. So in this case, the germanium detector is pretty big, it's pretty expensive. So chances are a lot of those 511 KeVs, even though they're produced in pairs, one of them didn't quite get out. Yeah, Luke? LUKE: So, the gamma from the banana goes into the detector. And then it produces pairs, and then those pairs are annihilated, and that produces the radiation. MICHAEL SHORT: That's right. That's right, why don't we write that down in steps for, let's call this pair production in the detector. So step one would be gamma emission. Step two would be electron-positron creation. Step three would be annihilation. Annihilation in the detector. And then step four would be somewhere between zero and two photons escape. So we have, actually, three scenarios that could happen here for pair production inside the detector. One of them we just described. Where pair production happens, you get annihilation in a very short time frame, like tens of picoseconds or hundreds of picoseconds. Both the gammas get out. That would have produced a 438 KeV peak, which might be there. But I can't tell if that's a peak or if that's noise. So we don't really know. And chances are, the reason that didn't happen is because the detector was big.
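The arithmetic for where all these pair-production features land can be collected in one place. A small sketch (the function name and dictionary layout are mine, not from the lecture), using the 1,460 keV potassium-40 gamma:

```python
# Peak positions from pair production, in keV.
MEC2 = 511.0  # electron (and positron) rest mass energy

def pair_production_peaks(e_gamma_kev):
    """Expected peak positions for a gamma of energy e_gamma_kev."""
    return {
        "photo_peak": e_gamma_kev,                  # all the energy stays in
        "single_escape": e_gamma_kev - MEC2,        # one 511 keV photon escapes
        "double_escape": e_gamma_kev - 2.0 * MEC2,  # both 511 keV photons escape
        "annihilation": MEC2,                       # pair production outside the detector
    }

peaks = pair_production_peaks(1460.0)
print(peaks["single_escape"])  # 949.0
print(peaks["double_escape"])  # 438.0
```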
So our next possibility. What if one of those photons gets out and one of them doesn't? It then interacts via Compton scattering, or photoelectric effect, or any of the possible mechanisms. Then you'll end up with an energy counted equal to the energy of the gamma minus only one of those things getting out. And we call that single escape. At what energy would that single escape peak be? Oh. AUDIENCE: 511? MICHAEL SHORT: It would be-- that peak would be at the energy of the gamma, 1,460, minus 511 KeV. So roughly 950 KeV or so. There we go, there it is. That's the second bit of evidence that there is pair production going on. Not only do you have a peak at 511 KeV, which we have not explained yet, but you also have the single escape peak, which is the energy of your gamma minus one escape from a 511 KeV photon. Yeah? LUKE: When you say escape, do you mean escapes through the detector, or what is that? MICHAEL SHORT: Yes. I mean-- when I say escape, I mean it escapes the detector and is no longer counted. It might go and deposit its energy somewhere else, but your detector doesn't know it. So what's the third scenario that could happen? What if zero of these photons escape? What energy will you count? AUDIENCE: [INAUDIBLE]. MICHAEL SHORT: Exactly. So all that's going to happen is it's going to look like the photoelectric effect. In reality, you'll have slightly, slightly lower energy, because you have three work functions to subtract off from the three photons doing stuff. But I would count that as correct. It's going to look just like the photoelectric effect. First, you get that energy minus 1.022 MeV. And then both of those 511 KeV photons interact in the detector by, probably, photoelectric emission. And you just get another count at this channel, right here. Now the last question I want to ask you guys, where did this peak come from? Under what circumstance would the detector just count 511 KeV? I'll give you a hint. There's a reason I drew gammas going off in every direction.
AUDIENCE: So they don't hit the detector. MICHAEL SHORT: Yeah. So most of the gammas don't hit the detector. But let's say you had a gamma that went into anything else, like the copper shielding, and it underwent pair production. And one of those gammas made it through the detector. I'm sorry, one of those photons made it into the detector. That's actually where these things are coming from. Because most of those gammas are not heading towards the detector. This is a very small solid angle. But surrounding the rest of the detector is this really dense copper, and these high energy gammas in this relatively high Z material undergo a lot of pair production, so it's firing out 511 KeV photons in all directions. And some of those enter the detector when nothing else enters the detector. And that's why you get this 511 KeV peak right here. So we haven't explained every peak on this graph. Does anybody have any ideas where-- what's that about? Or that? Or those? AUDIENCE: Cosmic radiation? MICHAEL SHORT: Yeah. Could be cosmic rays. That's probably what's contributing to a lot of the noise, here, as well as thermal noise in the detector. But what else haven't we accounted for? Now, to bring this a little more into reality, we ran an experiment where we burned bananas. We didn't put a potassium-40 certified source in. We put bananas in. What else could be going on? AUDIENCE: Other isotopes? MICHAEL SHORT: Other isotopes. That's right. But you can identify them quite easily, one, by checking to see where you expect the photo peak. So just from the decay diagram, you'll expect to see some photo peaks, or photoelectric effect interactions, at these transition levels. Luckily, you know they're not due to potassium, because potassium has only got one of them. In addition, you should see some very similar features.
So if you have a photo peak here, you would expect to see another Compton edge 0.238 MeV away-- and it's kind of hard to tell if it's going on, because that's a rather weak photo peak-- and you would expect, then, for the high energy gamma rays to see another single escape peak-- maybe right there-- and add to the 511 KeV peak, because those are all the same. So when you take the spectrum of a real thing, and you have to deconvolute it, or take it apart in terms of its constituent interactions, it's important to know what all these possible interactions are so that you can take them apart and say, start off with a photo peak, which should tell you what elements are there. And then you can subtract off the expected amount of Compton scattering, the expected amount of single escape peak, and then see what's left over, what other isotopes may there be that you haven't accounted for yet. The last thing I want us to try, as a mental exercise, is to draw two spectra. Let's say, this will be energy versus intensity. And for this I want you to imagine that, at first, your detector is very small. And then I want you to imagine that your detector is very large. And I'm going to keep this visible so you can have this as a mental model. If we had just one isotope, potassium-40, what do you think the spectra would look like for an extremely small detector and for an extremely large detector? So where do we start? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: That's right. And will there be any difference between the two? Probably not. So small detector, maybe a large detector is going to have a larger intensity. But for the same type of detector, you're going to have pretty much the same thing. What's next? AUDIENCE: Compton edge? MICHAEL SHORT: Compton edge. So there's going to be some energy that Compton scattering is going to start out, and then it's going to proceed up, thusly. Is there going to be any real effect of the detector size? 
Probably not, because as soon as you release that Compton electron, that electron slams into all the other ones in nanometers or microns of material, and all the energy is collected. What's the real difference going to be? AUDIENCE: The 511 peak? MICHAEL SHORT: That's right. So the 511 peak, and the other associated ones. So for a really, really small detector we have the possibility for a double escape peak, a single escape peak, and just more photo peak. What's the most likely scenario? AUDIENCE: Double escape? MICHAEL SHORT: Double escape. So if we go down here, let's say if this difference is 1.022 MeV, you would expect a larger double escape peak. And what would you expect your single escape peak to be? AUDIENCE: Smaller? MICHAEL SHORT: Significantly smaller. So let's say this difference right here is 511 KeV. How about for a large detector? AUDIENCE: Opposite. MICHAEL SHORT: Quite the opposite. You might expect a tiny or even nonexistent double escape peak, maybe a larger single escape peak. But most of the time you're just going to add on to your photo peak, depending on the resolution of the detector. Because in this case, for a small detector, if you have an interaction inside that volume chances are most of those 511s get out. For a large detector, chances are most of them stay in and undergo their own Compton scattering, or photo peak reactions. So let's say that all these detectors will also have a 511 KeV peak. We'll just mark that off. Let's just give them the same height. What else are we missing, if this is an ideal scenario with no noise? Well, what can those 511 KeV photons do? Can they make pair production of their own? No. They're not high enough energy. In fact, they're half the required energy. Can they undergo photoelectric effect? Sure. That's probably where we're getting those 511s. Can they undergo Compton scattering? Why not? There's no minimum energy to scatter.
So what you're going to end up with, then, is another little Compton edge at a distance of 238 KeV away from the 511 KeV peak. Now in reality, you probably won't see it because you're going to have other X-rays, you'll have bremsstrahlung, which we'll talk about tomorrow, which is that braking radiation. You'll have background radiation. And it might be hard to see, but technically it should be there, because any photon of any energy is going to have that same sort of Compton edge shape. The shape changes just a little bit, depending on the energy of the photon, but you're always going to have an edge. You're always going to have some sort of a bowl. Just how big the edge is compared to the bowl, well, we'll get to that tomorrow. So it's a little after five of. I think this is a good place to stop, because it's the full conceptual explanation of the ways that photons can interact with matter. So I want to ask you guys if you have any questions based on what we've done today? Yep? AUDIENCE: So 511 KeV's the rest mass of the electron? MICHAEL SHORT: Yep. AUDIENCE: So that's just the energy you assume it has when you have pair production? MICHAEL SHORT: That's right. So the electron and the positron annihilate, turning their mass into energy. Since the rest mass of each of those is 511 KeV, the photons come off at 511 KeV. AUDIENCE: OK. Got it. MICHAEL SHORT: Yep? AUDIENCE: So when you say the electron and the positron annihilate, is the positron just a hole? MICHAEL SHORT: Ah, good question. The positron is not a hole. So like here, we were talking about an electron hole pair. A hole would be, let's say, an atom with a missing electron. A positron is a particle itself of antimatter that has the same mass, but the opposite charge, as the electron. And so every particle's got its antimatter component, like there are antiprotons and antineutrons, that if they find their regular matter selves, do annihilate. Yeah?
AUDIENCE: If the detector doesn't pick up gamma rays directly, how do you measure-- like, why would a small detector see double escape? MICHAEL SHORT: So a small detector would see double escape, because at first-- let's say a gamma ray interacts and undergoes pair production. And so it's going to, let's say, create an electron-positron pair. And it's going to give them a whole lot of extra energy. So they're going to knock around and ionize things. And that's going to count up to the energy of the gamma minus 1.022 MeV. Then, when they annihilate, if it's a small detector chances are those gammas just get out. We're going to be going over why soon, when we get into mass attenuation coefficients, or cross sections or interaction probabilities. But as the energy of a gamma goes up, its interaction probability goes way down. And this is a fairly high energy photon, compared to, like, the lower-energy KeV X-rays that you tend to see. So chances are, these photons get made from annihilation, but they don't stay in the detector. Then the bigger the detector is, the more mass there is in the way, and the more likely they get counted. But all of this happens, well, at the speed of light. At least the photon part. And so it's so fast that the detector picks it up as that sum of all the different processes of energy in one time interval. Like I said, this is the harder stuff, because it's not direct. It's a multi-step process with different possibilities. But it's important to know where the single and double escape come from, where the 511s come from, which is outside the detector. Yes, you have a question? AUDIENCE: Yes. Would you say the detector can-- the detector itself can measure the energy of a photon? Is the measurement of 511 KeV, is that due to the fact that it will hit an electron and cause the-- what is it called? MICHAEL SHORT: Like an ionization cascade? Exactly. Yeah so if a 511 KeV photon enters the detector, the detector does not know until an electron interaction happens.
So most of the photons that enter this detector leave the detector. That's why if you actually look at the banana stuff, which I'll pull up right now, at the efficiency, check out those values, there. Efficiency is in the realm of 10 to the minus 4 or 10 to the minus 3, which is to say that out of every 1,000 or 10,000 photons that enter the detector, one of them undergoes an electron interaction and the other 999 or 9,999 just go screaming on through, and the detector does not know that they're there. The way that Mike Ames got these efficiencies is by putting a source of known activity in, calculating how many gammas the detector should have picked up, and taking the number that it actually picked up divided by that. And so that way, you know how many gammas really went in, and how many gammas it saw. And that's how you get the detector efficiency. And you will have to account for this when you do this on the homework problem. So the only quantities you're going to need are how many gammas you get, what's the efficiency, and then back that out. So you'll have to calculate the activity of the bananas, and then figure out how much a banana weighs, and then you should be able to calculate the radioactivity of one banana in curies, or becquerels, or microcuries. It's all good. So good question. So it's three of, so I'm going to let you guys go. But I'll see you again tomorrow, and we'll review a little bit of this stuff. And we'll get into more of the math of the cross sections and why Compton scattering and pair production take up the energies that they do.
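The activity calculation described here-- take the counts, undo the efficiency, then back out decays-- can be sketched as follows. All of the numbers below are placeholders, not the actual lab values, and the ~10.7% gamma branching ratio of potassium-40 is pulled in from its decay diagram (not every K-40 decay emits the 1,460 keV gamma):

```python
K40_GAMMA_BRANCH = 0.107  # approx. fraction of K-40 decays emitting the 1,460 keV gamma

def activity_bq(net_counts, live_time_s, efficiency):
    """Source activity in becquerels from net photo-peak counts."""
    gammas_emitted = net_counts / efficiency    # restore the photons the detector missed
    decays = gammas_emitted / K40_GAMMA_BRANCH  # not every decay makes this gamma
    return decays / live_time_s                 # decays per second = Bq

# Placeholder example: 1,000 net counts in an hour at 1e-3 efficiency.
a = activity_bq(1000, 3600, 1e-3)
print(a, "Bq =", a / 3.7e10, "Ci")  # ~2,600 Bq
```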
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: So as a quick review of all the different biological effects, we've pretty much taken it up to here. We've explained the physical and chemical stages of what happens when radiation interacts with mostly bags of water with some solutes in them, better known as organisms at dynamic equilibrium. Everything from the sort of femtosecond level, ionization of water almost certainly, because that's most of what biological things are, to the formation of many, many, many, many, many different radiolysis byproducts eventually that end up as just a few that we care about, the longer-lived radiolytic byproducts that will then diffuse away from the original damage cascades and go on to eat something else, likely DNA or something that you don't want to get oxidized or chemically changed. We talked a little bit about radiolysis in reactors and how you can actually measure it directly, which was only done really a few years ago, which is pretty cool. Just to remind you of this experiment, there's a tiny high-pressure cell of high-pressure, high-temperature water. There is a foil sample with a very thin region and protons firing through it, so that they both irradiate the sample and induce radiolysis in the water at the same time. And this way, you can test the effect of radiolysis in the water here versus just plain, old, high-pressure, high-temperature corrosion here.
And the results are pretty striking, where you can clearly see the boundary where the proton beam was, as well as the increased thickness of the oxide and corrosion layer formed when radiolysis is turned on, so to speak. We went through DNA damage, and we ended with pseudoscience. So I want to bring up a couple-- no, we don't have time for that. But we spent the last 15 minutes of class railing against pseudoscience and making sure that you check your facts, and we pointed out a number of things wrong with some of the studies. So aside from just that guy misreading everything on that entire blog, of the studies that you felt weren't very convincing, what do you remember about them? Some of those studies were totally fine, but some of them were not. AUDIENCE: The ones with particularly small sample sizes. MICHAEL SHORT: That's what I was hoping someone would say. Yeah, the case study of four women who got breast cancer in the pocket where they held their cell phones-- four, right? Or a study of 29 humans, where 11 of them got brain tumors. It's pretty easy to cherry pick small amounts of data. I did want to say that just because radio frequency photons aren't ionizing doesn't mean they can't hurt you. If you've ever-- no, no one's ever been inside a microwave. I wonder if anyone's ever felt the effects of an external microwave beam by something like this, the active denial system. One of my favorite weapons ever, because it doesn't actually permanently hurt anyone. It just heats up the outer layer of your skin. It fires these non-ionizing photons at RF frequency and effectively makes you feel like you're on fire. So if there's a whole mess of troops charging at you-- let's say at the DMZ between North and South Korea-- all you've got to do is turn on this thing, and they all think they're on fire, because their body is sending them signals that say, I'm on fire. And then you turn it off, and they're OK.
So no loss of life, no permanent damage, a lot of maybe psychological, but whatever, you can't see that. AUDIENCE: Active denial. MICHAEL SHORT: Active denial system, great name for it, isn't it? Yeah, I think non-lethal weapons are really the way of the future-- just make it unpleasant to engage in warfare, and people probably won't. But then no one has to get hurt, which is nice. But then onto the sources of data, because like Sarah said, sample size is everything, especially when you're trying to figure out, are small amounts of radiation bad for you? This simple question hasn't really been answered suitably yet, and that's because, thank god, we don't have enough people exposed to small but measurable amounts of radiation to draw meaningful conclusions from this data. I think that's a good thing-- if we were certain about whether small amounts of radiation, like one millisievert, could cause cancer, then there would have been millions or billions of people exposed, and so it's kind of a good thing that they weren't. But the sources of this data, the first source was radium dial workers, like you may have heard of, the folks that would lick the paint brushes while painting glow-in-the-dark radium watches. They ended up setting the first occupational limit for dose, because they were the first large group to be exposed to radiation in a controlled setting. Things like uranium miners, radon breathers, better known as us, but especially folks that smoke anything. Medical diagnostics, so anyone that gets a medical procedure, you can follow up with them to find out what's, let's say, the extra incidence of cancer and figure out, if you have a high-dose medical procedure, does it induce secondary cancer down the line? But like we said last time, down the line is the key here. I'd take a whole bunch of radiation now, if it was going to save my life now, and maybe make it messed up in 20 years. Because then you get 20 more years of life or however long you get.
And then from accidents, survivors of the atomic bombs, not just the folks at the epicenter, but in the whole fallout regions and nearby, as well as nearby nuclear accidents and the criticality events like the demon core that you guys analyzed on the exam. Luckily, there aren't a lot of those, either. But they were pretty severe, the ones that got exposed. And speaking of accidents, has anyone ever heard of the Kyshtym disaster? This is the third-worst nuclear accident that we know of in history, after Chernobyl and Fukushima, and worse than Three Mile Island, because Three Mile Island was an almost accident. There was some partial melting of the core. There was almost no release of radioactivity. And the definition of a nuclear accident in the public sense is release of radioactivity. There's actually two quantities that folks in PRA, or Probabilistic Risk Assessment, are most interested in. Has anyone heard of these terms, CDF and LERF? Core Damage Frequency and Large Early Release Frequency. All the fancy probability fault trees and everything go into calculating the probability that the core gets damaged. So that could be an accident in one sense. Or the probability of a radioactivity release. And that is an accident. So if no one's ever heard of this, there's a city in Russia-- I don't know why it says Russland, maybe it came from a different language-- called Kyshtym, where they had the Mayak nuclear reprocessing plant. And there was a tank full of radioactive waste that exploded. It was a chemical explosion, but the tank was full of strontium and all sorts of other radionuclides, and it blew up with the force of about 100 tons of TNT, and ended up contaminating a rather large area with this plume called the East Urals Radioactive Trace. And that area is still contaminated today, because the disaster was covered up, or rather wasn't-- nothing was said.
These towns here, they weren't actually towns back in 1957 when this happened. They were just given designations, like Chelyabinsk-40 or Chelyabinsk-65, because the largest nearby city was Chelyabinsk, and the villages nearby were just numbered. So that was just the post code for the secret nuclear city. The US had a few, Russia had something like 120. And they still have a lot of cities where entry is restricted, or it's still awfully difficult to go there. Like when you have to declare where in Russia you're going to get a visa, if you say one of these cities, there's going to be some questions. And this is where I'm going. AUDIENCE: To one of those cities. MICHAEL SHORT: Best possible logo for a conference being held in Siberia in February. Right on the end-- right on the edge in this town called Kyshtym, the nearest town to the Mayak plant. So I'll be taking my camera. I don't know if I'll be allowed to use it, but we're going to find out anyway. It's being held in a sanatorium. And does anyone know what a sanatorium is? Like, I'm honestly asking a question. I don't know what a sanatorium is or why the nuclear conference is being held there. But it should be pretty cool. So yeah, Siberia in February, right near the East Urals Radioactive Trace, should be interesting. The first group of folks that were exposed were the people painting radium watch dials. And the reason radium was so damaging is because radium is in the same column of the periodic table as calcium. It's a bone-seeking element. So which of the tissues do you think would be most damaged by ingestion of radium? AUDIENCE: The bones. MICHAEL SHORT: Bones-- what part of the bone, specifically? AUDIENCE: The marrow. MICHAEL SHORT: The marrow. The rapidly dividing part of the bones. If you remember from the-- I don't have it on this presentation, but the relative tissue factors for different tissues, the hard part of the bone is a 0.01. It's basically like a nobody cares.
Bone marrow, however, is a different story, because it's always rapidly dividing, producing red blood cells, platelets, lymphocytes. It's making your blood, the solid portion of your blood. And so it's a pretty important tissue. So you get radium-- anyone also know, what does radium tend to emit? Which kind of particles? AUDIENCE: Alphas. MICHAEL SHORT: If you have to take a guess-- yeah, alphas. It's a pretty heavy element. It emits alphas. And alphas have that radiation quality factor of 20, meaning alphas have very short range, but they're the most damaging type of radiation when ingested. So this was really bad news. There were a lot of incidents of illness and cancer from folks painting radium watch dials. And then the first data from bones after death, because there were a lot of those, established how much radium you were allowed to get exposed to. And this came out to about 0.6 milligray per week. Anyone have any idea what that would be in millisieverts per week? With a quality factor of 20 and a bone marrow factor of about 0.12? AUDIENCE: 1.73? MICHAEL SHORT: Yeah. On the order of singles of millisieverts per week. Not bad. Anyone know how much dose you typically get in a year in background? Yeah? AUDIENCE: A few. MICHAEL SHORT: A few millisieverts a year. Yeah. So this was the first occupational safety limit for radiation risk. It actually comes out to larger than 50 millisieverts per year, which is what normal occupational workers are allowed. How about you radiation workers? What's your limit? AUDIENCE: 5 rem. MICHAEL SHORT: 5 rem, which comes out to? AUDIENCE: Like 50 millisieverts-- MICHAEL SHORT: 50 millisieverts per year. OK, there you go.
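The conversion being done verbally here-- absorbed dose, times the alpha quality factor, times the marrow tissue weighting factor-- works out like this:

```python
# 0.6 mGy/week of alpha dose to bone marrow, as quoted for the radium limit.
absorbed_mgy_per_week = 0.6
quality_factor_alpha = 20     # radiation weighting factor for alphas
tissue_factor_marrow = 0.12   # tissue weighting factor for bone marrow

equivalent_msv = absorbed_mgy_per_week * quality_factor_alpha  # mSv/week to the marrow
effective_msv = equivalent_msv * tissue_factor_marrow          # whole-body effective dose

print(equivalent_msv)  # 12.0 mSv/week equivalent dose to the marrow
print(effective_msv)   # ~1.4 mSv/week effective: "singles of millisieverts per week"
```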
One large population that does exist and gets a whole lot of radiation, however, is anyone that smokes anything, because when you take plant matter, which has a high surface area, and concentrate it, anything that it brings up from the roots in the soil, or that settles out on the leaves from the air, gets concentrated in the dry fraction, and then gets burned and inhaled. A lot of those heavy metals that are radon byproducts and such are fairly reactive. They'll stick around in your tissues and give you a whole lot of alpha dose. So when you have populations of people who have or haven't smoked, you actually can figure out the number of extra deaths attributable to things like indoor radon, depending on whether you live in a smoky atmosphere or not. And so to distinguish the types of biological effects that we're worried about, we can group these into two. There are short-term effects, which manifest themselves in hours, days, or weeks. We'll call that immediate. And then there are long-term effects, which tend to manifest in years at the shortest and decades at the longest. So things like acute radiation sickness are due to rapid cell death of a few different kinds over time. And which kind depends on the route of exposure, the isotope, the type of radiation, and the total amount of dose to those tissues. So what are some of the symptoms of acute radiation sickness? Like, did anyone read what happened to the folks in the demon core accident? AUDIENCE: That their hair fell out. MICHAEL SHORT: Hair fell out. What else? AUDIENCE: They vomit. MICHAEL SHORT: Vomiting. AUDIENCE: Diarrhea. MICHAEL SHORT: Diarrhea. All the fun ones, yeah. Well, we'll explain why these sorts of things happen with acute radiation exposure. Now if you don't get that much radiation exposure, but you do get enough to mutate cells, you have what are called delayed somatic effects, anything from cancer, to straight-up mutations, to birth defects.
Any sort of permanent and reproducible modification to a cell's DNA can induce mutations. So let's first talk about the short-term effects, because they're a little easier to understand, and because the doses were much higher, you don't need as much of a population size in order to figure out, did this amount of dose have an effect? So for things up to a quarter of a gray, pretty much nothing happens. That's quite a toasty dose. For gammas, that would be like getting five times your occupational yearly limit instantaneously. Yeah. This is not something you'd want to happen. But it's not going to cause any significant ill effects. Up to a gray, you'll start to see a few symptoms, like nausea and anorexia. They probably tend to go together. If you're feeling gastrointestinally horrible, you probably don't want to eat much. And you will see things like bone marrow damage, like we talked about with the radium workers. Fewer red and white blood cells and fewer platelets also means it's easier to bleed. So a lot of the effects of radiation damage are not primary, they're secondary. Just like most radiation damage to cells is not direct damage to the DNA, but radiolysis of the water near the DNA and eventual chemical migration that damages the DNA chemically. In this case, it's not like radiation takes out your platelets. Radiation takes out the cells that create the platelets, the bone marrow. Meaning that, since platelets live about three weeks, you'll tend to see a drop in platelet count when your production system gets knocked lower. This should sound strikingly similar to series radioactive decay, because the same equations can be used to model it. Let's say you have a normal, stable platelet count. Eh, I'm not going to get on the board. I told you guys we wouldn't get too derive-y anymore. But you've got some source of platelets, which would be your bone marrow. And you've got some sink of platelets, which would be normal cell death.
So let's say there's a half-life or a lifetime of platelets. If you kill a little bit of the source, then you'll see the platelet level start to decay. But the source will start to grow back over time from cell division. And you'll see the level pop back up again. And you can model it with the same first-order linear ordinary differential equations. The same ODEs as series radioactive decay can be used to guess how many platelets you should have in your body at any time following a certain dose. 1 to 3 grays is when things go from bad to worse pretty quick. Nausea, anorexia, and infection-- tell me, why do you think infection results from radiation damage? Yeah. Let's hear everything, yeah. Front to back, let's hear it. AUDIENCE: I was saying the immune system is most likely compromised because of bone marrow being compromised. MICHAEL SHORT: Yep. The immune system's compromised. What else? AUDIENCE: You're-- AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Is everyone going to say the same thing? OK. I have another story. So I agree with you guys. But it also has to do with these platelets. Anytime anything happens to you ever, cells tend to die. You clap your hands, you probably kill a few cells. You bump into something, you probably kill a few cells. You swallow some metal shavings, you're going to kill a lot of cells. But your body has got mucous membranes, and all sorts of things, and platelets in order to repair that damage. All of a sudden, if your blood thins out and can start leaking from different places, or it's a lot harder to repair physical leaks in your body, bacteria can get in. So the normal amount of bacteria you're exposed to every day, which is enormous-- there are theorized to be something like 10 times as many bacterial cells in your body as human cells. They're all over the place. They're just a lot smaller. Well, they can get into places that they wouldn't normally get in.
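The platelet source-and-sink model from a moment ago can be sketched numerically. This is a toy model: the ~3-week platelet lifetime is from the lecture, but the marrow recovery time constant and the surviving fraction are arbitrary illustrative values, not data.

```python
import math

TAU_PLATELET = 21.0  # platelet lifetime in days (~3 weeks, as mentioned in lecture)
TAU_RECOVERY = 14.0  # assumed marrow regrowth time constant (illustrative)
SURVIVING = 0.4      # assumed fraction of marrow surviving the dose (illustrative)

def simulate(days=120.0, dt=0.1):
    """Euler-integrate dP/dt = S(t) - P/tau: a source (marrow) feeding a
    decaying sink (platelets), the same first-order linear ODE as series
    radioactive decay. P is normalized so the healthy steady state is 1."""
    S0 = 1.0 / TAU_PLATELET  # source rate that balances decay at P = 1
    P, t = 1.0, 0.0
    history = [P]
    while t < days:
        # the source grows back exponentially toward its normal strength
        marrow = 1.0 - (1.0 - SURVIVING) * math.exp(-t / TAU_RECOVERY)
        P += (S0 * marrow - P / TAU_PLATELET) * dt
        t += dt
        history.append(P)
    return history

levels = simulate()
print(min(levels))  # the count dips, then pops back up toward 1.0
```

Running this shows exactly the behavior described: the platelet level sags after the source is knocked down, lags the marrow recovery by roughly one platelet lifetime, and then climbs back to normal.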
So what would normally be a pinprick and a simple immune response, with a suppressed immune system and a lower platelet count, becomes a much more dangerous thing. You could undergo something called sepsis. That's basically blood turning to sewage, because you get a massive blood infection. This is, again, let's say, a secondary or even a tertiary effect, but very real. Hematologic damage, more severe-- hema refers to blood, so that's basically saying the same thing. Recovery probable, though not assured. Why probable, and why not assured? AUDIENCE: Everybody reacts differently. MICHAEL SHORT: That's true. Everyone reacts differently. It also matters how much treatment you get. So if you get a crazy compromised immune system, we have hospitals, and sterile bubbles, and all sorts of things that you can be put in. But if you don't get to a hospital in time to reduce the onset of massive infection, that's what could happen. Then you go higher, 3 to 6 gray: everything as above, plus diarrhea, depilation-- hair loss-- and temporary sterility. I think the temporary sterility one's obvious. Why do you think diarrhea and hair loss would occur? AUDIENCE: Isn't it the fact that the cells of your intestines-- you can't hold it in anymore because of the damage. MICHAEL SHORT: Yeah. Exactly. The most sensitive cells are the ones that are rapidly dividing to make villi and stem cells in your intestines. Hair follicles, gonads, anything that's dividing all the time, is going to feel the effects of radiation damage much more severely. Or, barring any mutation, which may take a long time to manifest, the wrong damage to DNA means the cell just can't divide. So it dies. And if those cells die, then that means that you can't uptake nutrition. And your body just flushes everything out as diarrhea. Fatalities will occur in the range of 3 and 1/2 gray without treatment. And this is what's called the typical LD50. Does anyone know what an LD50 is? AUDIENCE: The lethal dose.
MICHAEL SHORT: The lethal dose for? AUDIENCE: 50% of the population. MICHAEL SHORT: Right. So about 50% of the people exposed to 3.5 gray will die. This doesn't take into account differences in treatment or differences between people; it's everything all together. And I'll go into what an LD50 for different things is in a second. And then over 6 gray, you get immediate incapacitation. It hits the nervous system. You get so many cells leaking out that the chemical signals for your neurons-- sodium, and potassium, and other ions-- get swamped. If all your cells die and leak out, then all of a sudden you're flooded with the ions that are normally kept in a very careful equilibrium for signaling. So you can actually get sudden unconsciousness in a matter of seconds to minutes from doses over 10 gray. You just go out like a light, like that, and may not recover. What's an LD50? It's the dose at which an effect shows up in 50% of the population. And there are different something-50 doses, depending on, let's say, whether something's therapeutic, toxic, or lethal. The example I like to give is selenium. Does anyone know anything about selenium in the diet? It's one of those trace minerals that you need to survive, but that can also kill you. You need to get, on average, about 5 micrograms of selenium in order to produce certain enzymes that keep things going in the body. 5 micrograms is not a lot. But you know that a little bit of selenium has a therapeutic effect. Once you get around 5 micrograms, most people will see some sort of biological benefit. If you get 5 milligrams, it starts to become toxic. And this is the case with pretty much anything. Vitamins-- anyone ever had-- this is probably going to be a no-- anyone ever eaten raw seal liver before? Or polar bear liver? I don't know. No one's gone way, way up north? AUDIENCE: Vitamin C. MICHAEL SHORT: Anyone-- do you know why?
Or-- AUDIENCE: Because they have too much vitamin A-- MICHAEL SHORT: Indeed. Vitamin A, something that you need a whole lot of to survive. It's so concentrated in the livers of seals and polar bears that if you were to just eat a polar bear liver, you would die of vitamin A poisoning. [INTERPOSING VOICES] AUDIENCE: --you'd be dead. MICHAEL SHORT: I didn't hear all those things at once. One at a time. AUDIENCE: If anyone ever offers you that, you just say no. MICHAEL SHORT: Just take a little taste. You know, it's all about the amount, right? What were you guys saying? AUDIENCE: If we had eaten it, wouldn't we have died? MICHAEL SHORT: Well, no. It's not like you take a taste and you die. Again, it's all about the amount of exposure. One little taste is not going to flood your system with vitamin A. But you eat an entire polar bear's liver, you're going to have a bad day. AUDIENCE: Why does it have so much vitamin A? MICHAEL SHORT: Wait, what? AUDIENCE: Why does it have so much vitamin A? MICHAEL SHORT: I don't know why polar bears have so much vitamin A. No idea, actually. But then beyond that, you can get lethal effects, where you might get sick from eating too much of something. Anyone ever heard of the old hold-your-wee-for-a-Wii contest? Where we really found out the LD50 of water? Yeah. So if you drink way too much water without any other solutes, you deplete your body of electrolytes. And then you can also die. So I ran into this experience personally. I went hiking with my dad in Nepal in 2009, the last vacation I've taken-- that's a long time ago. It's kind of cool. At MIT, it's fun enough here that I haven't felt like I've needed a vacation in, what, like seven years? I'm actually kind of taking one this year because I'm going somewhere for research and just sticking around. But we went hiking in Nepal, and I ate something I probably shouldn't have.
In fact, everyone eventually ate something they probably shouldn't have. And I had what could be described as massive GI syndrome-- Delhi belly, whatever you want to call it. My brother likes to call it poop-and-mouth disease, because sanitation and stuff is not the best there. And so I was in a pretty bad state. And instead of drinking plain water to replenish all of the water that was leaving the body in every direction, I was drinking salt water. We took tablets that had the same isotonic concentration of electrolytes and amino acids as those being lost by the body, because when water goes in the body, everything osmotically equilibrates. If you take in lots of pure water, a little bit of sodium, potassium, and other electrolytes will dissolve into that water. If it's going out in any direction, it's going to leave your body, depleting you of electrolytes. So I had seven wonderful days lying in bed, drinking about a liter of warm salt water every 15 minutes or so in order to maintain not just the water, but the electrolytes that your body needs. Freaky, huh? AUDIENCE: Sounds like a fun vacation. MICHAEL SHORT: Yeah, it was a great vacation. Is there any wonder why I don't want to take another one? If I go back there, I'm having nothing but Clif bars. It's hard to say no when folks that live up in the mountains offer you what little food they have, but you should really say no for your own safety. Anyway, yeah, there's an LD50 for water-- by any mechanism, from electrolyte depletion to-- there was a contest on the radio called hold your wee for a Wii. When the Nintendo Wii came out, they said, how much water can you drink without going to the bathroom? And someone's bladder exploded. AUDIENCE: Like, literally exploded? MICHAEL SHORT: Yeah. That's what I heard. Either it would be a bladder explosion or an electrolyte depletion.
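Dose-response curves like the LD50 are commonly modeled as a sigmoid. Here is a minimal sketch using a log-logistic (Hill-type) curve; the 3.5-gray untreated LD50 is the lecture's number, but the steepness parameter is chosen purely for illustration, not fit to any data:

```python
def fraction_affected(dose, d50, steepness=6.0):
    """Log-logistic dose-response: ~0 at low dose, exactly 0.5 at d50,
    approaching 1 at high dose. Mechanism-agnostic, like the LD50 itself."""
    if dose <= 0:
        return 0.0
    return 1.0 / (1.0 + (d50 / dose) ** steepness)

LD50 = 3.5  # gray: the untreated whole-body LD50 quoted in lecture
for d in (1.0, 3.5, 6.0):
    print(d, fraction_affected(d, LD50))
# at exactly the LD50, half the population is affected, by construction
```

The same functional form works for any "something-50" number (therapeutic, toxic, or lethal); only the d50 and steepness change, which is the lecturer's point that the LD50 does not distinguish by mechanism.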
So whatever the mechanism, the LD50 just tells you, if a population ingests a certain amount of something, or takes in a certain amount of radiation, how much will cause 50% to die. Or, for much lower doses, at what amount 50% will see some therapeutic effect by any mechanism. It doesn't distinguish by mechanism. So, the four phases of radiation damage. This is where all those Latin and bio roots really come in handy. The prodromal phase is the initial symptoms of exposure, which may or may not happen one to three days after exposure. For massive exposure, you're not going to see this, because you're not going to live one to three days. For very minor exposure, you may not even see these prodromal effects, like a drop in blood cell count or GI syndrome, because the dose might not be severe enough to overwhelm your body's ability to cope with it. The latent phase is the tricky one: an apparent recovery from the prodromal symptoms. So getting a medium dose of radiation-- let's call that like 2 to 5 gray-- will cause some nausea, vomiting, and headache. And then you get better. And then you get worse in the manifest illness phase, because a lot of the things that radiation will do can be immediate. If you suddenly cause the body to release serotonin and induce the vomiting reflex, that goes away once that serotonin is consumed or dealt with. I don't know exactly how the body would deal with it. And you might think you're getting better. But the cells that divide rapidly have still incurred that damage. And you won't see that damage until they fail to divide in their normal amount of time. So things like GI syndrome and hair loss might not show up for a few days afterwards, because it's not like your hair will just instantly fall out, like there's some cell that is holding onto your hair follicle and will just release it when irradiated.
But those follicles won't continue to produce the keratin at the same rate, or will produce it in a different way-- I can't speak that intelligently about the exact mechanism of hair loss, but it will take a little bit of time to get there. And the final phase is binary: do you recover or do you die? It could take days, to months, to years to figure that out. And these weren't in the reading, but I wanted to pull some much better tables about what happens in each of these phases as a function of radiation dose. So, when does vomiting onset? There are actually patterns to be seen here. For mild doses, it may take a couple of hours after exposure. You may not stimulate the immediate release of the hormones that induce vomiting. But then as the dose gets more and more severe, it could be anywhere from hours to less than 10 minutes. So you can use the onset of things like vomiting, diarrhea, headache, and loss of consciousness in severe cases to gauge the amount of dose someone has absorbed in some unknown accident. Because if you're in some severe nuclear accident, and you don't happen to be wearing a very large range dosimeter, how do you know how much dose you've got? And how do you know how to treat the person? Time can be your best weapon there, because except for very lethal doses, where you could go unconscious in seconds or minutes, you've got some time-- hours to days-- to treat what happened. And if you can say, all right, I know the time of exposure, and I know the time of onset of headache, of diarrhea, of vomiting, you can figure out, roughly, maybe within plus or minus a gray, how much dose you had and what to treat. There are probably smarter ways of doing this, but with nothing else, you've got time as a variable to help you figure this out. Why do you guys think that your body temperature would go up upon exposure to huge amounts of radiation? What's with the fever? What is the fever a response to? Or what could it be a response to? AUDIENCE: Infection.
MICHAEL SHORT: Infection. So any sort of sudden massive infection would mount an immune response. And that would cause a fever, because you've got all sorts of cells doing things, expending energy, trying to rid your body of the infection. What else? That's OK, something for you to read up on for the practice homework-- the one that I can't make due because it's after the last day of classes. What about-- let's see-- headache? I don't think we've explained that well. We'll get into the diarrhea stuff. Let's go into the latent phase. What tends to happen? Well, it looks like you get better, but blood work will tell you otherwise. And you can then tell how much dose you were exposed to after a certain amount of time by things like lymphocyte and granulocyte count-- different immune system blood cells-- also platelets, also all sorts of other things. You can tell by a drop in certain blood cell levels how much dose you've had. And you can sustain a certain drop in platelets and immune cells without any ill effects. Something like 30% to 40% of your platelets could go away and you're still not a hemophiliac, even temporarily. You're still going to be OK. You can form blood clots in response to a nosebleed or a bruise, and these things aren't going to be life-threatening. Diarrhea-- for low doses, you don't really get any. So it looks like intestinal cells may be a little bit more robust than bone marrow. Except with really severe doses, you'll start to see diarrhea pop up once those cells fail to divide. Once, let's say, the existing villi die off, new ones don't replace them, and you lose your ability to uptake nutrition. And then depilation, hair loss: beginning on day 15 or later, you might think you're out of the woods, and then all of a sudden, your hair starts to fall out. And that'll help tell you about how much dose you've had once again. And then the critical phase: what happens when things go from bad, to better, to worse?
How quickly does this happen? You tend to get things like infections-- more severe infections-- and disorientation. On longer times, like seven days, your platelet count is pretty proportional to dose. Same thing with the number of lymphocytes: lower, and lower, and lower. And the onset time gets smaller and smaller. And then you can see the lethality of these different doses, depending on the person, the treatments, the susceptibility, any sort of pre-existing conditions, which you might not know. Do you have a question? AUDIENCE: Yeah. I was going to ask, for cancer patients, when you hear about them losing their hair, are they actually getting doses in like the 2 to 4 gray range? MICHAEL SHORT: Cancer, yeah. AUDIENCE: Because it's so concentrated. MICHAEL SHORT: Radiation doses are pretty intense. So the dose to the tumor, for example, in proton therapy, which is the only one I've really read about, can be in the kilogray level. But the idea there is you fry the tumor, you kill it, and soon. Like, you go beyond the lethal dose for those cells, while inducing much less damage in the rest of the surrounding person. And that's the nice quirk of protons: you can do that in a very narrow range. The straggle on 250 MeV proton beams is on the order of microns-- less than millimeters-- which is pretty cool. But a lot of the hair loss can come from the chemotherapy. Chemotherapy is better known as poison. It's just a poison that affects tumor cells slightly more strongly than the rest of your cells. But it is nasty stuff. And it's the chemo that can cause the hair loss as well. Yeah. So would you attribute the hair loss to radiation or to chemo? I would say chances are it's chemo, depending on where the tumor is. I mean, if you have a localized proton beam coming in to treat a tumor there or there, you're not going to get much hair loss up here. But chemo penetrates throughout the whole body.
As far as X-ray therapy of a brain tumor goes, that I don't know. I really haven't looked into that. So, good question. And then the time and severity of these symptoms-- well, this is something I'd like you guys to read on your own, because it's tons of words on a screen. But it's something I suggest you read. It's not done that carefully in the reading, which is why I provided it here for you in the slides. And then going on to what these radiation symptoms mean, I wanted to translate a little bit of the Latin, Greek, whatever, roots into something you can understand. These hematopoietic symptoms-- anything to do with the blood, decreased platelets, immune suppression, all that kind of stuff. And the origin is that the stem cell system in your bone marrow breaks down and you don't make as much of all the components of blood as you normally should. The gastrointestinal symptoms come from the stem cells in the villi, those high surface area structures in your intestines that absorb the nutrition, which are also normally covered in a thick layer of mucus to keep all the bacteria from getting out. Because nutrients, like, let's say, minerals or small proteins, are a hell of a lot smaller than bacteria, they can diffuse or transport through the mucus much faster. So you can uptake the nutrition and not let the bad stuff in. And the neuro or cerebrovascular stuff is straight-up blasting of endothelial cells, the cells that line your blood vessels. There's a term called edema, which is fluid leakage. Has anyone ever seen pictures of folks with massive edema in the legs? Like, folks that, let's say, haven't gotten out of bed for years and their legs swell up? That's just fluid leaking into the intercellular spaces. I'd say take a look. I don't think I'm going to show pictures of it because it's kind of nasty on the screen. But if you want to know what edema looks like, then I suggest you look it up. There's plenty of horrific stuff on Google Images.
And so what happens in these hematopoietic cells? About 1 gray can knock out about a third of your bone marrow cells, and that's actually OK, because those surviving cells are redividing quite quickly. And that means that you won't have that much of a drop in blood cells, because, let's say you kill off a bunch of the bone marrow cells, but they redivide in a shorter lifetime than, let's say, the red blood cells or platelets live. You're not going to see that much of a dip in the blood cell levels, which are ultimately your main line of defense against sepsis. Things like destruction of bone marrow, yeah, that would-- that would be a bad thing. There's a whole lot of words here. I'd say this is better for you to read. I want to go through an explanation of some pictures of what tends to happen to, in this case, mouse bone marrow tissue after a lethal dose, 9.5 gray. That's what it looks like beforehand. That's what you're left with: very, very few cells. So that would definitely be what a lethal dose looks like, because the ability to make all the things that bone marrow makes has been almost eliminated in this tissue. So, a visual of what these sorts of things look like. For the gastrointestinal system-- I'm going to skip right ahead and show you what healthy and irradiated villi tend to look like. Does anyone not know what I mean when I say villi? OK, good. So, the little high surface area structures in your intestine that are normally great absorbers of nutrition, mostly due to their surface area, but also due to their structure and their biological function. And you tend to kill those off with a fair bit of radiation. So this is what it looks like after four days, and seven days, and then 12 days. Things can recover. As long as you don't kill all the cells, they will divide and they will reconquer. And if the organism can live long enough to allow for that natural healing to take place, then you can survive an acute dose of radiation.
So when we talk about why you need hospital treatment, it's basically to stand in for your body's normal functions while your body regenerates those functions. But for extremely severe cases-- go back to that table of how many, let's say, leukocytes or lymphocytes you have-- if you get down to the zero level, you've completely knocked out your body's ability to produce those. You might have a few cells left here or there. But at that point, there's not much anyone can do but make you comfortable. And then in this case, I think this was a human one-- yeah, OK. So, a healthy intestine from a human. It's got a rather small-- whatever that part is-- in the submucosa level. Lots of villi, lots of surface area. After radiation damage, when you have massive cell death, notice that the structures out here are pretty much gone. And there's a lot of scarring, or-- what's the word that they use? Severe fibrosis. Why would your body make scar tissue in response to radiation damage? Anytime your body senses that a whole lot of cells are dying, it's going to respond by attempting to repair. So, like, if you get, let's say, a small bit of surgery done, you could be left with some scar tissue. That's cells that have died, and when those cell contents leak out, they signal to the nearby cells, fix something. I can't speak any more intelligently about that, but the body does. And scar tissue is not what you want in your intestines, because that interferes with nutrition uptake, as well as killing the structures that are doing that uptake to begin with. Then there's the neurovascular stuff. Massive cell death from a huge amount of absorbed energy can just cause those cells to die and leak out, causing a lot of edema. That can cause a drop in blood pressure, which is also not good for you. This could be part of what leads to some of the unconsciousness. If you have a drop in blood pressure for any reason, that can make you go unconscious.
And there's pretty much not a prodromal or a latent phase. If you hit the neurovascular syndrome, you're pretty much going to go to the critical phase right away, within seconds, minutes, or a small number of hours. Here's another question: why the skin lesions? Because mature skin cells live about three weeks. If you kill off the skin cells in the dividing layer, and you don't form new ones, and the mature skin cells die, you end up with the grossest word in this class, moist desquamation. It kind of sounds like what it is. That's sloughing off of skin and leaving open sores, because you don't have the ability to regenerate that skin, which is normally your first line of defense against everything, and you've got fluid leaking out. And it's just-- yeah-- it's moist desquamation. Why the vomiting? Well, this question hasn't been fully answered yet. As of when I last looked at the literature, around 2011, there was a hypothesis that intestinal cells will secrete serotonin in certain conditions, including when they start dying, which would then stimulate a center in your medulla, the sort of automatic reflex center of the brain, to induce vomiting. Why might this be a good thing? We're not talking about radiation here, but why would you want to stimulate this vomiting reflex? AUDIENCE: In case whatever's going wrong is because of something you ate. MICHAEL SHORT: Yeah. So let's say you eat a wet aged steak-- you know, something that's left out on the table, or in the fridge, or, let's say, behind the fridge, or left to marinate in the sun. And you eat it, and those bacteria start killing everything. If those cells in your intestine die, they've got to send some signal far away to the brain to tell you to get everything out of the stomach. And that's what happens. So the body has developed these long-distance hormonal signaling mechanisms to say, something is going wrong, expel everything, because it's probably bad for you.
So radiation damage to these cells, which will kill them, may trigger the same effects. If those cells have little pockets or organelles that contain these hormones, and cell death causes instantaneous secretion, that might do the same thing, too. But as far as this paper goes, it's a hypothesis. It's not necessarily proven. But it does correlate inversely with the amount of time to vomiting, in terms of dose and time to vomiting. So that much we do know. And then on to the long-term effects. There are two that are really important: cancer risk and birth defect risk. You won't tend to see this happen, despite popular media. But you will see a lot of bad stuff happen. These are extremely difficult to wrap our heads around. And the reason for that is the population size required in order to do a proper study with proper statistics. To say with confidence that, let's say, a dose of 1 milligray has some amount of excess risk, you'd need to expose 61.8 million people, plus a similarly sized control group, to distinguish whether or not 0.1 milligray has an additional amount of risk. So let's say, for gammas to the whole body, what's 0.1 milligray-- sorry-- 1 milligray in terms of dose in sieverts? 1 millisievert. The tissue factor is 1, the gamma radiation quality factor is 1, so that's 1 millisievert. That's 20 years of allowed exposure at the same time. Or 10 times a 100-microsievert exposure at once, which has been said to be maybe the onset of a detectable amount of damage. Pretty difficult, and our sources of data for these doses are a lot smaller than we need-- with the exception of very high irradiations-- to make any real conclusions. The largest sample size we have besides smokers would be atomic bomb survivors. So folks have followed all of the survivors of the Hiroshima and Nagasaki bombings. Not just the people nearby, but in the surrounding countryside.
And they tried to follow how many excess cancers there were as a result of the radiation. For anyone exposed within 3 kilometers to less than 5 milligray, you can attribute basically either one or none. So by following this group of people and finding out how many of them got cancer compared to control groups, you can try and figure out how much extra cancer was due to radiation. And to graph this-- this is actually in, I think, the ICRP publication-- graphing the amount of relative risk, or to use the words from the last studies we saw, the Odds Ratio-- the OR-- of getting cancer. An odds ratio of 1 means exactly the same amount of risk with versus without the radiation. And the actual raw data points are plotted here. And there are a couple of lines drawn through here. And this is the source of a lot of the controversy behind radiation damage nowadays. The black line is the LNT, or the Linear No-Threshold model, which is a hypothesis that says every amount of radiation is bad, and the harm is linear with dose. I, for one, don't believe this model. This is, to me, a fear-based model. It's certainly easy to make policy based on this, because I think your average congressman can understand a linear graph. Not sure whether they could understand p-values and statistics. But they don't have to. It's what they ask scientists to testify about. When you look at the actual data, there's this kind of funky shaped line along with plus or minus 1 sigma error bars. It doesn't really look linear no-threshold, does it? It actually looks like it might be superlinear for very small doses. And then it tails off, and then it picks up again. But this right here is a zoom-in of this data-rich area of the graph. It actually looks like for really high doses, it might be a little superlinear again, where things get much, much worse. Hopefully you don't have anyone exposed to, let's say, 2 gray of dose, but the real controversy is here, in the small dose region.
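The earlier point about needing tens of millions of subjects can be made concrete with the standard two-proportion sample-size formula. The baseline cancer rate and LNT-style excess-risk coefficient below are illustrative assumptions, not the inputs behind the 61.8 million figure quoted in lecture:

```python
def subjects_per_arm(p_control, p_exposed, z_alpha=1.96, z_beta=0.84):
    """Approximate subjects needed in EACH arm (exposed and control) to
    detect a difference between two proportions at 5% significance with
    80% power, using the pooled-variance approximation."""
    p_bar = (p_control + p_exposed) / 2
    delta = abs(p_exposed - p_control)
    return (z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar) / delta ** 2

baseline = 0.25        # assumed lifetime cancer incidence (illustrative)
excess = 0.05 * 0.001  # LNT-style 5%/Sv risk coefficient x 1 mSv (illustrative)
n = subjects_per_arm(baseline, baseline + excess)
print(f"{n:.2e}")  # an enormous number per arm: why low-dose risk is so hard to measure
```

Whatever specific assumptions you plug in, the divisor is the square of a tiny risk difference, so the required population explodes, which is exactly why the small-dose region of the curve stays controversial.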
We don't really know enough to say whether very small doses are harmful or not. In fact, they might even be helpful. So you guys-- I think I had you last year. Do you remember the answer to what the idea is called, that a little bit of radiation might be good for you? From the last class? Anyone remember what that's called? AUDIENCE: Hormesis? MICHAEL SHORT: Hormesis, yeah. This idea that a little bit of something bad could actually be good for you. This is also a theory, and to my knowledge, has not been proven to be true. But it is evident in some other studies, along with different fields of research beyond radiation. For example, there had been an experiment where rats were kept in shielded lead boxes, where they got less radiation, as opposed to just out on the bench. And the rats that had less radiation had less incidence of cancer. However, it's extremely difficult to remove all other confounding variables from this data. And that's the trick there: when trying to tease out whether small amounts of radiation are bad for you, you also have to tease out confounding variables, or other things that might be obscuring your data. Why the hormetic effect? So what are some of the ideas behind why hormesis might happen? There are some theories, and some controlled studies, showing that if you irradiate cells very lightly, they mount an immune response. There are proteins and things circulating throughout your cells that are there to repair DNA. And if you stimulate the production of those repair mechanisms, then the repair will be more rapid given the same amount of stimulus. So in this case there are-- I'm just going to say proteins, I can't say anything more-- that will actually travel along DNA, looking for certain types of kinks or breaks, and repair them. If those repairs happen before cell division, then the mutation is avoided.
If you have more of those repair mechanisms, it takes a little bit more energy to make them, but you also have less of a chance of a mutation manifesting itself past division number one. So this is, at the cellular level, kind of the idea of why hormesis might be true: because you stimulate your body's ability to defend against this kind of stuff. And so there you go. Cells can actually signal each other. So let's say a cell undergoes DNA damage and can't divide. These cells can actually send what they call kill signals through the intercellular space to the nearby cells, stimulating them to mount some sort of response-- either release something, or divide more to make up for the dead cells-- which could be good or which could be bad. If you make more of these DNA repair mechanisms, that's probably a good thing. If you stimulate the nearby cells to divide faster, well, what are the two things that could happen? Yeah? AUDIENCE: More mutations. MICHAEL SHORT: Why do you say more mutations? AUDIENCE: Well, I mean, if you have cells that were in a radiation environment, that are exposed to that radiation, and you're dividing faster, each division has a certain chance of mutation. More divisions overall means more mutations [INAUDIBLE] MICHAEL SHORT: Exactly, yeah. If there are cells nearby that have been exposed or mutated and you induce faster division, you may induce faster manifestation of this mutation. But also, if, let's say, a few cells die and the other ones divide to take up the slack, that might be a good thing. This is a normal way that you repair injury: upon cell death, the cells nearby divide faster, fill in the gaps, and try and repair the tissue. So it can be both a good thing and a bad thing, depending on what the nearby cells have been exposed to.
And so there's also what they call the bystander effect, where, interestingly, you can have biological effects in cells that receive no radiation exposure if they're near cells that have received radiation exposure. There are some awesome experiments showing this. We had one here, back when we had a professor that did medical physics. She had created an accelerator with a microbeam, like a micron-wide proton beam, where you could irradiate single cells and watch what happens to the cells nearby-- so you could study this bystander effect in a controlled way. So if you irradiate one cell on a glass slide, how do the other ones respond? You know which one was irradiated, and you can watch what happens to the other ones-- pretty slick. Parts of that accelerator actually live on in the DANTE proton accelerator that we now use for physics and things. But a lot of the parts from those machines are still here; just the microbeams and the cell parts aren't. And then I highlighted a few of these passages on the DNA damage bystander effect. One of the reasons is that when cells nearby divide, they scale up their metabolism. They have to burn more energy in order to undergo division faster. And that involves what's called oxidative metabolism. Cells can produce energy aerobically or anaerobically. When you're dividing very quickly, all of a sudden, you start burning more oxygen to divide faster, to do whatever you have to do. And that oxidative metabolism also creates free radicals, just from normal wear and tear to your cells. And those oxidative byproducts may also induce mutations in the same primary way that radiation does. Radiation does radiolysis, making oxidative species that damage DNA. Chemical oxidative metabolism can produce the same sorts of things that can damage DNA in the same way, just with a different initial effect. I'm going to stop here, even though we only have a few slides to go, because it's exactly five of. |
MIT_2201_Introduction_to_Nuclear_Engineering_and_Ionizing_Radiation_Fall_2016 | 20_How_Nuclear_Energy_Works.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: So today, I wanted to give you some context for why we're learning about all the neutron stuff and go over all the reactor types. Until this year, the first time you learned about the non-light-water reactors at MIT was once you left MIT. I remember that as an undergrad as well. The only exposure we had to non-light-water reactors was in our design course, because we decided to design one. So I wanted to show you guys all the different types of reactors that are out there, how they work, and start generating and marinating in all the different variables and nomenclature that we'll use to develop the neutron transport and neutron diffusion equations. The nice part is, from now until quiz two, you can pretty much forget about the concept of charge. So 8.02 can go back on the shelf, because every interaction we do here is neutral-- charge neutral. There'll be radioactive decays where that's not the case. But everything neutron is neutral. It doesn't mean it's going to be simple. It's just going to be different. But in the meantime, today is not going to be particularly intense, but I do want to show you where we're going. And this goes with the pedagogical switch that we made in this department starting this year. And you guys are the first trial of this. We're switching to context first and theory second. I personally find it much more interesting to study the theory of something for which I know the application exists. Who here would agree? Just about actually everybody. OK. Yeah. That's what I thought too.
So in the end, we had arguments amongst the faculty about, well, you have to learn the theory to understand the application. And that works really well when you say it behind the closed office door by yourself. But the fact is, I'm in it for-- yeah. I'm in it for maximum subject matter retention, so in whatever order that works the best. And sounds like, for you guys, this works the best. That's what we're doing with the whole undergrad curriculum, not just this class. So let's launch into all the different methods of making nuclear power, both fission and fusion, and to switch gears since we're dealing with neutrons. I don't know what happened with the-- oh, there we go. The idea here is that neutrons hit things like uranium and plutonium, the fissile isotopes that you guys saw on the exam, and caused the release of other neutrons. And as we come up with these variables, I'm going to start laying them out here. It might take more than a board to fill them all. And I'll warn you ahead of time, this is the only time in this course that we're going to have V and nu, the Greek letter nu, on the board at the same time. And I'm going to make it really obvious which one is nu and which one is V. So this parameter that describes how many neutrons come out from each fission reaction we refer to as nu, or the average number you'll see in the data tables as nu bar. And so as we come up with these sorts of things, I will start going over them. And the idea here is that each uranium-235, or plutonium, or whatever nucleus begets two to three neutrons, the exact number for which is still under a hot debate, and I don't think it actually matters, will make a couple of fission products that take away most of the heat of the nuclear reaction. And I just want to stop there, even though you know there's going to be a chain reaction. And that's what makes nuclear power happen. 
And we can go over the timeline of what actually happens in fission and what kind of a nuclear reaction it really is. So in this case, this is a reaction where a neutron is heading towards, this time we're actually going to give it a label, a uranium-235 nucleus. And it very temporarily, like I showed you yesterday, forms a compound nucleus, some sort of large excited nucleus that lasts for about 10 to the minus 14 seconds. So it doesn't instantly fizz apart. There's actually a neutron absorption event, some sort of nuclear instability, at which point your two fission products break off. Notice, you don't have-- let's call them fission product one and fission product two. Notice, you don't quite have any neutrons yet. Neutron production is not instantaneous for the following reason. If you remember back to nuclear stability, when we plotted, let's say, I think that was maybe Z and this was N. And I think this was a homework problem. And you had to come up with some sort of curve of best fit for the most stable combination of NZ for a nucleus. It was not a straight line. It was something on the order of like N equals-- what is it? --1.0055Z plus some constant, something with a rather small slope. Well, if you have a heavy nucleus, like uranium-235, and you split it apart evenly, let's just pretend it splits evenly for now, you're kind of splitting that nucleus along a rather unstable line. And, as you saw in the semi-empirical mass formula, a little bit of instability goes a really long way towards making the nucleus extremely unstable. So let's say you'd make a couple of fission products that just cleaved that nucleus with the same proportion of protons and neutrons. How would they decay? Or how can they decay? There's a couple different ways. What do you guys think? AUDIENCE: It can emit neutrons. MICHAEL SHORT: It can emit neutrons if it's really unstable, at which point it would just go down a neutron number. Or how else could it decay? AUDIENCE: Alpha decay. 
MICHAEL SHORT: Alpha decay. Let's see, yeah, a lot of those will-- the heavier ones tend to do alpha decay. What would it do at alpha decay? For alpha, I guess it will be going that direction, right? You know what? I'm not going to rule that out yet. So let's go with that. How else could they decay? AUDIENCE: Through beta decay. MICHAEL SHORT: Through beta decay, let's say in that direction. Pretty much all of these happen, just not necessarily in this order. When you have a really, really asymmetric nucleus, a lot of these fission products will emit neutrons almost instantaneously, in the realm of like 10 to the minus 17 seconds, some incredibly short timeline. You will start to decay downwards a little bit. But you're not quite at the stability line, which is why a lot of the fission products then go on. And they deposit their kinetic energy by bouncing around the different atoms in the material, creating heat. But a lot of them will also send off betas or gammas. And it may take 10 to the minus 13 seconds, or whatever the half-life of that particular isotope is, for them to do so. And after around, let's say, 10 to the minus 10 to 10 to the minus 6 seconds, depending on the isotope and the medium, those two fission products will stop. And let's just say that they stop there. So the whole process of fission is actually quite a compound process. First, the neutron is absorbed, forming a compound nucleus. Then it splits apart. Then those individual fission products undergo whatever decays suit them best. And that's the source of the neutrons in fission. Sometimes one of those fission products might be particularly unstable. And it might send off two neutrons. In other cases, though I don't know of one off the top of my head, it might be none. But this is the whole timeline of events in fission and the justification for why this happens, straight from the first month of 22.01.
And I wanted to pull up some of the nuclear data so you can see what these values tend to look like and also where to find them. I'm going to do that screen cloning thing again. There we go. So I've already pre-pulled up the JANIS library. I've already clicked on uranium-235. Thanks to you guys, I have all the data now on my shirt so you can see a little better. I also have it on the screen. So let's look at this value right here, nu bar total, neutron production. And I'll make it bigger so it's easier to see. Did I click on the right one? Yeah. So take a look at that. The total number of neutrons produced during U-235, for most energies it's hovering around the 2.4 or so. There's been arguments about whether it's 2.43 or 2.44. And that's a linear scale. That's not very helpful. Let's go to a logarithmic scale. That's more like what I'm used to seeing. Most of the fission happens for U-235 in the thermal region, in the region where the neutrons are at values, let's say, the cutoff is usually about one electron volt or lower in average energy. And nu bar is fantastically constant at that level. Then as you go up and up in energy, you start to make more and more neutrons. Why do you guys think that would be the case? What are you doing to that compound nucleus as you increase the incoming neutron energy? AUDIENCE: It's going to have more energy. MICHAEL SHORT: It's going to have more energy itself. You might excite other nuclear states that can then lead to other sorts of decays or other neutron emission. So to me, that's the reason why, once you hit about 1 MeV, you can start to see a lot more neutrons being given off. The reason we usually treat this as a constant, notice I haven't given it an energy dependence, is because most of the fission that happens is at thermal energies. For that, I want to show you the fission cross section. There are a lot of cross sections. And it's probably going to be on a different graph, because it's in different units. 
And this gives you a rough measure, per atom, of what's the probability of fission happening as a function of incoming neutron energy. At those high energies, you have relatively low cross sections, or low probabilities, of fission happening. Then there's this crazy resonance region that looks like a sideways mustache. But then as you get down to the lower energy levels, it gets much more, in fact, exponentially more, likely that fission will happen. So almost all the fissioning in a light water reactor, or any sort of other thermal reactor, happens at thermal energies. And that's why we take nu bar as a constant. You don't have to, especially if you're analyzing what's called a fast reactor, or a reactor whose neutron population remains fast on purpose. And so with that, I want to launch into some of the different types of reactors that you might see. And you guys already did those calculations in problem set one, so I don't have to repeat them for you. Let's get right into the acronyms. So if you haven't figured this out already, nuclear is a pretty acronym dense field. Can anyone say they know all the acronyms on this slide? You're going to know about 90% of them in about 90 minutes. So it's OK. Or you'll have seen them at least. Any look completely unfamiliar? AUDIENCE: Most of them. MICHAEL SHORT: Most of them? [LAUGHTER] Well, let's knock them off. So [INAUDIBLE], last Thursday, already showed you the basic layout of a boiling water reactor, one of the types of light water reactors. And the reason that this is a thermal reactor is because it's full of water. Water, as we saw in our old Q equation argument, is very good at stopping neutrons. Because, if you guys remember this, the maximum change in energy that a neutron can get is related to (1 minus alpha) times its incoming energy, where this alpha is just (A minus 1 over A plus 1) squared. A is the mass number of whatever the neutrons are hitting.
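That maximum-energy-transfer relation is easy to check numerically (a minimal sketch; the function names are mine):

```python
# Elastic-scattering energy transfer: a neutron hitting a nucleus of
# mass number A keeps at least alpha = ((A-1)/(A+1))^2 of its energy,
# so the maximum fractional energy loss in one collision is 1 - alpha.

def alpha(A):
    return ((A - 1) / (A + 1)) ** 2

def max_fractional_loss(A):
    return 1.0 - alpha(A)

for nucleus, A in [("H-1", 1), ("C-12", 12), ("U-238", 238)]:
    print(nucleus, round(max_fractional_loss(A), 3))
# H-1 gives 1.0: a head-on collision with hydrogen can take all of the
# neutron's energy, which is why water moderates so effectively.
```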
And that factor comes directly from the neutron's mass number of 1. If you remember, this was the simplest reduction of the Q equation, the generalized Q equation for kinematics that we looked at. When I said let's do the general form, then OK, let's take the simplest form: neutron elastic scattering. Here's where it comes back. If a neutron hits water, which is made mostly of hydrogen, and A is 1, then it can transfer a maximum of all of its energy, let's say, to that hydrogen atom, therefore giving the neutron no energy and thermalizing it, or slowing it down, very quickly. To show you what one of these things actually looks like, that's the underside of a BWR. Did [INAUDIBLE] show you this before? OK. So you've already seen what this generally looks like. What about the turbine? Has anyone actually seen a turbine this size close up, a gigawatt-electric turbine? I'm trying to see which one of those pixels is a person. I don't see anything person-sized. There's a ladder that looks to be about 6 feet tall, to give you guys a sense of scale of the sort of turbines that we say, oh, yeah, we draw a turbine on our diagram. Well, it's not actually that simple. These things take up entire hallways, or kind of airport-hangar-sized buildings. I've never seen one in the US, but I've seen one in Japan. It was a lot cleaner than this. But, otherwise, it looked pretty much the same. And the way this actually works, for those who haven't taken any thermo classes yet, is this turbine is full of different sets of blades that are curved at an angle so that when steam shoots in, it transfers some of its energy to get the turbine rotating. And there's going to be a generator, kind of like an alternator, to generate the electricity there, which looks to be roughly 100 feet away. Just to give you a sense of scale for this stuff. As [INAUDIBLE] showed you, a pressurized water reactor is another kind of light water reactor with what's called an indirect cycle. So this water stays pressurized.
It also stays liquid, which is good for neutron moderation or slowing down. Because in addition to the probability of any interaction, some probability sigma, if you want to get the total reaction probability, you have to multiply by its number density to get a macroscopic cross section. This is why I introduce this stuff way at the beginning of class, so you'd have time to marinate in it and then bring it back and remember what it was all about. And so every single reaction that goes on in a nuclear reactor has got its own cross section. We'll probably need half the board for this one. You can say you have a total microscopic cross section. These are all going to be as a function of neutron energy. What's the probability of anything happening at all? And these are actually tabulated up on the JANIS website. So let's unclick that, get rid of neutron production, and go all the way to the top, n comma total. So all this stuff is written in nuclear reaction parlance, where if you have, let's say, n comma total, that means a neutron comes in, and that's the reaction that you're looking at. So this data file here, once I open it up, will give you the probability that anything at all will happen. You can see as the neutron energy gets higher, the probability of anything happening at all gets less, and less, and less. And it follows the shape of most of the other cross sections. And I'm going to leave this up right there. You've also got a few different kinds of reactions. You can have a scatter. Let's call that scatter, which we've already said can either be elastic or inelastic. It may not matter to us from the point of view of neutron physics whether the collision is elastic or inelastic. All that matters is the neutron goes in, and a slower neutron comes out. Because what we're really concerned with here is tracking the full population of neutrons at any point in the reactor. 
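The Sigma = N times sigma bookkeeping just described can be worked for liquid water (a sketch; the 103-barn thermal scattering value per water molecule is a rough textbook-style figure, not an evaluated-library number):

```python
# Macroscopic cross section Sigma = N * sigma:
# N = number density (rho * N_A / M), sigma in cm^2 (1 barn = 1e-24 cm^2).
# Sigma has units of 1/cm, and 1/Sigma is the mean free path.

AVOGADRO = 6.022e23  # atoms or molecules per mole

def number_density(rho_g_cm3, molar_mass_g_mol):
    return rho_g_cm3 * AVOGADRO / molar_mass_g_mol  # per cm^3

def macroscopic_xs(sigma_barns, n_per_cm3):
    return sigma_barns * 1e-24 * n_per_cm3  # 1/cm

n_h2o = number_density(1.0, 18.0)       # water molecules per cm^3
Sigma_s = macroscopic_xs(103.0, n_h2o)  # rough thermal scattering value
print(Sigma_s, "per cm; mean free path =", 1.0 / Sigma_s, "cm")
```

The centimeter-scale mean free path that falls out is why a thermal neutron doesn't get far in water before scattering, and why water's high number density matters as much as its per-molecule cross section.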
So we'll give this neutron population a position vector r, which has just got x, y, and z in it, or whatever other coordinate system you might happen to use. I prefer Cartesian, because it makes sense. At every energy, going in any direction-- so we now have a solid angle vector that's got both theta and phi in it-- at any given time. And the whole goal of what we're going to be doing today and all of next week is to find out, how do you solve for and simplify this population of neutrons? Make sure to fill that in as velocity. Let's see. Let me get back to the cross sections and stuff. If we want to know how many neutrons are in a certain little volume element, in some d volume, in some certain little increment of energy, dE, traveling in some very small solid angle, d omega-- if you have this function, then you know the direction, and location, and speed of every single neutron everywhere in the reactor. And this is eventually the goal of things like Ben and Kord's group, the Computational Reactor Physics Group: to solve for this, or a simplified version of it, over, and over, and over again for different sorts of geometries. And in order to do so, you need to know the rates of reactions of every kind of possible reaction that could take a neutron out of its current position, if it happens to be moving, which most of them are, or out of its current energy group-- which pretty much any reaction will do, since pretty much any reaction causes the neutron to lose energy. What's the only reaction we've talked about where the neutron loses absolutely no energy? It's a type of scattering. AUDIENCE: Forward scattering? MICHAEL SHORT: Yep, exactly, forward scattering. So for forward scattering, that's the case where the scattering angle theta equals 0. Again, you missed. The neutron didn't actually change direction at all. And, therefore, it didn't transfer any energy.
But for everything else, for every other possible reaction, there's going to be an energy change associated with it and probably some corresponding change in angle, because a neutron can't just be moving, and hit something, and continue moving more slowly. There's got to be some change in momentum to balance along with that change of energy. And it might slightly move in some different direction. And all this is happening as a function of time. As you can see, this gets pretty hairy pretty quick. That's why we put the full equation for this on our department t-shirts. But no one ever solves the full thing. What we're going to be going over is, how do you simplify it into something you can solve with a pen and paper or possibly a gigantic computer? But it's not impossible. So inside this sigma total, we talked about different scattering. And then you could have absorption in all its different forms. What sort of reactions with a neutron would cause it to be absorbed? AUDIENCE: Fission. MICHAEL SHORT: Yes, fission. Thank you. So there's going to be some sigma fission cross section as a function of energy. And if it doesn't fizz, but it is absorbed, we'll call that capture. But capture can mean a whole bunch of different things too, right? There could be also a whole bunch of other nuclear reactions. There could be a reaction where one neutron comes in, two neutrons go out, like we looked at with beryllium in the Chadwick paper from the first day or like what actually does exist for this stuff. So JANIS doesn't like multi-touch, so you have to bear with me on the small print on the screen. But there should be-- yep, here it is. Cross section number 16, there is a probability that one neutron goes in. That z right there is whatever your incoming particle happens to be. And in this case, we know it's a neutron, because we picked incident neutron data. And 2n means two neutrons come out. Let's plot that cross section. 
You can see that the value is 0 until you hit about 4 or 5. Oh, it's actually 5.297781 MeV. So that's the threshold, from the Q value, at which this particular reaction happens to turn on. Might be responsible for a little bit of the blip in the total cross section. So technically, if we were to turn on every single cross section in this database, it should add up to that red line right there. So you can start to get an idea for how much of all the reactions of uranium-235 are due to fission. That's the one we want to exploit. So let's find fission, right down there. Oh, wow, there's a 3n reaction. I want to see that. That doesn't happen until 12 MeV. Yeah. So neutrons don't typically tend to hit 12 MeV in a fission reactor. So this is a perfect flimsy pretext to bring in another variable. It's called the chi spectrum, or what's called the fission birth spectrum. Yeah. We've already talked about the neutrons being born and how many there were. But we didn't say at what energy they're born. In fusion reactors, this is pretty simple. You've already looked at this case. What is it? 14.1 MeV. That's a lot simpler. That's the fusion case. For fission, it's not so simple. For the case of fission, if you draw energy versus this chi spectrum, it takes an interesting looking curve from about 1 MeV to about 10 MeV, with the average energy being around 2 MeV. So you aren't really going to get neutrons at the energy required for a 3n reaction in a regular fission reactor, just not going to happen. But it's good that you know that that exists. So let's go and answer my original question. How much of the total cross section is due to fission? Most of it, especially at low energies. So let me get rid of those 2n and 3n ones, because they're kind of ruining our data. They're making it harder to see. That's better. So you can see at energies below around, let's say, a keV or so, almost all of the reactions happening with neutrons in uranium-235 are fission.
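The chi spectrum just mentioned is commonly fitted with the Watt form, chi(E) proportional to exp(-E/a) times sinh(sqrt(b*E)). The a = 0.988 MeV and b = 2.249 per MeV values below are the ones often quoted for thermal fission of U-235; treat this as an illustrative sketch rather than evaluated data:

```python
import math

A_WATT = 0.988  # MeV, commonly quoted for thermal fission of U-235
B_WATT = 2.249  # 1/MeV

def watt(E_mev):
    """Unnormalized Watt fission birth spectrum chi(E)."""
    return math.exp(-E_mev / A_WATT) * math.sinh(math.sqrt(B_WATT * E_mev))

# Simple Riemann sum for the mean birth energy over 0-20 MeV.
dE = 0.001
energies = [i * dE for i in range(1, 20001)]
norm = sum(watt(E) * dE for E in energies)
mean = sum(E * watt(E) * dE for E in energies) / norm
print(round(mean, 2), "MeV")  # about 2 MeV
```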
That dominance of fission at low energies is part of what makes U-235 such a particularly good isotope to use in reactors. The other reason is, you can find it in the ground, unlike most of the other fissile isotopes-- unlike, I think, any of the other fissile isotopes. Thorium you've got to breed and turn into uranium-233. I'll have to think about that one. But then you start to look at the other components of this cross section, like (n, n prime), inelastic scattering, which doesn't turn on until about 0.002 MeV, but later on is one of the major contributors and actually is responsible for-- wait, I brought this for a reason-- is responsible for that little bump in the total cross section. So eventually all these things do matter. But let's think about which ones we actually care about at all, because what we eventually want to do is develop some sort of neutron balance equation. If we can measure the change in the number of neutrons as a function of position, energy, angle, and time-- and that would probably be a partial derivative, because there are like seven variables here-- then before I write any equations, it's just going to be a measure of the gains minus the losses. And while every particular reaction has its own cross section, there are only going to be a few that we care about. There will only be one or two types of reactions that can result in a gain of the neutron population into a certain volume with a certain energy with a certain angle. And for losses, there's only one we really care about, total, because any interaction with a neutron is going to cause that neutron to leave this little group of perfect position, energy, and angle. So that's where we're going. We'll probably start down that route on Tuesday, because I promised you guys context today. You've all been to the MIT Research Reactor. A couple of you-- are you running it yet? AUDIENCE: Yeah. MICHAEL SHORT: Awesome. OK. Yeah. Yeah, so Sarah and Jared are doing that. Anyone else training or trained? No.
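Returning to the gains-minus-losses idea for a moment: a drastically simplified, one-group, infinite-medium version of that balance can be marched forward in time (all cross-section numbers below are made up for illustration, not real reactor data):

```python
# One-group, infinite-medium neutron balance:
#   dn/dt = (nu * Sigma_f - Sigma_a) * v * n
# Gains are fission births; losses are all absorptions. No leakage,
# no energy or angle dependence -- just the population n(t).

NU = 2.43        # neutrons per fission (thermal U-235)
SIGMA_F = 0.05   # macroscopic fission cross section, 1/cm (illustrative)
SIGMA_A = 0.12   # macroscopic absorption cross section, 1/cm (illustrative)
V = 2.2e5        # thermal neutron speed, cm/s

def dn_dt(n):
    return (NU * SIGMA_F - SIGMA_A) * V * n

n, dt = 1.0, 1e-7
for _ in range(1000):       # forward-Euler march over 0.1 ms
    n += dn_dt(n) * dt
print(n)  # > 1: slightly supercritical for these made-up numbers
```

The sign of (nu * Sigma_f - Sigma_a) is the whole story in this toy version: positive and the population grows, negative and it dies away, which is the scalar shadow of the full seven-variable balance the lecture is building toward.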
I'd say folks are usually pretty scared when they find out MIT has a reactor. And they're even more scared when they find out you guys run it. AUDIENCE: Yeah. MICHAEL SHORT: What they don't realize is there's been basically no problems since 1954. The only one I know of is someone fell asleep at the controls once and forgot to push the Don't Call Fox News button, and it called Fox News or something. So there was a big story about being asleep at the helm, ignoring all of the alarms, and passive safety systems, and backup operators, and everything else that actually made sure that nothing happened. But nowadays, correct me if I'm wrong, you actually have to get up every half hour, reach around a panel, and hit a button, right? AUDIENCE: No. It's on console, but it beeps at you. MICHAEL SHORT: Ah. AUDIENCE: Yeah, it's pretty tiring. MICHAEL SHORT: So you want to hit it before it beeps at you. AUDIENCE: It's reminding you to take hourly logs. MICHAEL SHORT: OK. AUDIENCE: It does go off every half hour. AUDIENCE: It is half hour, but we don't do [INAUDIBLE]. MICHAEL SHORT: Ah, OK, yeah. I'd heard the button's every half hour. Gotcha. Cool. Yeah, so for all of you watching on camera or whatever, just know that these guys have got it under control. So on to some gas-cooled reactors, and to explain some of these acronyms. There are some that use natural uranium, though for pretty much all the ones in this country, you need to enrich the uranium to get enough U-235 to turn the reaction on. And you'll also see these acronyms, LEU, MEU, or HEU, standing for Low, Medium, or High Enrichment Uranium. The accepted standard for what's low enriched uranium is 20% or below. An interesting fact, though: you can't have something at 19.99% enriched uranium and expect it to be low enriched uranium, because every measurement technique has some error.
And what really determines if it's LEU is, when an inspector comes and takes a sample, it had better be below 20% including their error. So you'll usually see 19.75% given as the LEU limit, because there's always some processing error, inhomogeneity, measurement error. Hedge your bets, pretty much. In England-- or the UK-- the advanced gas-cooled reactors have been churning along for decades. They actually use CO2 as the coolant, which is relatively inert. And they use graphite as the moderator. So in this case, the coolant and the moderator are separate, unlike in the light water reactors we have. So this way, the graphite, right here, just sits in solid form and slows down the neutrons-- not quite as good as water, but pretty good. There is an issue, though, that CO2, just like anything, has a natural decomposition reaction, where CO2 naturally is in equilibrium with CO and O2. And O2 plus graphite yields CO2 gas. The graphite was solid. In talking with a couple of folks from the National Nuclear Laboratory, they said that 40 years later, when they took the caps off these reactors, a lot of that graphite was just gone-- with a good explanation. It vaporized very, very, very slowly over 40 years or so due to this natural recombination with whatever little bit of O2 is in equilibrium with CO2, and possibly some other leaks. I'm sure I wouldn't have been told that if there was a leak. So I'd say the feasibility is high, because they've been running for almost half a century. The power density is very low. Why do you guys think that's the case? Yeah. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Mm-hm. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Absolutely. So well, let's say, you need the same cooling capacity, but you're right. CO2, even if pressurized, is not as good a heat transfer medium as water. Water is dense. It's also got one of the highest heat capacities of anything we've ever seen. The other reason is right here.
If you want enough reaction density, then it not only matters what the per atom density is, but what the number density is. And if you're using gaseous CO2 coolant, even if it's pressurized, there are fewer reactions happening per unit volume, because there are fewer CO2 molecules per unit volume than water would have. So that's why we pressurize our light water reactors, to keep water in its liquid state where it's a great heat absorber, takes a lot of energy to boil it, and it's really dense, so it's a very effective moderator. These have been around forever. Let me think. When did Windscale happen? Windscale was also the source of an interesting fire that you guys might want to know about. It's one of the only nuclear disasters that hit 7 on the arbitrary unit scale. I don't quite know how they determine what's a seven. But there was a fire at the Windscale plant due to the buildup of what's called Wigner energy. It turns out that when neutrons go slamming around in the graphite, they leave behind radiation damage. And when my family asks me to explain what I do for a living, I can only think, well, they don't know radiation damage. They've watched Harry Potter. I'd like to say, radiation, like dark magic, leaves traces. Well, it leaves traces in the graphite in the form of atomic defects, which took energy to create. So by causing damage to the graphite, you store energy in it, which is known as Wigner energy. And you can store so much that it just catches fire and explodes sometimes. That's what happened here at Windscale. 11 tons of uranium ended up burning, because all of a sudden, the temperature in the graphite just started going up for no reason, no reason that they understood at the time. It turns out that they had built up enough radiation damage energy that it started releasing more heat.
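The number density argument above can be made concrete with a rough comparison of molecules per unit volume in liquid water versus pressurized CO2. The 40 bar, 600 K conditions below are assumed AGR-like values chosen for illustration, not figures from the lecture.

```python
# Rough check: even generously pressurized, hot CO2 gas holds far fewer
# molecules per cm^3 than liquid water, so fewer interactions per volume.
# The 40 bar / 600 K operating point is an assumed illustrative value.

AVOGADRO = 6.022e23          # molecules per mole
K_BOLTZMANN = 1.380649e-23   # J/K

# Liquid water: density ~1.0 g/cm^3, molar mass ~18 g/mol
n_water = 1.0 / 18.0 * AVOGADRO               # molecules per cm^3

# CO2 treated as an ideal gas, n = P / (k*T), converted from per m^3
n_co2 = 40e5 / (K_BOLTZMANN * 600.0) / 1e6    # molecules per cm^3

print(f"water: {n_water:.2e} /cm^3, hot pressurized CO2: {n_co2:.2e} /cm^3")
print(f"liquid water has ~{n_water / n_co2:.0f}x more molecules per volume")
```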
And releasing more heat caused more of that energy to be released, and it was self-perpetuating until it just caught fire and burned 11 tons of uranium out in the countryside. This was 1957. So again, a 7 on the scale with no units of nuclear disasters. You could argue it's probably not as bad as Chernobyl, so they might want a little bit of resolution in that scale. There's another type of gas cooled reactor called the Pebble Bed Modular Reactor, a much more up and coming one, where each fuel element-- you don't have fuel rods. You've actually got little pebbles full of tiny kernels of fuel. So you've got a built-in graphite moderator tennis ball sized thing with lots of little grains of sand of UO2 cooled by a bed of flowing helium or something like that. And then that helium, or the other gas, transfers heat to water, which goes in to make steam and goes into the turbine like I showed you before. So this is what the fuel actually looks like. Inside each one of these tennis ball spheres of mostly graphite, there's these little kernels of uranium dioxide about a half a millimeter across covered in layers of silicon carbide, a really strong and dense material that keeps the fission products in, because the biggest danger from nuclear fuel is the highly radioactive fission products that due to their instability are giving off all sorts of awful radiation, for anywhere from milliseconds to megayears, after reactor operation. And so if you keep those out of the coolant, then the coolant stays relatively nonradioactive. And it's safe to do things like maintain the plant. Then there's the very high temperature reactor, the ultimate in acronym creativity. It operates at a very high temperature, which has been steadily decreasing over time, as reality has caught up to expectations. When I first got into this field, they were saying, we're going to run this at 1100 Celsius. Then I started studying material science. And I was like, yeah, nothing wants to be 1100 Celsius.
By that time, they downgraded it to 1000. Now they've asymptoted it at around 800 or 850 due to some actual problems in operating things in helium. It's not the helium itself, but the impurities in the helium that could really mess you up. And the sorts of alloys that they need to get this working, these nickel superalloys, like Alloy 230, they can slightly carburize or decarburize depending on the amount of carbon in the helium coolant. Either way you go, you lose the strength that you need. So I'll say feasibility is low to medium, because, well, we haven't really seen one of these yet. Then onto water cooled reactors. Has anyone here heard of the reactors they have in Canada, the CANDU reactors? That's my favorite acronym. I hope that was intentional. It what? AUDIENCE: It's convenient. MICHAEL SHORT: Yeah. [LAUGHS] It's not like the-- well, they're not sorry about anything, but whatever. At any rate, one of the nice features about this is you can actually use natural uranium, because the moderator is heavy water. You have to look into what the sort of cross sections are. Even though deuterium won't slow down neutrons as much as hydrogen will-- where did my alpha thing-- oh, it was right here all along. Even though A is 2 instead of 1 for deuterium, its absorption cross section, or specifically-- yeah, because it doesn't fission. Its absorption cross section is way lower than that of water. It actually functions as a better moderator, because fewer of those collisions are absorption. And because you have a better neutron population and less absorption, you don't need to enrich your uranium. You also don't need to pressurize your moderator. So you can flow some other coolant through these pressure tubes and just have a big tank of close-to-room-temperature, unpressurized D2O as your moderator. The problem with that is D2O is expensive. Anyone priced out deuterium oxide before? Probably have at the reactor, because I know you have drums of it.
AUDIENCE: It's like a couple thousand per kilogram. MICHAEL SHORT: A couple thousand a kilo, it's an expensive bottle of water. It'll also mess you up if you drink it, because even if it's crystal clear, filtered D2O, a lot of the cellular machinery depends on the diffusion coefficients of various things in water, those solutes in water. And if you change the mass of the water, then the diffusion coefficients of the water itself, as well as the things in it, will change. And if you depend on, let's say, exact sodium and potassium concentrations for your nerves to function, a little change in that can go a long way towards giving you a bad day. And actually, we have a little piece of one of these pressure tubes upstairs if anyone wants to take a look. There's all these sealed fuel bundles inside what they call a calandria tube, just a pressurized tube that's horizontal. The problem with some of these is if these spacers get knocked out of place, which they do all the time, those tubes can start to creep downward and get a little harder to cool, or touch the sides and change their thermal contact. And now I'm getting into material science. It's a mess. Then there's the old RBMK, the reactor that caused Chernobyl. You can also use natural uranium or low enriched uranium here. The problem though that led to Chernobyl-- one of the many problems that led to Chernobyl was, you've got all this moderator right here. So if you lose your coolant, let's say you had a light water reactor and your coolant goes away, your moderator also goes away, which means your neutrons don't slow down anymore. That one reaction is messing up. There we go. Which means your neutrons don't slow down anymore, which means the probability of fission happening could be like 10,000 times lower. So losing coolant in a light water reactor, temperature might go up, but it's not going to give you a nuclear bad day. In the RBMK reactor, it will and it did.
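The heavy water trade-off described above (deuterium slows neutrons a bit worse per collision, but absorbs vastly fewer of them) can be put in rough numbers. The cross sections and mean logarithmic energy decrements below are representative thermal-neutron textbook values, not numbers from the lecture, so treat the output as order-of-magnitude only.

```python
# A crude moderating figure of merit, xi * sigma_s / sigma_a: slowing-down
# ability per collision, weighted by how rarely a collision is an absorption.
# Values are representative textbook numbers, assumed for illustration.

nuclides = {
    # name: (xi = mean log energy loss per collision,
    #        scattering cross section [barns], absorption cross section [barns])
    "H-1 (light water)": (1.000, 20.4, 0.332),
    "D-2 (heavy water)": (0.725, 3.4, 0.00052),
}

merits = {}
for name, (xi, sigma_s, sigma_a) in nuclides.items():
    merits[name] = xi * sigma_s / sigma_a
    print(f"{name}: xi * sigma_s / sigma_a = {merits[name]:.0f}")
```

The tiny absorption cross section of deuterium dominates everything else, which is why natural uranium works in a D2O-moderated lattice.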
And in addition, the control rods, which were supposed to shut down the reaction, made of things like boron carbide (B4C), or hafnium, or something with a really high capture cross section, were tipped with graphite to help them ease in. So you've got moderator-tipped rods, which induce additional moderation, which helps slow down the neutrons even more to where they fission even better. And that's what led to what's called a positive feedback coefficient. So the more you tried to insert the control rods and the more you tried to fix things, the worse things got in the nuclear sense. And in something like a quarter of a second, the reactor power went up by like 35,000 times. And we'll do a millisecond by millisecond rundown of what happened in Chernobyl after we do all this neutron physics stuff, when you'll be better equipped to understand it. But suffice to say, there were some positive coefficients here that are to be avoided at all costs in all nuclear reactor design. In the actual reactor hall you can go and stand on one of these things. It's a very different design from what you're used to. I don't think anyone would let you stand on top of a pressure vessel. First, your shoes would melt, because they're usually at like 300 Celsius or so. And second of all, you'd probably get a little too much radiation. But this is actually what an RBMK reactor hall looks like for one of the units that didn't blow up. There were multiple units at that site. Then there's the supercritical water reactor. Let's say you want to run at higher temperatures than regular water will allow you to.
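The sign of that feedback coefficient is the whole story, and a toy iteration makes the point. This is an arbitrary illustration, not an RBMK model or real point kinetics; every number in it is made up.

```python
# Toy feedback loop: if rising power ADDS reactivity (positive coefficient),
# the growth compounds on itself; if rising power REMOVES reactivity
# (negative coefficient), the excursion self-limits. All values arbitrary.

def run(feedback_coefficient, steps=100):
    power, reactivity = 1.0, 0.001  # small initial reactivity insertion
    for _ in range(steps):
        power = min(power * (1.0 + reactivity), 1e6)  # clamp to avoid overflow
        reactivity += feedback_coefficient * (power - 1.0)
    return power

print(f"positive coefficient: relative power reaches {run(+0.005):.1f}")
print(f"negative coefficient: relative power stays near {run(-0.005):.3f}")
```

With the positive coefficient the same small perturbation runs away; with the negative one it just oscillates gently around the starting power.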
You can pressurize it so much that water goes beyond the supercritical point in the phase sense and starts to behave not like liquid, not like a gas, but somewhere in between, something that's really, really dense, so getting towards the density of water, not quite, which means it's still a great moderator, but still can cool the materials quite well to extract heat to make power and so on and so on. Yeah. AUDIENCE: So supercritical refers to the coolant not the neutrons? MICHAEL SHORT: Good question. For a supercritical water reactor, it most definitely refers to the coolant. It's the phase of the coolant where it's beyond the liquid gas separation line, and it's just something in between. Any of these reactors can go supercritical, where you're producing more neutrons than you're consuming. And that is a nuclear bad day. But the supercritical water reactor does not refer to neutron population, just a coolant. Good question. It's never come up before. But it's like, should have thought of that. And so then my favorite, liquid metal reactors, like LBE, or Lead-Bismuth Eutectic. It's a low melting point alloy of lead and bismuth. Lead melts at around 330 Celsius, bismuth 200 something. Put them together, and it's like a low temperature solder. It melts at 123.5 Celsius. You can melt it in a frying pan. This is nice, because you don't want your coolant to freeze when you're trying to cool your reactor, because imagine something happens, you lose power. The coolant freezes somewhere outside the core. You can't get the core cool again. That's called a loss of flow accident that can lead to a really bad day. And the lower your melting point is the better. Sodium potassium is already molten to begin with. Sodium melts at like 90 Celsius. And when you add two different metals together, you almost always lower the melting point of the combination. In this case, forming what's called the eutectic, or a lowest possible melting point alloy. 
The sodium fast reactor has a number of advantages, like you don't really need any pressure. As long as you have a cover gas keeping the sodium from reacting with anything, like the moisture in the air, or any errant water in the room, you can just circulate it through the core. And liquid metals are awesome heat conductors. They might not have the best heat capacity, as in how much energy per gram they can store like water. But they're really good conductors with very high thermal conductivity. They also are really good at not slowing down neutrons. So these tend to be what's called fast reactors that rely on the ability of other isotopes of uranium, like uranium-238, to undergo what's called fast fission. And I want to show you what that looks like. Let's pull up U-238 and look at its fission cross section. And you might find that it should look a fair bit different. So we'll go down to number 18 to fission cross section, very, very different. So U-238 is pretty terrible at fission at low energies. It's pretty good at capturing neutrons. This is where we get plutonium-239, like you guys saw on the exam. But then you go to really high energies and all of a sudden, it gets pretty good at undergoing fission on its own. And so the basis behind a lot of fast reactors is a combination of making their own fuel and the fact that uranium-238 fast fissions even better than it thermal fissions. So something good for you to know, even though it's not a fissile fuel-- that's light water reactor people talking-- you can get it to fission if the neutron energy is higher. Now, there's some problems with this. It takes some time for neutrons to slow down from 1 to 10 MeV to about 0.025 eV. If your neutrons don't need to slow down and travel anywhere, and pretty much all they have to do is be born and absorbed by a nearby uranium atom, the feedback time is faster in these sorts of reactors. They're inherently more difficult to control.
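The slowing down mentioned above can be counted in collisions using the standard mean logarithmic energy decrement, xi. The xi values below are textbook numbers, not figures from the lecture; the point is the contrast between a thermal spectrum, where neutrons must survive many collisions, and a fast spectrum, where they are used as born.

```python
import math

# Number of elastic collisions to slow a ~2 MeV fission neutron to thermal
# energy (~0.025 eV): total lethargy ln(E0/E) divided by xi per collision.
# xi values are representative textbook numbers.

E_FAST = 2.0e6     # eV, typical fission-neutron energy
E_THERMAL = 0.025  # eV

xi_values = {"H-1": 1.000, "C-12 (graphite)": 0.158, "U-238": 0.0084}

total_lethargy = math.log(E_FAST / E_THERMAL)
for nuclide, xi in xi_values.items():
    print(f"{nuclide}: ~{total_lethargy / xi:.0f} collisions to thermalize")
```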
And you can't use normal physics like thermal expansion of things that might happen on the order of micro to nanoseconds if it takes less time than that for one neutron to be born and find another uranium atom. You can still use it somewhat, but not quite as much. So it's something to note, backed up by nuclear data. And that's what one of them actually looks like. These things have been built. That's a blob of liquid sodium on the Monju reactor in Japan. And where I was all last week in Russia, they actually have fleets of fast reactors. Their BN-300 and BN-600 reactors are 300 and 600 megawatt sodium cooled reactors. One of them in the Chelyabinsk region they use pretty much for desalination down in the center of Russia, where there's no oceans nearby and probably dirty water. They actually use that to make clean water. They also use this for power production and for radiation damage studies. So when it comes to radiation material science, these fast reactors are really where it's at. Yeah, you just noticed the bottom. I went to Belgium, to their national nuclear labs, where they have a flowing sodium test loop. It's not a reactor, but it's like a thermal hydraulics and materials test loop. And I asked a simple question. Where's the bathroom? And they started laughing at me. And they said, we're not putting any plumbing in a sodium loop building. You'll have to go to the next building over. And that's when I noticed, there weren't any sprinkler systems or toilets. But every 15 or 20 feet, there was a giant barrel of sand. That's the fire extinguisher for a liquid metal fire: you just cover it with sand, absorb the heat, keep the air out, the moisture out, wick away the moisture or whatever else sand does. I don't know. But you can't use normal fire extinguishers to put out a sodium fire. AUDIENCE: When you said sand, I thought of kitty litter. MICHAEL SHORT: Ah. I don't know if that would work. [LAUGHTER] I guess it's worth a shot.
[LAUGHTER] With glasses, and safety, and stuff, of course. And the ones that I spent the most time working on, like I showed you in the paper yesterday, is the lead or lead-bismuth fast reactor. This one does not have the disadvantages of exploding like sodium. It does have the disadvantage, like I showed you yesterday, of corroding everything, pretty much everything. And so the one thing keeping this thing back was corrosion. And I say the ultimate temperature is medium, but higher soon. Hopefully, someone picks up our work and is like, yeah, that was a good idea, because we think it can raise the outlet temperature of a lead-bismuth reactor by like 100 Celsius as long as some other unforeseen problem doesn't pop up, and we don't quite know yet. These things also already exist in the form of the Alfa Class attack submarines from the Soviet Union. These are the only subs that can outrun a torpedo. So you know that old algebra problem, if person A leaves Pittsburgh at 40 miles an hour and person B leaves Boston at 30 miles an hour, where do the trains collide or I forget how it actually ends? Well, in the end, if a torpedo leaves an American sub at whatever speed and the Alfa Class submarine notices it, how close do they have to be before the torpedo runs out of gas? So what I was told by the designer of these subs, a fellow by the name of Georgy Toshinsky, when he came here to talk about his experience with these lead-bismuth reactors is, there is a button on the sub that's the Forget About Safety, It's a Torpedo button. Because if you're underwater in a lead-bismuth reactor and a torpedo is heading at you, you have a choice between maybe dying in a nuclear catastrophe and definitely dying in a torpedo explosion. Well, that button is the I Like Those Odds button. And you just give full power to the engines and whatever else happens, happens. The point is, you may be able to outrun the torpedo. 
And quite popular nowadays, especially in this department, is molten salt cooled reactors that actually use liquid salt, not dissolved, but molten salt itself as the coolant. It doesn't have as many of the corrosion problems as lead or the exploding problems as sodium. It does have a high melting point problem though. They tend to melt at around 450 degrees Celsius. But there's one pretty cool feature. You can dissolve uranium in them. So remember how in light water reactors the coolant is also the moderator? In molten salt reactors, the coolant is also the fuel, because you can have principally uranium and lithium fluoride salt co-dissolved in each other. And the way you make a reactor is you just flow a bunch of that salt into nearby pipes. And then you get less, what's called, neutron leakage, where in each of these pipes once in a while uranium will give off a few neutrons. Most of them will just come out the other ends of the pipes, and you won't have a reaction. When you put a whole bunch of molten salt together, most of those neutrons find other molten salt. And the reaction proceeds. And it's got some neat safety features. Like if something goes wrong, just break open a pipe. All the salt spills out, becoming subcritical, because leakage goes up. It freezes pretty quickly, and then you must deal with it. But it's not a big deal to deal with it if it's already solid and not critical. So it's actually five of. It's zero of five of. I'll stop here. On Tuesday, we'll keep developing the many, many different variables we'll need to write down the neutron transport equation, at which point you'll be qualified to read the t-shirts that this department prints out. And then we'll simplify it so you can actually solve the equation.
MIT 22.01 Introduction to Nuclear Engineering and Ionizing Radiation, Fall 2016. Lecture 18, Ion-Nuclear Interactions II: Bremsstrahlung, X-Ray Spectra, Cross Sections.

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MIKE SHORT: All right. So I am super excited about today because this is, in my opinion, the highest point of the apex of the course, where we're going to put together everything you've done so far and start to explain things like Bremsstrahlung, radiation damage, X-ray spectra that you get in a scanning electron microscope, and actually show how cross-sections really are areas. Remember before, I told you guys cross-section is measured in barns, in centimeters squared? Think of it kind of like an area. We'll actually be able to show mathematically that some of them do derive from actual areas. So it will make a lot more sense. It's not just an abstract concept. What I want to do is a quick review of the ionization and excitation collisions that we did last time. Remember we had this imaginary hollow cylinder where we said that there is some ion traveling in this direction with charge, let's say, z times the electron charge, and it's colliding, kind of, with an electron somewhere else separated by some impact parameter b. And let's say this hollow cylinder had a shell of thickness db. And we started off with this situation where we wanted to say-- we can find the y momentum as the integral of the y force. We did something. And one of the intermediate steps we came up with was that the energy imparted to the electron is p squared over 2 times mass of the electron, which came out to 2z squared e to the 4th over the mass of the electron impact parameter squared velocity squared.
Then we multiplied by the electron density in the material, which was the number density of atoms times z, the number of electrons per atom, times the area, the cross-sectional area of this hollow cylindrical shell, which came out to 2 pi b, which is the circumference of that circle right there, times db dx. I'm just going to leave that there for now, and we'll come back to it in a second. I will mention, though, that at the end, we came up with our stopping power expression. Let's call this ionizations that came out to 4 pi, this constant k0 squared, which comes from the Coulomb force law, little z squared, big Z, e to the 4th over mass of the electron velocity squared-- I'm running out of room-- times log of mass of the electron v squared over this mean excitation energy, which comes out to around, let's say, 10 to 19 electron volts times z. And what this meant was that if we graph stopping power as a function of energy, there's a couple components to it. One of them is this roughly 1 over energy component. So let's say there's a component of it-- yes, I love when I can do that-- that actually-- that follows 1 over e. And there's this logarithmic component that goes that way. And if we sum them together, we ended up with a much higher stopping power at low energies. But around this local minimum of about 3 times the mass of the electron c squared, it starts to increase again. And that's due to this. How many excitations can you make as a function of energy? So if you think about it, this is kind of like an energy term. How much energy do you have divided by how many ionizations can you make? So it's kind of an intuitive result in that it's how much energy you've got versus how much energy it takes for a single unit process. And that's all mediated by this 1 over e term. The last thing that we said is that this curves back down here. 
And that's because at around 500 times this mean excitation energy, which as you can see is around 1 keV times z-- it's not a very high energy-- you start to get charge neutralization. The reason for that is that for-- in order for this formula to work, we have to assume that the deflection is really, really, really small, infinitesimally small. If it's not, and if the ion manages to catch that electron, it's not going to lose energy by undergoing a Coulomb interaction. It's going to lose energy by absorbing that electron and neutralizing. And so for really low energy, the ions are moving so slowly that they can capture some of those electrons. And the material gets less and less effective at stopping them. And this led to-- let's see-- this axis is distance, and this one is dt dx divided by the energy required to make an ion pair, which we'll just call i. We ended up with a curve that looked something like this. This right here we call the range. We just gave that the symbol r. So that's the review of last time, but I left some extra space for some reasons which will become clear in about half an hour. Right now I want to take you through a little bit of a whirlwind in terms of Bremsstrahlung. I'm not going to derive anything about it. We're just going to go over what the cross-sections and stopping powers look like, why they take that form intuitively. But we're not going to go through a rigorous derivation because, well, we simply don't have time. But I do want you to know what sort of things exist. So for Bremsstrahlung, better known as braking radiation-- who's actually heard of this before? Does anyone know roughly-- let's say if we were to say some cross-section for Bremsstrahlung. Let's call it cross-section radiative. Because in this case, we're actually talking about the charged particle radiating away photons.
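The falling part of the ionization stopping power curve from the review above, a 1/E factor times a slowly growing logarithm, can be reproduced with a toy function. This is only the shape: units and the mean excitation energy are arbitrary, and the relativistic turn-back-up near 3 times the electron rest energy is deliberately left out.

```python
import math

# Toy version of the blackboard sketch: S(E) ~ (1/E) * ln(E / I), which
# decreases with energy once E is well above the mean excitation energy I.
# Arbitrary toy units; not the full Bethe formula from the lecture.

I_MEAN = 1.0  # arbitrary mean excitation energy in toy units

def toy_stopping_power(energy):
    """(1/E) * ln(E / I): the product of the two sketched components."""
    return (1.0 / energy) * math.log(energy / I_MEAN)

for e in [3.0, 10.0, 100.0, 1000.0]:
    print(f"E = {e:6.0f} -> S(E) = {toy_stopping_power(e):.4f}")
```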
So let's say we had some nucleus of charge big Z, and we had some ion-- we'll use a different color-- some ion of charge little Z moving towards it. It will either be attracted or repelled depending on what the sign looks like. And sometimes you might get radiation of a photon. Let's call it of energy E equals hc over lambda. That'll become important in a sec. So this radiative cross-section-- I don't particularly care about the exact form, but what is it proportional to? What sort of factors do you think would make the emission of that photon more or less likely? Yeah? AUDIENCE: The energy of the charged particle. MIKE SHORT: Sure, the energy of the charged particle. So you can-- there's one expression I like: you can only take my money until you take it all. Well, the same thing goes for the energy of a photon. You can't radiate any more than the energy of the particle coming in, so there's going to be some maximum energy which is going to correspond to some minimum wavelength, which is-- let's say if we just put our initial energy in here, that gives us our minimum wavelength right there. That's true. And it can radiate at any energy smaller than that or any wavelength larger than that. I'm basically saying the same thing. Because if this particle started off further away and felt less of a deflection, it could still emit a photon, but of a longer wavelength. I don't have room to draw enough wavelengths, but I want to make sure that's physically accurate. So it's actually going to be also proportional to z of the large nucleus. The stronger the pull, the more of that breaking radiation you're going to see. It's actually proportional to z squared. What this says is that heavier z materials produce a lot more of this breaking radiation. And it's also proportional to the inverse of the mass squared. What this says intuitively is a heavier particle deflects less and emits less of this breaking radiation.
Hopefully this makes a lot of sense to you guys, that the stronger the pull of the nucleus, the more deflection you're going to get, and the more Bremsstrahlung you'll get. The larger the mass of the incoming ion-- let's say that's mass m-- the less deflection you'll get, because the less-- what is it? The less momentum transfer you can apply with the same force if you've got a heavier particle. Does this make sense to everybody? So what this says is it's really important for high z materials. And we'll see a little bit why. If I actually write the full expression for stopping power-- and I promise to you I'm not going to derive it because we don't really have the time for that. It's proportional to the number density. You should always think there will be a number density in stopping power, because the more atoms there are, the more they stop things. It's just directly proportional. Times that kinetic energy plus me c squared. And again, this is not something I want you to memorize, but it is something I want you to be able to decompose and explain why the parts are there. Let's see. Times some radiative cross-section, where this sigma radiative is some constant cross-section. This ends up being about 1/500 barns times z squared times this parameter b, which if you see in the reading is actually given just-- this b scales roughly with the atomic-- I'm sorry-- yeah, with the proton number of the material. So you can see that in here, in the stopping power is actually directly the cross-section. So the components of a stopping power-- there's going to be some probability of interaction, and there's going to be some energy transfer part. This is an interesting result to show. This is why I wanted to just write the Bremsstrahlung stopping power. Because in here, actually, is the cross-section times some other stuff. Pretty neat result. And so now I want to show you how our cross-section is actually contained in the ionization stopping power.
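The 1/m squared dependence above implies that, all else being equal, an electron radiates enormously more Bremsstrahlung than a proton. A one-line check of that mass-ratio argument, using the well-known proton-to-electron mass ratio:

```python
# Bremsstrahlung emission scales as 1/m^2, so a proton's emission is
# suppressed by the squared mass ratio relative to an electron's.

M_PROTON_OVER_ELECTRON = 1836.15  # standard physical constant

suppression = M_PROTON_OVER_ELECTRON ** 2
print(f"proton Bremsstrahlung suppressed by ~{suppression:.2e} vs an electron")
```

This factor of a few million is why Bremsstrahlung matters for electrons and betas but is usually negligible for protons and heavier ions.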
So let's bring this back down for a sec. You can think of the likelihood that the ion comes off with any particular energy to be directly related to this impact parameter. Because as we saw, the final expression for stopping power is directly related to this parameter b. We ended up integrating over all possible b to get the total stopping power in the material. But if we didn't do that integral-- if we stopped, let's say, at this stage in the game, and we said, all right, well, the stopping power at some fixed b actually depends on the probability that that particle enters into this cross-sectional area right here, this 2 pi b db, that right there is the cross-section for scattering as a function of the incoming energy and the outgoing energy. This is one of the coolest parts, I think: there is an actual area right here. The area of a hollow circle is the actual cross-section for scattering with a given incoming and outgoing energy. And then the rest of this stuff-- if you pull all this together, if you take a microscopic cross-section times the number of particles that are there-- because this is your atomic number density, and these two together are your electron number density. Like we talked about before with reaction rates and cross-sections, this thing right here is your macroscopic cross-section for an incoming and an outgoing energy contained directly in the stopping power formula, because then we integrated over all possible cross sections, which means all possible outgoing energies for a given incoming energy. And so the last bit, the way to link these two together, which is why I left a little bit of space right here-- we know right now, because I wrote it up there, that the scattering cross-section as a function of the incoming and outgoing energy per unit energy is just the area of that hollow circle. So let's divide everything by dt. We end up with the total formula for cross-- for the scattering cross-section.
We don't know what this b db dt is. What's the differential probability between impact parameter and outgoing energy? We don't quite know, but we can express this as a change of variables that we do know. We do know-- if we have a certain impact parameter, we know what the scattering angle is going to be. There is a well-known relation for that. And there's a derivation in the book and in another book that I want to point out to you guys by Gary Was, called Fundamentals of Radiation Materials Science, on page 32. If anyone wants to see the derivation from which this result came, you can head right there, and it's free on MIT Libraries. Meanwhile, we do know our relation between this impact parameter and the angle, and we do know a relation between this angle and the outgoing energy. And this is where some of the hard sphere collision stuff comes in. What's the maximum amount of energy that a particle can impart to another particle in some sort of a hard, sphere-like collision? Let's take the easy example. If the two particles have equal mass, how much energy can one particle impart to another? AUDIENCE: All of it. MIKE SHORT: All of it, right? It can impart a maximum of, let's say, this incoming energy ei. As those mass ratios change, you can impart a maximum-- let's say your maximum becomes what's called gamma ei, where this gamma right here is 4 times those two masses multiplied over the sum squared. The full expression-- I'm going to use a different color because I'm running out of space here. The full expression for t is actually gamma ei over 2 times 1 minus cosine theta. The two intuitive limits from this are: if theta equals pi, then this here equals 2, and t is our t maximum, just gamma times ei. If theta equals 0, that whole thing equals 0. And in our case of forward scattering, no interaction occurs, and the energy imparted is 0. But the important part here is we have a direct relation between t and theta and the angle, which we can put in here.
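The relations on the board, gamma = 4 m1 m2 / (m1 + m2) squared and T = gamma Ei / 2 times (1 minus cos theta), can be checked numerically at the two intuitive limits the lecture just named:

```python
import math

# Maximum energy transfer factor and the angle-dependent energy transfer
# from the board, evaluated at the head-on and forward-scattering limits.

def gamma_factor(m1, m2):
    return 4.0 * m1 * m2 / (m1 + m2) ** 2

def energy_transfer(e_in, m1, m2, theta):
    return gamma_factor(m1, m2) * e_in / 2.0 * (1.0 - math.cos(theta))

# Equal masses (e.g. neutron on hydrogen): head-on transfers everything
print(energy_transfer(1.0, 1.0, 1.0, math.pi))  # -> 1.0 (all of it)
print(energy_transfer(1.0, 1.0, 1.0, 0.0))      # -> 0.0 (forward scatter)
# Neutron (A = 1) hitting U-238: gamma is tiny, little energy per collision
print(f"gamma(n, U-238) = {gamma_factor(1.0, 238.0):.4f}")
```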
And we actually have a direct relation between b and theta, which I wrote down so I wouldn't forget. So we actually have-- our impact parameter is the classical radius of the electron times cosine of the angle over 2. You don't have to know where these came from, but the point is we have a relation between b and the angle. We have a relation between angle and energy. So we can just do a change of variables to get our final cross-section. And it ends up being, I think, pi times radius of the electron squared over gamma. So in this way, we can go from known relations between each of the variables and an actual physical cross-section that has units of real area down to an energy-dependent form for this cross-section, which I think is pretty cool. And then this is the one that you would see tabulated in the JANIS tables, like the energy-dependent cross-section for the S reaction or the scattering. So this is one of my favorite parts of this course, because you can see how cross-sections really do follow directly from areas. So now for the other part, now that we've got this Bremsstrahlung stopping power and we've got our ionization stopping power, it's useful to find out which one is more important when. To do that, we just look at their ratios. So if we look at the dT/dx from ionization over the dT/dx from radiative energy transfer or Bremsstrahlung-- let me make sure I get this one right. It's proportional to z times the mass of the electron over m squared times t over 1,400 times the rest mass of the electron. So what this tells you here is that-- I'm sorry. I think I have those backwards, because radiative should get more important at higher energies. So what this tells you is for higher z materials, Bremsstrahlung becomes more dominant, and for higher energies, Bremsstrahlung becomes more dominant. So if we want to generalize our stopping power curve from just ionization to everything-- so I know I had another color. No. I ran out of colors. I need a fourth one.
There's going to be some Bremsstrahlung component that starts to get more and more important with increasing energy. And so then if you extend this curve, you're going to radiate more and more and more power the higher energy you go-- not from this component or that component of ionization, but from radiation or Bremsstrahlung. So this has some pretty serious implications to answer questions like, how do you shield beta rays? Does anyone have any idea? Based on this formula right here, what would you use to shield beta particles and not irradiate the person standing behind the shield? Let's ask a question everyone knows. What do you use to shield photons really well? Lead, tungsten, something with high z. Because as we saw from before-- I'm going to steal a little bit of the Rutherford stuff. If you graph the energy versus the mass attenuation coefficient, you get a curve that looks like this, but everything increases with increasing z. You get more mass attenuation with increasing z. And also, denser materials tend to be higher z. Is that what you want to do for beta particles? You say-- so Monica, you're saying no. How come? AUDIENCE: Don't you just want something with a low cross-section? MIKE SHORT: That's right. Well, you don't necessarily want something with a low cross-section, or else it might not shield at all, but you are on the right track. You can actually look at the difference between these stopping powers, and cross-sections are embedded in there. So I think that answer is pretty much correct. But also, you're going to get more Bremsstrahlung or more braking radiation in higher z materials. So if we actually look at what threshold this becomes important: in lead, this ratio is about 1 at around 10 MeV, which means that you lose an equal amount of energy to Bremsstrahlung as ionization at 10 MeV for electrons. In water, this ratio is about 1 at 100 MeV.
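Those two crossover numbers follow from the rule-of-thumb ratio quoted above. Treating the radiative-to-ionization ratio for electrons as Z T / (1400 m_e c^2) with T in MeV, a quick check reproduces roughly 10 MeV for lead (Z = 82) and roughly 100 MeV for water; the effective Z of about 7.4 for water is my assumption, not a number from the lecture:

```python
ME_C2 = 0.511  # electron rest energy, MeV

def rad_to_ion_ratio(t_mev, z):
    """Approximate S_rad / S_ion for electrons: Z * T / (1400 * m_e c^2)."""
    return z * t_mev / (1400.0 * ME_C2)

def crossover_energy_mev(z):
    """Electron energy where radiative losses equal ionization losses (ratio = 1)."""
    return 1400.0 * ME_C2 / z
```

For lead this gives a crossover near 9 MeV and for water near 100 MeV, consistent with the numbers on the board.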
So what this says is if you want to shield electrons or beta particles safely, you actually have to use lower z materials because they won't make much Bremsstrahlung. But because, like Monica said, then the cross-section is lower, you actually have to use more. So you don't have a choice. You can't just use a small amount of high z material. Because while you will stop more of the electrons, they will create more x-rays in the process. And those x-rays are highly penetrating, as we know from these mass attenuation curves. Once you get to high energy-- these are logarithmic scales, so let me correct those and say these are log of e and log of mu over rho. It gets millions of times less effective at shielding high energy photons. So that's one of those really important things to note: if you're designing shielding for something, and there are electrons involved that are even around 1 MeV or so, you can't just use high z materials to shield them, or you will create more problems than you solve. That's a pretty important implication. It's quite important for what's called betavoltaic devices. It's kind of a sidetrack, so I'm going to stick it on a board that'll be hidden soon. Has anyone heard of a betavoltaic device? Anyone? What are they? AUDIENCE: It's like a beta source that emits electrons onto a semiconductor [INAUDIBLE]. MIKE SHORT: Yeah, it's a beta battery. All it is is, let's say, some pieces of silicon, some circuit that grabs the power, and a beta emitter. And these beta particles directly hit the silicon, and the movement of those betas constitutes a current. It's direct conversion of radiation to electrical energy. They're not very high power, but they last for a very long time. How long? Around a few half-lives of that beta decay. So for most of these beta emitters that have half-lives in the realm of, like, 10 to 1,000 years, you can make a microwatt battery that could last for millennia. This could be pretty useful.
Let's say if you wanted to have some secret sensors in a naughty country like North Korea, you could drop these tiny little betavoltaics that would just trickle charge a battery, make a measurement of-- I don't know-- radiation level, or weight of the dictator, or whatever you happen to want to measure, and send that off once a month or once a year with no need for external monitoring. Or let's say you're designing a mission to land on a comet, like the Rosetta Philae lander, and your radioisotope thermoelectric generator is going to burn out in, let's say, 10 or 20 years. You might not need that much power just to measure temperature, or light levels, or something else, or a gas that you might want to know what's there. But you have to choose your beta isotope wisely. If you want to make these things in a little chip-- and they actually have been commercialized in a chip that's about that actual size using about two curies of tritium. Anyone have any idea why one would choose tritium? AUDIENCE: It's got a short half-life. MIKE SHORT: Yeah, it's got a short half-life, so you can get a lot of power out of it. That's one of the two correct reasons. And what is the other one? Let's see who's memorized their KAERI table of nuclides. What do you think its beta decay energy would have to be for this not to blast anyone in the vicinity? AUDIENCE: Low. MIKE SHORT: Very low. Why do you say that? AUDIENCE: [INAUDIBLE] they don't penetrate all the way through the [INAUDIBLE]. MIKE SHORT: That's true. Their range is much smaller. But the range of all betas is pretty low in materials. But the answer lies right here-- less Bremsstrahlung. Lower energy betas give most of their energy off in ionization rather than by radiating Bremsstrahlung. So you can have a device with two curies of tritium, which if that's released to the outside world, that's bad news. That's something that you might have to report.
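To get a feel for the numbers in that two-curie tritium chip: source power is just decays per second times the mean beta energy. The mean tritium beta energy of about 5.7 keV is an assumption I'm adding here (its endpoint is about 18.6 keV); the result lands in the tens of microwatts, consistent with "microwatt battery":

```python
CI_TO_BQ = 3.7e10       # decays per second per curie
EV_TO_J = 1.602e-19     # joules per electron volt
T_MEAN_BETA_EV = 5.7e3  # assumed mean tritium beta energy, ~5.7 keV

def beta_power_watts(activity_ci, mean_energy_ev=T_MEAN_BETA_EV):
    """Total kinetic power carried by the betas from a pure beta source."""
    return activity_ci * CI_TO_BQ * mean_energy_ev * EV_TO_J

chip_power = beta_power_watts(2.0)  # the ~2 Ci chip from the lecture
```

The conversion efficiency of the semiconductor would knock this down further, so the usable electrical power is only a fraction of this number.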
But as long as it stays contained in this device, it does not have enough energy to produce many x-rays from Bremsstrahlung. And therefore, it does not require an enormous amount of shielding. So you can't just pick a 1 MeV beta emitter which you might get a lot of power out of, because it's also going to be a big, crazy X-ray source that you wouldn't want in a cell phone or a sensor or some other device you might put in your pocket, or even 20 feet from you. Cool. So that's the idea behind Bremsstrahlung. There's a little bit more I want to tell you about, and I'll save that for the sidetrack board. We use Bremsstrahlung in a lot of really interesting applications, including cyclotrons, one of which we just took delivery of here at MIT, or synchrotrons. And I'll just briefly explain how these work. In a cyclotron, you've got two D-shaped magnets. They actually call them dees because we're so creative in naming these things. You inject some source of charged particles, and there are some electric field lines across these two dee magnets. And what this says is that in between the magnets, the particle accelerates. And inside each magnet, the path curves. And it accelerates some more. And it's moving even faster, so it takes longer to curve. Then it moves even faster, and it takes longer to curve, and so on and so on, until it finally shoots out the side. And so this is one way that you can have an extremely compact-- and I'm talking like garbage-can-sized-- accelerator that brings things up to about 13 MeV. That's the one that we've got in the basement of Northwest 13. The problem is every time these particles bend, they send off photons, what's known as cyclotron radiation. And the higher energy that is, the more intense that cyclotron radiation gets.
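The spiral described above falls out of two classical relations: the orbit radius r = p/(qB) grows with momentum, while the non-relativistic revolution frequency f = qB/(2 pi m) stays fixed, which is what lets the dees kick the particle with a constant RF frequency. A sketch, assuming protons:

```python
import math

Q_P = 1.602e-19  # proton charge, C
M_P = 1.673e-27  # proton mass, kg

def cyclotron_frequency_hz(b_tesla, q=Q_P, m=M_P):
    """f = qB / (2 pi m): independent of the particle's energy (non-relativistic)."""
    return q * b_tesla / (2.0 * math.pi * m)

def orbit_radius_m(kinetic_j, b_tesla, q=Q_P, m=M_P):
    """r = p / (qB), with p = sqrt(2 m T): faster particles take wider turns."""
    return math.sqrt(2.0 * m * kinetic_j) / (q * b_tesla)
```

Each pass through the gap adds energy, the radius grows, and the particle spirals outward until it shoots out the side, exactly as drawn on the board.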
So you've got this garbage-can-sized device with a little hole right here, and it's just blasting out photons in all directions in this one plane-- let's just call it the plane of death-- which you don't want to be in, which is why this thing is behind 4 feet of concrete shielding, and in the middle of a room, to help-- that 1 over r squared keeps your dose down. But we actually use this plane of death in a synchrotron. What it is is a circular accelerator. It's not quite circular, so let me correct my drawing a little bit. There are straight segments, and there are slightly curved segments. But it pretty much looks like a circle if you look at it from high up enough. In each of these curved segments, there is a bending magnet. That's my best drawing for a magnet. And what this does is it continuously changes the path of these charged particles going through-- usually electrons. And you end up with intense beams. Let me use a different color. You end up with intense beams of synchrotron radiation perpendicular to the original path before it went in that bending magnet. So it's kind of like a gigaelectron volt spinning ninja star of death, except at the end of every one of these stations, you have what's called a beam line. Because there are 60 or 80-odd of these beam lines coming off with, let's say, 80 kV and below Bremsstrahlung x-rays, you can use those for a whole lot of different analysis techniques. You can simply irradiate things. You can send those x-rays through a monochromator to select only one wavelength, and then use that wavelength to probe the structure of matter down to the atomic level. There's actually one of these just down in Long Island. About a 2-and-1/2-hour drive from here, there's Brookhaven National Lab. And they just opened up the National Synchrotron Light Source, or NSLS version 2, where they can actually measure distances with single nanometer precision.
So inside this beam line is a bigger room which is encased in another room which is encased in another room. And the whole point of that is for vibration and temperature isolation. So they maintain this entire room to within a speck of 0.1 Celsius. And it's the least vibrating place, probably, in the US. I don't know about on the planet. But it's got basically no vibration. So the atoms are effectively standing still except for their normal vibrations in the material. But there's no source of external vibration. And the cooling has to come in through these convoluted channels so as not to blow on the sample, so as not to make any convection currents or temperature changes. And they can actually probe the structure of matter with single-nanometer precision using these synchrotron x-rays all produced by Bremsstrahlung. So it's not all bad. You can use Bremsstrahlung for good. Then there's a little bit-- I have to hijack a little more area from Rutherford scattering. You might think about, well, what is the actual spectrum of this Bremsstrahlung? Well, you can look to see what's the probability that an atom enters into any of these concentric, hollow circles. It looks to be less and less likely that you're going to enter through one of the center rings and more and more likely that you're going to enter through one of the outer rings. If you start farther away, there's less of a pull to change the path of that ion or electron, and the Bremsstrahlung is going to be lower in energy. This is actually described by what's called Kramers' law, which says that the intensity of the Bremsstrahlung as a function of wavelength scales with some constant k, and that constant scales with-- surprise, surprise-- the atomic number of the material times some lambda over lambda minimum minus 1 times 1 over lambda squared.
And what this says is that there's some minimum lambda or some maximum energy that you can impart to this Bremsstrahlung, which again-- you can only impart so much energy before you take it all. And there's going to be some sort of a fixed minimum lambda if we draw this intensity. And I graphed this on Desmos just before coming here, so I know it looks something like this, where that right there is lambda minimum. I'm taking up more area. If you then change variables from lambda to the angular frequency where, if you remember, the energy of the photon is just h bar times that frequency-- so it's kind of like converting into energy with just a tiny, little constant in front. And I mean a really, really tiny little constant. You end up with an energy relation that looks like some maximum angular frequency or some maximum energy. And this is kind of a simple, linear-looking relation, this 1 over energy relation minus 1. So if we graph energy versus the intensity of the Bremsstrahlung, you end up with a curve something like this, where your max energy is the same as your incoming particle energy. Now, who here has done any sort of X-ray or SEM analysis before? You have. So can you tell me, is this the Bremsstrahlung spectrum that you tend to see? AUDIENCE: Well, I've done [INAUDIBLE] analysis with imaging. MIKE SHORT: OK. Have you ever gotten a regular, old X-ray spectrum to see what elements are there? Can you draw what one looks like? AUDIENCE: Maybe. MIKE SHORT: You want to try? They're all the same. So if you remember any particular one, you're correct. Yep. There's some peaks. And then what does this background stuff look like? Yeah. There's some noise and junk on the back of it, right? So this is actually correct. Thank you. And what you actually see here is a bunch of characteristic peaks.
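The Kramers' law shape from the previous board is easy to compute and check. Written as I(lambda) = K Z (lambda/lambda_min - 1) / lambda^2, the intensity vanishes at lambda_min (the maximum photon energy) and, if you take the derivative, peaks at exactly 2 lambda_min. A minimal sketch with arbitrary K and Z:

```python
def kramers_intensity(lam, lam_min, k=1.0, z=1.0):
    """Kramers' law: I(lambda) = K * Z * (lambda/lambda_min - 1) / lambda**2."""
    if lam < lam_min:
        return 0.0  # no photons more energetic than the incoming particle allows
    return k * z * (lam / lam_min - 1.0) / lam ** 2

# Scan a grid of wavelengths to locate the peak of the spectrum numerically.
grid = [1.0 + 0.001 * i for i in range(1, 5000)]
peak_lam = max(grid, key=lambda l: kramers_intensity(l, 1.0))
```

The numerical peak at twice the minimum wavelength is a handy check that the change of variables to energy was done consistently.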
These will maybe be like the L lines and the K lines for one element or another, these characteristic X-ray peaks, on top of the Bremsstrahlung, the braking radiation which constitutes the background here. And what you actually see-- I'm just going to draw the background curve under Julia's curve here-- looks something like this. What happened to the real spectrum? Why don't we observe what actually exists? There are a couple of reasons. Does anybody have an idea? So let's take this to the extreme. Why don't you think you would observe physically-- and this is when we actually get into the real world-- any x-rays with energy in, let's say, the eV range if you were to try and observe any x-rays at all? This is where we actually get into what these detectors look like. So there will be some active piece of your material if this is your detector. This is most definitely not Rutherford scattering anymore. And there's got to be some window. We can make it as thin as we possibly can. And they make it out of the most X-ray-transparent structural material that they can, which tends to be beryllium. So beryllium has got an atomic number of 4. It's the first and lightest element that you can make structural anythings out of. So if you want to protect your detector from, let's say, air or something-- if this were full of air, it would absorb the x-rays, so you want there to be pretty much nothing. You can put a very thin, seven-micron beryllium window in front. But the problem is we've already got one of these mass attenuation curves. And when you get down to these energy levels, you attenuate everything. So the lower energy your Bremsstrahlung is, the less likely you're going to see it. So even though this is the actual Bremsstrahlung spectrum, this is what we observe. And I haven't finished grading the tests yet. But like I promised, for the two folks who do the best, I'm going to ask you to bring something in for elemental analysis.
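The window effect is just exponential attenuation, exp(-(mu/rho) rho x). The beryllium mass attenuation coefficients below are rough illustrative placeholders (real values come from tables such as NIST's), but they show why a 7-micron Be window passes 10 keV x-rays almost untouched while eating a large fraction of 1 keV ones:

```python
import math

RHO_BE = 1.85  # beryllium density, g/cm^3
X_BE = 7e-4    # 7 micron window, expressed in cm

# Rough, illustrative (mu/rho) values for Be in cm^2/g, keyed by photon energy in keV.
MU_RHO_BE = {1.0: 600.0, 10.0: 0.65, 100.0: 0.13}

def transmission(energy_kev):
    """Fraction of photons making it through the window: exp(-(mu/rho)*rho*x)."""
    return math.exp(-MU_RHO_BE[energy_kev] * RHO_BE * X_BE)
```

With these numbers, a 1 keV photon has roughly a coin-flip chance of surviving the window, while a 10 keV photon sails through, which is exactly the low-energy roll-off seen in measured spectra.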
This is precisely what we're going to see. You're going to see this Bremsstrahlung, which is not the actual spectrum coming out, but this has to do with the absorption of x-rays in the detector window, as well as some self-shielding. If we're using a scanning electron microscope, which is nothing more than an electron gun, and you're firing electrons to some distance in the material where they'll then interact and send off x-rays, you've also got this part of the material to contend with, some self-shielding. So not only do the x-rays all have to get through the detector window to be counted-- so the high-energy ones, which will have small wavelengths, get through, but the low-energy or long-wavelength ones might get stopped. You also have to get out of the material itself. The electrons don't just produce x-rays in the outer atoms of the material. They go down a micron or two. And then the x-rays that are produced in those interactions have to get back out again. So it's interesting. It's kind of like the inverse photoelectric effect, right? In the photoelectric effect, a photon comes in, electrons come out. In a scanning electron microscope, electrons come in. Photons come out. Many of them are these characteristic x-rays. Because now if we start to review what sort of interactions are possible when we fire electrons into material, we've just gone over Bremsstrahlung. And we know that with higher and higher energy electrons, you're going to get more and more Bremsstrahlung. But you're not going to see the actual x-rays produced at low energies no matter what, because this isn't just a system on paper. It's real life. And you're going to get characteristic x-rays that come from energy transitions. So if you fire in an electron, and you happen to undergo one of these ionization collisions, you might just knock an electron out. So let's say an electron comes in, knocks an electron out.
Then another electron fills that shell, giving off-- in this case, it would be a k alpha, or a shell 2 to shell 1 X-ray the way I've drawn it, which is why Julia has got everything right on the spectrum here. There's the Bremsstrahlung, and then there's these characteristic peaks. The background is due to radiative stopping power, and these characteristic peaks give away some of the ionization stopping power. And so all in one spectrum, you can see just about everything going on in this material. The last thing that you can't see, which I would be remiss not to talk about as a radiation materials scientist, is radiation materials science, which is mostly concerned with Rutherford or hard sphere scattering. This is the last of the major interactions between charged particles and matter that concern us. It's not really in your reading except for, I think, being mentioned once, because they didn't seem to think it's important. But I happen to think it's extremely important, because this is the basis behind radiation damage. In all of these collisions right here, you have some sort of displacement of electrons. And those electrons can get ionized, and other ones will fill the holes back in, and whatever they'll do. But at no point were nuclei displaced. You can transfer a lot of energy without moving any atoms around. But when your energy starts to get lower, you end up with a new kind of stopping power-- let's call it nuclear-- which scales with, as always, a number density times pi. Let's see. Little z, big z, e to the 4th-- everything looks pretty similar so far, except for now we've got the energy of the incoming material, and now we have a mass ratio. Because in this case, you're actually undergoing some sort of a hard sphere collision between one atom and the other, times the natural log. This is going to look awfully familiar.
It ends up being some energy term over-- actually, let's just go with-- yeah, gamma ei over some new energy. This thing right here is called the displacement threshold energy. And it ranges from about 25 to 90 eV, but it's usually 40 eV. It's the minimum amount of energy that has to be imparted to a nucleus smack head on in order for it to move from its original atomic position. And that's what's known as a hard sphere type collision. Or in this case, it's just like all the other q-equation-looking scenarios that we looked at before. So let's say the little nucleus goes off, and the big nucleus goes off. This should look familiar by now because I've harped on it probably too much. Now what I want you to consider is this big nucleus had a position. It liked where it was, and now it's been knocked away. What's left over is an atomic vacancy, which is the most basic building block of radiation damage. So sometimes it's neat to look at the ratio of-- let's see. Make sure I get this ratio right. Ionization on top. To look and see when ionization versus radiation damage is actually important. And the ratio scales with 2 times the mass of the nucleus over the electron mass times z, times their respective natural log threshold terms. And that's the ionization potential and log gamma ei over ed. So what this says is that for higher energies, ionization is more important, and for lower energies, nuclear stopping power or radiation damage is more important. If we graph these two, let's say, on a log e graph-- and let's say we have our nuclear stopping power in blue and our ionization stopping power in green-- we end up with curves that look something like this. That's our nuclear. That's our electronic or ionic.
So what this actually says is if you fire high-energy neutrons or high-energy protons into a material, it's ionization that does most of the damage at high energies, until you slow down to around the 10 to 100 keV level, curiously very similar to this 500 times i bar, the mean ionization potential, at which point Rutherford scattering or hard sphere scattering becomes the dominant mechanism. And so what this says is if we want to draw a picture of what radiation damage looks like-- let's say we had a proton that we're firing into a material, and it hits some atom that we're going to call the PKA or Primary Knock-on Atom. That PKA then becomes-- let's say it was nickel. It's like a nickel plus 26 ion, because you've knocked the nucleus out of its electron cloud, effectively, and it's now flying out through the material. That proton might go off to do more damage somewhere else, but it's not actually the protons or the incoming particles that do the bulk of the final radiation damage. The radiation damage is mostly self-ion radiation. Even though it all starts with the incoming particle, nothing would happen if the incoming particle didn't show up. Most of the final results of the damage are from these heavy ion collisions. And so we actually talked a little bit about-- I think we talked about when this ionization starts to pick up for electrons versus heavy ions. If you think about when electrons start to radiate away most of their energy, losing it to radiation, it's like 10 to 100 MeV. What would be the case for a heavy ion, like even a proton? Well, what's the only thing that changes when you change from an electron to a proton here? AUDIENCE: The charge. MIKE SHORT: Well, yeah, the charge is the opposite sign, but of equal strength. But what else in this formula? AUDIENCE: [INAUDIBLE] MIKE SHORT: That's right. So for heavy ions, for even things like protons, you need to go at approximately 1,837 squared times more energy than the electron.
So we're talking in the gigaelectron volt to teraelectron volt range for ions. So this is why Bremsstrahlung is not important for any sort of ion interactions unless you are a high-energy physicist and you're working in the GeV or gigaelectron volt range and up. So we like to say in the radiation damage field, if you want to know the total stopping power from all interactions, you have to take into account the ionizations. I'll just make that a minus sign for the symbols. The nuclear and the radiative. For most radiation damage processes except for high-energy electron radiation, we neglect that. The reason is that the radiative to ionization stopping power is pretty close to zero. Even at 10 MeV, it's like 1 over 2,000 squared-- or 1 over, I guess, 4,000,000. It doesn't matter at all. With heavier ions, it becomes even less of an issue because you can deflect a heavier ion less with the same Coulomb force. And so what ends up only mattering is the ionization stopping power and the nuclear stopping power. It's this nuclear stopping power that leads to collisions. And it's like two of five of. So I want to stop here and answer any questions. And I'll hijack a bit of the neutron discussion on Thursday with some review of this and filling in the last gaps of radiation damage. So anyone have any questions from today? Yeah? AUDIENCE: Can you repeat what you just said about why the radiation term goes away? MIKE SHORT: Yeah. The radiation term goes away because of that. AUDIENCE: And that's under the assumption you're working with a proton or a heavier-- MIKE SHORT: Yeah. If you're working with an electron, then it actually does matter. If you're firing 10 MeV electrons into something, you must account for the radiative stopping power, because there's a lot of it. At 10 MeV, there's as much radiative as ionizing, and there's basically no nuclear yet.
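The "because of that" is the (m_e/m)^2 scaling. A two-line check shows how hard Bremsstrahlung is suppressed for heavier particles (and, for the record, the muon-to-electron mass ratio is about 207):

```python
ME = 0.511    # electron rest energy, MeV/c^2
M_MU = 105.7  # muon
M_PR = 938.3  # proton

def radiative_suppression(m_particle):
    """Bremsstrahlung relative to an electron of the same energy scales ~ (m_e/m)^2."""
    return (ME / m_particle) ** 2
```

For protons that factor is about 3e-7, which is why the radiative term is safely dropped for any ion-beam or radiation-damage calculation.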
But for anything heavier-- even muons, which are approximately 237 times heavier, or protons, which are approximately 1,837 times heavier, it totally doesn't matter because it scales with the mass ratio squared. It might be 267 for muons. I forget that middle number. But still, 267 squared is a pretty big number. Was there another question here? I thought I had seen a hand. So remember, you guys had said you want to see some radiation material science or radiation damage. This is where it comes from. This is why I love teaching graduate radiation damage and 22.01 at the same time-- because they're the same thing. Except you guys get the derivations, and in the grad class, I say I assume they know it. And then in the homework, I find out they don't. But it doesn't matter because they're supposed to. At least you guys will, so you've got the power. And knowledge brings fear, as I like to say. OK. I'll see you guys on Thursday when we'll wrap up a little bit more radiation damage, because I can't resist. And then we'll start moving into neutron interactions, which is kind of taking a step down from here, because there aren't really any electronic interactions. But because we can deal with enormous populations of neutrons, things are going to get messy. Have you seen the equation shirts that we have here, the neutron transport equation shirts? Yeah. We're going to derive that on Thursday.
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: Today I want to pick up where we left off-- well, to remind you where we left off last time, we were watching videos of crushing things with the explicit purpose of understanding material properties, so that we can talk a little bit about radiation damage and nuclear materials. Since I got more than a few requests to say, what's all this nuclear materials? It's like the biggest research field in the department, and yet, it's not talked about in 22.01. Well, now it is. So we talked before about all the different stages in radiation damage, from creation of single defects to their clustering into things like voids or loops and superstructures that end up having macroscopic effects on material properties, all of it due to the production of crystalline defects by radiation. And to refresh your memory quickly, I'm going to jump ahead to the stress strain curve that we were looking at before we started watching videos of crushing things, to remind you about the different material properties and what they actually mean. So anyone remember what we mean by toughness in relation to this curve? AUDIENCE: The area [INAUDIBLE] MICHAEL SHORT: That's right. The amount of energy it would take to actually cause this material to fail. That's a measure of toughness. How about strength? Remember this curve is stress, which is a force per unit area, versus strain, which is an amount of elongation.
The strength of the material is how much stress you can put in until it starts to either plastically deform or it hits its UTS, ultimate tensile strength, where it will just fail. How about ductility? What do we mean by that? Either intuitively or on the curve, yeah? AUDIENCE: How much you can stretch it? MICHAEL SHORT: Exactly. How much you can stretch it before it fails, indicated by this point right here. The strain to failure would be a good measure of ductility. And finally, stiffness. What do you think? Stiffness is more of a response function, so it's how much does it deform in relation to how much stress you put into it. So it's the slope of this part right here. That's why I want you guys to know that we actually mean different physical things by these properties, which will be important to note when we start to discuss what radiation damage actually does. So the basic mechanism of radiation damage is like you might imagine. Let's say this green particle is a neutron or a heavy ion or a proton or an electron or anything. Anything that's energetic enough to cause atomic displacement. So as that neutron or whatever enters, it will strike some of the atoms in this perfect crystal, creating what's called a primary knock-on atom, or PKA, for short. And then that neutron and the released PKA will go on to hit more and more atoms, resulting in what we call a damage cascade, leaving behind a lot of different types of defects. We talked about these last time, but I'll just refresh your memory. A vacancy is a type of defect that, well, it's not really a thing, right? It's just the absence of where an atom would have been, but we refer to them as defects of their own that can diffuse and move, because let's say another atom moved into the position of this vacancy. Then we can say the vacancy moved to that atomic position. There's also interstitials, or atoms where they shouldn't quite be.
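Those properties can each be read off a digitized stress-strain curve. As a sketch (the synthetic data points below are mine, not from the lecture): stiffness is the slope of the initial elastic segment, and toughness is the area under the whole curve out to failure.

```python
def stiffness(stress, strain, elastic_points=3):
    """Young's modulus: slope of the initial (elastic) portion of the curve."""
    i = elastic_points - 1
    return (stress[i] - stress[0]) / (strain[i] - strain[0])

def toughness(stress, strain):
    """Area under the stress-strain curve (trapezoid rule): energy per volume to failure."""
    area = 0.0
    for i in range(1, len(strain)):
        area += 0.5 * (stress[i] + stress[i - 1]) * (strain[i] - strain[i - 1])
    return area

# Hypothetical data: stress in MPa, strain dimensionless, failure at 5% strain.
strain_pts = [0.0, 0.001, 0.002, 0.01, 0.05]
stress_pts = [0.0, 200.0, 400.0, 450.0, 500.0]
```

With this data, ultimate tensile strength is just max(stress_pts) and ductility is the last strain entry, so all four properties come from one curve while staying physically distinct quantities.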
And this leaves behind a whole bunch of damage that we quantify using a measure called DPA, or displacements per atom. It's a simple measure of how many times every atom has left its lattice site. That's it, though. It's not actually a unit of damage, and I'll be giving a talk at MRS, the Materials Research Society conference, tomorrow, railing against this DPA unit, and I'm going to explain this a little bit right now. What is a DPA? A DPA measures the number of times that each atom has moved out of its original site, but it has nothing to do with how many times it stays out of its original site, because the DPA only measures what we call the ballistic stage of radiation damage. Let's see if this works. I've just realized I can jump back to a slide without inducing epilepsy. Yeah. So what DPA actually measures is how many times does this process happen? How many times do the atoms get knocked around? But it says nothing about where they end up, and that's the really interesting part about specifically radiation material science. Because let's say one of these interstitials were then to combine with one of these vacancies. It's like they were never there. Even though they were displaced, and would be counted as part of the DPA, or the radiation damage dose, the net effect on the crystal material is nothing. So let's say what really the DPA is. It's a simple formula that I think you guys may recognize. Does this look familiar from all of the neutronics that we've been doing? It's yet another reaction rate. It's an energy dependent flux times another type of cross-section that we call the damage displacement cross-section, or sigma D, and it's integrated over your entire energy range, and that's all there is to it. So with what you know from 22.01, you can understand the basic unit of radiation damage.
As you might imagine, we've had four lectures on neutronics, so if you can understand all there is to know about DPA after four sophomore lectures, it's probably a pretty simple unit. You're right, it is. What goes into this damage displacement cross-section is also something that might look a little familiar: a cross-section that says, what's the probability of some particle coming in with energy E and imparting kinetic energy T to another struck atom? That comes right from-- remember our treatment-- I think I've drawn this probably 50 times now-- our hollow cylinder treatment of a charged particle with charge little ze interacting with a particle of charge big ZE at some impact parameter b. We wanted to know, for all possible approach paths, the area of this hollow circle, or the probability that this particular approach path is taken. It's just the area here, 2 pi b db, with some constants in front of it, which actually is that cross-section: what's the probability that our particle goes in with energy E and imparts kinetic energy T? It's directly related to that impact parameter b. And this is the same thing that you're seeing right here. You then multiply by this little function nu of T, which represents the amount of damage, or the number of displacements done, for each one of these reactions. And there are simple models, mostly linear models, for: if a particle comes in with energy E and leaves with energy T, how many displacements happen? It's a pretty simple piecewise linear model, and that fairly well approximates the number of displacements that happen. But I want to get across the idea of DPA versus damage. They're two very different things, and they're often equated. Much like the material properties of strength, ductility, hardness, and toughness are equated in colloquial speech, which is absolutely wrong, so is equating DPA and radiation damage. Because DPA, again, just measures the number of times that an atom is displaced.
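That piecewise linear model for nu of T is usually written in the classic Kinchin-Pease form. A minimal sketch, assuming a displacement threshold energy of 40 eV (a typical textbook value, not something quoted in this lecture):

```python
def n_displacements(T, E_d=40.0):
    """Kinchin-Pease estimate of the number of displaced atoms for a
    PKA of damage energy T (eV); E_d is the displacement threshold."""
    if T < E_d:
        return 0              # too little energy to displace anything
    elif T < 2 * E_d:
        return 1              # enough for exactly one displacement
    else:
        return T / (2 * E_d)  # linear regime

print(n_displacements(10))    # below threshold
print(n_displacements(50))    # single displacement
print(n_displacements(8000))  # many displacements
```

The three branches are the whole "piecewise linear" story: nothing below threshold, one displacement just above it, and a line proportional to T beyond twice the threshold.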
Damage is some measure of the number of messed-up atoms at the end of the game, and they operate on very different timescales. It takes femtoseconds to picoseconds for a damage cascade to happen. So the DPA is all over in less than a picosecond, but it can take years for all these different defects to diffuse, to cluster up, and to form these superstructures, and actually end up causing the damage that can lead to material property degradation. So what sort of factors would affect the speed at which these different defects end up finding each other? What could you vary about a material or its environment to change the speed of these atomic diffusion jumps? AUDIENCE: Temperature. MICHAEL SHORT: Indeed. Temperature. Reading off my list-- well, the whole list jumped up. OK. You got the first one. What were you going to say? AUDIENCE: I was going to say temperature also. MICHAEL SHORT: OK. Yeah, absolutely. Temperature determines diffusivities. It also can change phases or crystal arrangements, like for the case of anything iron-based. The dose rate, the rate at which those neutrons come in, can change the rate at which the defects cluster up. Chemistry, if you have solute atoms, which I've drawn here. You may have, let's say, chromium atoms in iron, and the chromium atoms are a little bit bigger. Defects may be attracted to or repelled by those extra solute atoms, changing the way that they interact with each other. And then microstructure. Things that are bigger than on the order of atoms. Grain boundaries, dislocations, all of those defects that we talked about last time. Just to refresh your memory of what those are: we have been talking about zero-dimensional defects like vacancies. We spent a while on dislocations, these one-dimensional defects that other defects can be attracted to. We saw an example of a two-dimensional defect, known as a grain boundary, where you can see this line between different arrangements of atoms.
And there can be three-dimensional defects, like inclusions of some separate phase sitting in the material, like the manganese sulfide we found in the Alcator fusion reactor's power rotor. And the presence and density of all those different defects can be quite strongly influenced-- let me start that sentence over. The movement and clustering of those defects can be quite strongly influenced by the presence of all those other defects. So again, the DPA actually tells us this part of radiation damage, and that's what we tend to simulate with these ballistic binary collision approximation simulations, where we just say, like billiard balls, how many atoms knock into each other? What it doesn't tell us is everything else, and it's the stuff that happens here that can tell us whether our materials will fail in nuclear reactors. And there's evidence for this. I'm not just ranting against it-- no, I am, but I'm doing so with evidence, so it's justified. So here's a nice experiment I like to show in every talk for this case. These folks took pure nickel and put it in the same reactor, at the same temperature, and got the same amount of swelling. All the conditions were the same. Same temperature, same materials, same microstructure, same reactor, same neutron energies. Just a different dose rate. A 30% difference in the rate at which neutrons arrived at the nickel, and they get the same result in void swelling, one of those bad things that happens, at two and a half times the DPA, which tells us that there's a very strong dose rate effect for material damage. So if you want to answer the question, well, how much dose does it take to reach 3% swelling in nickel? You can't answer that question; you don't have enough information. Even if you say, how much dose does it take with these neutrons at 600 Celsius in this one reactor? You can't answer that question. Kind of tricky. And a lot of the rest of nuclear materials data looks something like this.
Now, I don't want you to worry about what the axes say. They're not readable because they're not important. What I do want you to know is, what's the quality of this data set you see? Would you be bold enough to draw a trend line through a single data point? No. What about three, where it doesn't actually match up with one of them? Or is there any reason why you think they made this parabolic instead of a linear line? I can draw a line that would fit between the error bars of these two right here. So the trick is, doing these experiments is extremely difficult and expensive. Just throwing something near the MIT reactor for a month-- because we did this; we took a few hundred milligrams of copper, aluminum, and nickel, and threw it in a near-core position of the MIT reactor-- cost $40,000, and that did about 0.002 DPA, or about the dose that you'd receive in a normal power reactor in one day. If you want to actually say how long it will take to get materials to the end of their useful life, this tends to be anywhere from 10 DPA in light water reactors, to hundreds of DPA in proposed fast reactors, to 500 DPA for TerraPower's traveling wave reactor. Now, I don't particularly have-- let's see, what's 500 divided by 0.00-- I don't have 10,000 years to wait for the final answer. The best we can do right now is to stick them in a reactor called BOR-60 in Russia. I've actually been there. It's on the very western edge of Siberia-- I don't know if you could call it that-- in a city called Dimitrovgrad. They have a sodium cooled fast reactor. For those of you who are wondering when our advanced reactors are going to be built: they are built. Just not in this country, not very much. But Russia's got a fleet of sodium cooled fast reactors that can get you 25 DPA per year. And if your reactor is going to go to 500 DPA, and you have to know whether or not your materials will survive, you have to wait 20 years for the answer.
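The back-of-the-envelope numbers quoted here can be checked directly:

```python
# Dose-rate arithmetic using the figures quoted in the lecture.
mitr_month = 0.002     # DPA from one month in a near-core MITR position
target_dpa = 500.0     # e.g., a traveling wave reactor design goal
bor60_rate = 25.0      # DPA per year in the BOR-60 sodium fast reactor

years_at_mitr = target_dpa / mitr_month / 12
years_at_bor60 = target_dpa / bor60_rate
print(f"At MITR rates: about {years_at_mitr:,.0f} years")
print(f"In BOR-60: {years_at_bor60:.0f} years")
```

Either way, the qualification timescale dwarfs any reasonable experiment, which is the whole point of the rant.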
So what investor is going to be like, all right, here's $10 billion, and I can wait 20 years for a return on investment? No. You'd wait 20 years just to start building the reactor, which means 40 years for a return on investment. Chances are, if someone's got $10 billion to give, they're going to be dead by the time they get a return. So this is a no-win proposition. So what we really need to know is, what is the full population of every single type of defect in an irradiated material? That's what I mean by damage. Did I show you guys this movie yet? The orange one? We've talked about vacancies in an abstract sense, but this is a movie of one of them actually moving about on the surface of germanium. So this is a scanning tunneling microscope image-- I think that's what it stands for-- and these are atoms on the surface of germanium. And that right there, that darker orange thing moving about, is vacancy diffusion. It's actually happening. You can see it in real time. AUDIENCE: Is this real time then? MICHAEL SHORT: Pretty much, yeah. So I think this was-- yeah-- 30 frames a second, or so. Anyway, I don't remember exactly, but that's why I always reference everything in the presentations. I encourage you guys to look it up. And the only reason these slides aren't up yet is because they're 300 megabytes, and I didn't have the bandwidth to upload that from my house. Now that I'm on campus, I can get a 300 meg presentation up there, because it's full of movies. What sort of things could happen to these defects? So radiation produces all these crazy defects, and then the DPA description is over. What could happen next? [INTERPOSING VOICES] MICHAEL SHORT: Sorry, Jared, and then-- yeah. AUDIENCE: Material could crack. MICHAEL SHORT: Material could crack. That would be the worst case scenario, but that is indeed what happens in the end, and I'll show you some pictures of where that actually happened. Yeah? AUDIENCE: You mentioned that displaced atoms can find their way back?
MICHAEL SHORT: Yep. So they could recombine with different types of defects and annihilate each other. If you have a vacancy and an interstitial nearby, the interstitial can plug the hole of the vacancy, and you're left with another perfect crystal. But now what happens if two vacancies find each other? Then you've got the makings of a void, or what we then call a small vacancy cluster of two vacancies. It's actually more stable for these vacancies or interstitials to find each other and make these larger defects than it is for them to sit alone in the crystal structure. So there is a thermodynamic driving force bringing them together, and then as those defects build up, what Jared said could happen. You could crack the material, because it could get weaker, less ductile, less tough. Weaker is the opposite of strong, and what's the other one? Toughness-- oh, and harder, actually. So for the origins of void swelling, I'll start with the humble vacancy. A void is nothing but a bunch of vacancies, or a pocket of vacuum or gas in a material, and it all has to start with these single vacancies. As they cluster together, they reach this threshold in terms of free energy where putting a few of them together is not quite energetically favorable, but it's not so unfavorable that it never happens. So once in a while, you'll get a few vacancies to come together, and that cluster will survive for a little while. All the while, you're making more and more vacancies nearby, and if the cluster gets to a certain size, that free energy goes negative. And when the free energy goes negative, it becomes stable on its own, and then that void will simply continue to grow, and grow, and grow. And so there's this process of absorption and emission of defects by larger or smaller voids, so if you have a whole bunch of voids near each other, some of them can be emitting vacancies, which can be captured by the other ones, and this is part of why they don't all just disappear at once.
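The free-energy threshold just described is the classical nucleation picture: a surface-energy cost that dominates for small clusters, and a volumetric gain that dominates for large ones. A sketch with arbitrary illustrative numbers, not real materials data:

```python
import numpy as np

# Classical-nucleation sketch of void free energy: a positive surface
# term competing with a negative volume term. Units are arbitrary.
gamma = 1.0   # surface energy per unit area (illustrative)
dGv = 0.5     # bulk free-energy gain per unit volume (illustrative)

def delta_G(r):
    """Surface cost minus volumetric gain for a void of radius r."""
    return 4 * np.pi * r**2 * gamma - (4 / 3) * np.pi * r**3 * dGv

r_star = 2 * gamma / dGv   # critical radius, where d(delta_G)/dr = 0
print(f"critical radius: {r_star}")
print("barrier positive at r*:", delta_G(r_star) > 0)
print("large void stable:", delta_G(10 * r_star) < 0)
```

Below the critical radius the cluster is uphill in free energy and tends to shrink; beyond it, growth is downhill, which is exactly the "goes negative and just keeps growing" behavior in the lecture.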
They have finite lifetimes, long enough that you can build them up to the size where they become stable. Then this free energy eventually curves down, becomes negative, and then they just stick around. And we've actually seen these clusters, or voids, diffusing, so it's not like vacancies alone are the only thing that moves. We've actually seen clusters of defects diffusing, mostly in one dimension, and what you're seeing here is a TEM, or transmission electron microscope, image of one-dimensional diffusion of a vacancy cluster. That little black blob right there is a pocket of vacuum that's moving back and forth. And if it happens to find another pocket of vacuum, it could then combine into a bigger pocket, becoming a bigger and bigger void. The other problem, too, is that most materials generate helium when you irradiate them with neutrons. Did we go over what's called the (n, alpha) cross-section? Does that sound familiar to anyone? All right, I'm going to pull up JANIS like I do pretty much every class. Let me show you what's going on. But this is an important one to note. Because a pocket of vacuum is not that stable, but if you get a little bit of gas to stabilize that pocket of vacuum, then that pressure differential goes down, and that void becomes a bubble, and that bubble is more stable. Good. Because on the last isotope, I was showing someone the [INAUDIBLE] alpha cross-section. How convenient. So among the millions of cross sections that we've gone through, there is this one right here called z, alpha. So what-- Yeah? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Got to clone the screen. That's right. Thanks. There we go. So there's this one here called z, alpha, which means a neutron comes in and an alpha particle goes out. An alpha particle is just an ionized helium ion, which very quickly pulls in two electrons from anywhere else in the metal and becomes helium gas.
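As a rough sketch of what an (n, alpha) cross-section implies for helium buildup: the production rate per atom is just flux times cross-section. The flux and the one-millibarn cross-section below are assumed order-of-magnitude values, not data for any particular isotope:

```python
# Helium production from (n, alpha): rate per atom = flux * cross-section.
# Both input numbers are assumed, order-of-magnitude illustrations.
phi = 1e14             # assumed fast neutron flux, n/cm^2/s
sigma_n_alpha = 1e-27  # assumed (n, alpha) cross-section, cm^2 (~1 mbarn)
seconds_per_year = 3.156e7

he_fraction_per_year = phi * sigma_n_alpha * seconds_per_year
print(f"~{he_fraction_per_year * 1e6:.1f} atomic ppm helium per year")
```

A few atomic parts per million per year sounds tiny, but that helium ends up concentrated in exactly the pockets of vacuum described above.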
And this cross-section is not zero. Especially at higher energies, starting around 2 MeV, there is a small but non-negligible chance that a neutron will go in and a helium atom will come out. And those helium atoms have nowhere to go, so they find the easiest place to sit, which happens to be pockets of vacuum. Voids. And what that actually does is stabilize those voids. So the curve I showed you back here, this is the free energy for a vacuum pocket, a void, and that free energy gets lower and lower as you start to fill that void with gas. So as the voids fill with gas, they become more and more stable, and a lot of materials generate their own gas. So that's that, and they end up forming these bubbles. You guys remember, last time I showed you a bunch of voids. They looked like diamonds, all aligned in the same direction. What do you see different here? They're not quite circles, right? But they're kind of round-edged squares. They are also all pointing in the same direction, if you look carefully at this one, this one, the one above it, and the big one over there. It's harder to see for the small ones, but for these three big ones, you can see that they're all rotated in the same direction, giving away the crystal orientation of the material. But the reason that they're starting to swell up from diamonds into bubbles is that they're full of gas, and the gas stabilizes the voids. You can also get dislocation buildup. Normally, you would have to deform a material to create and move dislocations, but when you apply radiation, you can just create dislocations. I mean, look at that. This is kind of cool. You've got a dislocation source right here. Every one of those lines you see is a dislocation, and you can see it's spiraling out and ejecting dislocations from this one little spot. Any combination of small clusters can collapse into dislocations, or the stress induced from irradiating things can cause more stress that can move more dislocations.
You create what's called a network, a forest of dislocations, that makes things a lot harder to deform. So let's see, I want to show a couple more videos of this, because it's very clear in some of these. You can actually see along these lines right here a few different orientations of dislocations, and if you watch up here, you can see some dislocations moving, and then there's a source that emits more right there. And so all the time, you're creating dislocations that are being emitted from different places and colliding with each other. The trick that we didn't talk about yet is, when dislocations from different directions collide, they get stuck. And when they get stuck, they can't move. And when they can't move, you shift the balance from slip to fracture, which means, like Jared said, it's easier to just break something rather than plastically deform it. And the effects of this are things like stiffening. An increase in the Young's modulus. Because if you remove some of the compliance from the material, or make it stiffer by injecting all sorts of different defects, it takes more stress to impart the same strain. That might not be a bad thing on its own. Your materials get stronger; that sounds like a good thing, right? Not always, because it doesn't just come as stiffening. But now, from an atomic point of view, why does this stiffening happen? Anybody have an idea? I'm going to jump back to the stress-strain curve. So the strength, or the yield stress of a material, is usually defined as this point right here, when you go from elastic, reversible deformation to irreversible plastic deformation. If something gets stronger, it means that this yield stress goes up. And if something gets stiffer, it means the slope goes up. These two tend to happen at the same time. So if something is stiffer and stronger, then the stress-strain curve would be drawn more like that. And what actually physically happens at this point? This is when dislocations start to move.
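The two quantities being distinguished on this curve, stiffness as the elastic slope and strength as the stress where plastic flow begins, can be sketched with synthetic data. The 200 GPa modulus and 0.2% yield strain below are assumed, steel-like values:

```python
import numpy as np

# Synthetic stress-strain data: linear elastic up to the yield strain,
# then a much shallower plastic slope. All values are assumed.
E_mod = 200e9          # Young's modulus, Pa (stiffness = elastic slope)
yield_strain = 0.002   # strain where dislocations start to move
strain = np.linspace(0, 0.01, 101)
stress = np.where(strain < yield_strain,
                  E_mod * strain,
                  E_mod * yield_strain + 1e9 * (strain - yield_strain))

# Recover stiffness as the slope of the elastic region:
elastic = strain < yield_strain
slope = np.polyfit(strain[elastic], stress[elastic], 1)[0]
print(f"stiffness: {slope / 1e9:.0f} GPa")
print(f"yield stress: {E_mod * yield_strain / 1e6:.0f} MPa")
```

Irradiation hardening, in this picture, raises both the yield point and (somewhat) the slope, which is why "stronger" and "stiffer" tend to arrive together.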
Dislocation movement is irreversible. You can't just snap it back when you relieve the stress. And by making something stronger and stiffer, you make it more difficult for those dislocations to start moving. And you can do that by throwing any defect in their way. And since radiation creates pretty much any and all defects, it's a great way to stiffen and strengthen a material. So one of the reasons things get stiffer and stronger is-- remember before, we showed you that video of a dislocation sliding through material? Folks don't remember. I'll bring it up right now. I didn't see a lot of shaking heads. This one, right? This one right here. So remember, before, we showed you the way that dislocations move. So the ways that materials can deform without breaking. If you throw anything in its way, you're going to make this process a lot more difficult. And all radiation damage does is throw things in the way. So if you throw absolutely anything in the way, from solute atoms to interstitials to vacancies, all of a sudden, it's harder for that dislocation to move, because some of those bonds are stretched out, or there's a few extra atoms in the way. And you can then start to create little pinned sections called jogs. Let's say a little vacancy moves over to this dislocation, meaning that it goes up by one atomic position. Then you've got pieces of this dislocation that are not in this preferential slip plane, and they get stuck. So all of a sudden, you go from a completely gliding-- or what we call glissile-- dislocation to one that's stuck, or sessile. Those are the actual materials science words that we use. And what it ends up leading to is a strong loss in ductility. At the same time as things get stronger and stiffer, they tend to get much less ductile. So what you're looking at here are fuel shrouds.
These are fuel boxes that surround the fuel pins in a Russian sodium fast reactor, and usually what you do is grab onto this piece right here and lift up to remove this fuel from the reactor during refueling. What happened here is they grabbed that, they started to pull, and they heard a little clink, and up came half the fuel box, and the fuel stayed down in the reactor with no way to pull it out. So this is the reason why radiation damage is such an important field of study: you might not know anything has happened until you shut the reactor down and go to take out the fuel, and realize that you can't, because everything is as brittle as glass. This is actually from a talk I saw yesterday. We had a summer visitor in our lab from Kazakhstan. And most of their radioactive materials came from this reactor that shut down in 1999 on one side of Kazakhstan, and they wanted to transport those materials to the other side. So they hired the cheapest truck drivers to go on the bumpy roads. And the scientists were freaking out, because only they knew that all of the metal that those guys thought was going to be ductile like metal was more brittle than glass. And any sort of bump would cause just complete shattering of this metal and catastrophic release of radioactive material. So this took them-- let's see-- I think Kazakhstan is smaller than the US. So who here has done a cross-country trip? How long did it take you? AUDIENCE: Six days from Seattle to here. MICHAEL SHORT: OK, and did you stop along the way? AUDIENCE: Yeah. MICHAEL SHORT: OK, so this trip took them 13 days because they went slow. And I don't think the roads in Kazakhstan are as good as they are in most of the middle of the country. The coasts are terrible, but the middle is pretty good. So this trip went slow because the scientists said, this happens. You should be careful. And luckily, there were no problems and no release of radioactive material. Pretty cool.
What you want to happen is for dislocations to move on the easiest planes. And so what I have redrawn here is, let's say you've got a bar of some metal, some face-centered cubic metal. As you pull on it, like this, it will actually deform at about 45 degree angles. You might wonder, why does that happen? But if you look carefully here, what's the closest packed plane of atoms that you can see? It's not this plane normal to the stress direction. It's like this one or like this one, and so you actually end up getting deformation in what's called slip planes, or the easiest directions for things to deform. And without going into any of the math or atomistics, I just want to show you some examples pulled out of, again, the fusion reactor. So this is a piece of rotor steel from the same Alcator rotor where we found that inclusion. We were pulling it in this direction, and look what formed. All of these slip bands at 45 degree angles, showing you that just because you pull on something in one direction doesn't mean it deforms in that direction. It deforms in little slices in the directions that dislocations can move the easiest. So when you actually pull on something, to show you diagrammatically, it deforms something like that. You get a mixture of bending and rotation to make it look like the bar is bending uniformly straight, but on the microscale, it's not. So this is a pretty slick image of a single crystal of cadmium being pulled in this direction, and you can actually see every plane there-- that's a slip plane. That's a plane where dislocations have been moving all the way to the outside of the material, which is pretty cool, and this is the process that you want to happen. If anything is in the way of those dislocations, you don't start forming these slip bands, and you make it more preferable that the thing will just break and fracture. To show you some extreme examples of slip, that's when you have to go nano.
So these are some pillar compression tests that used a focused ion beam-- which we will be using to top off our study of electron interactions with matter and ion interactions-- to take a piece of metal and carve out a little cylinder. And all they did is smash it. They came down and compressively pushed on the cylinder, and look how it deformed. Not in the way you might intuitively expect if you don't know any materials science. It deformed all on 45 degree angles, in very weird-looking compression. Not actually weird, if you know what's going on. There's lots more neat examples of this. If you don't push too hard, you can actually see these perfectly symmetrical slip planes at 45 degree angles to the axis of compression, and in every one of these pillars you see this happening. And this is what you want to happen to nuclear materials, because you're really trying to shift this balance between slip and fracture toward the direction of slip. That means that something will deform a little bit before it just shatters, like those channel boxes from the Russian reactor. Any questions on what I said before I go on to the macroscale properties? All right, let's get into the real world stuff. What actually causes this embrittlement? Well, there's a few things. Remember, we saw videos of those dislocations in a traffic jam. If not, then I'll refresh your memory. It's a phenomenon called pileup. Let's see. There is the traffic jam. Do you guys remember this video now? I know it's been a week, but these dislocations are moving and feeling each other's stress, and so they can't move as easily as they would want to, so you end up with a phenomenon called pileup. This happens both near other dislocations and near any other defect that gets in the way, like a grain boundary. So for smaller materials you end up with more of this pileup, and they tend to be a fair bit stronger, and a fair bit-- they can be less ductile, with some exceptions.
I won't say they're always less ductile, but if you put anything in the way-- notice, this just says barrier-- any other defect can act as a barrier. And this ends up shifting what we call the ductile-brittle transition temperature. This is the property that people worry about for reactor pressure vessels, because you would want the pressure vessel, which encases the entire core of the reactor, to always be ductile in the worst possible situation. The worst possible situation is on the absolute last day of operation at the coldest temperature it could possibly be. As you guys know, or you probably know, when you make something colder, it tends to be more brittle. So who's frozen stuff in liquid nitrogen and broken it before? Good. I'm glad to see a few hands. So what do you guys freeze and break? So you just froze a bottle of Pepto Bismol and shattered it? Nice. That must have been a fun mess to clean up. Yeah, what about you, Sarah? AUDIENCE: A banana and a coin. MICHAEL SHORT: Cool, OK. And you were able to break the coin? There you go. So normally you'd be able to bend a coin, or if it's a one yen coin, you can bite through it. Not when it's immersed in liquid nitrogen. What about you, Charlie? AUDIENCE: Flowers. MICHAEL SHORT: Flowers, OK. Classic. So take a rose and shatter it. Yeah. All these normally ductile materials become extremely brittle when they get colder. So do reactor pressure vessels. The problem is, they don't just get brittle when they get to liquid nitrogen temperatures. At the end of their life, they can be brittle at room temperature from radiation damage. So there is what's called a ductile-brittle transition temperature. Before pressure vessels are irradiated, there's this 50% line, whatever the temperature is where you have, let's say, 50%, or 10%, or 0% ductility. Whatever measure you're using, you say, OK, we always want to make sure that it's got a certain amount of energy absorption capability-- toughness-- at a certain temperature.
And as you irradiate that vessel, this shifts over this way, and this upper amount of energy, the USE, or what we call upper shelf energy, because this looks like a shelf, goes down. What you want to make sure is that this change in ductile-brittle transition temperature never reaches room temperature. You might think, oh, it's OK, reactors run at 300 Celsius, and things are pretty ductile right there. Well, you usually have to shut the reactor down once you're done with it to refuel, and if something goes wrong and there's a pressure spike, you can have a condition called pressurized thermal shock, or PTS. In that case, you would have a sudden pressure wave from, let's say, a steam explosion or whatever you could have, and you want that vessel to be able to absorb that energy instead of breaking in half, because if you break it in half, that's a radioactive release. The way you test ductile-brittle transition temperature is with what's called a Charpy impact test. It's probably the highest tech, lowest tech test I've seen. You simply hit things with a hammer. A very well calibrated, precise hammer. Let me pause so you can see what the sample looks like. You have these little bars with a notch in them. The notch is there to act as a stress concentrator, so you know where the break is going to happen. So in a Charpy test, you line up this little sample, and you've got-- actually, in your reactors, usually, you've got pieces of the pressure vessel in this form lining the inside rim of the reactor pressure vessel. So every time you refuel your reactor, you take another few of these out and you hit them with a very well calibrated hammer. And by actually letting the hammer turn this dial as it moves through the material, you can measure how much energy was absorbed by the material as the hammer comes back up.
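The dial-and-hammer measurement is simple pendulum mechanics: the absorbed energy is the drop in potential energy between the release angle and the follow-through angle. The hammer mass, arm length, and angles below are made up for illustration:

```python
import math

# Energy absorbed by the sample = drop in pendulum potential energy:
#   E = m * g * L * (cos(theta_final) - cos(theta_initial)),
# with angles measured from the hammer's rest position at the bottom.
# Mass, arm length, and angles are made-up illustrative values.
m = 20.0                     # hammer mass, kg
g = 9.81                     # gravitational acceleration, m/s^2
L = 0.75                     # pendulum arm length, m
theta_0 = math.radians(140)  # release angle
theta_f = math.radians(100)  # follow-through angle after the break

E_absorbed = m * g * L * (math.cos(theta_f) - math.cos(theta_0))
print(f"absorbed energy: {E_absorbed:.1f} J")
```

A hammer that swings through nearly undisturbed (brittle sample) gives a small angle difference and a small energy; one that barely follows through (ductile sample) gives a large one.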
So it breaks right through the material-- in this case, it's in a quenched, or brittle, condition-- and for some reason, they have a lot of footage of the guy standing and not breathing, but what I want to show you is what actually happens here. I didn't make the video, I just got it. There we go. So what you can see is that if the hammer were to move through air with absolutely no drag, it would come back to the zero position. If it encounters some resistance, like a piece of steel in the way, the dial then measures the amount of energy in joules that piece of steel absorbs from the hammer blow. The larger that is, the better. And by doing this test at a number of different temperatures, you can recreate this ductile-brittle transition temperature curve. So they'll take a few Charpy coupons, they will test them at, let's say, every 25 Celsius, get a bunch of points, draw the line through the points, and decide where the material is brittle. At what temperature will it become brittle? To show you what something looks like when it's not brittle, the same test is done in what's called the normalized condition, where you simply heat the steel to a high temperature, relaxing out most of the defects and bringing back as much of the perfect crystal structure as possible, which is really good for letting dislocations move through it. So the same test is done by the same awkward fellow who likes to stand there and not breathe, but you'll notice a very different result of this test. It doesn't look like it, but if you actually look at how much energy was absorbed, it's much, much higher. Something like 18 times more energy. And you can qualitatively see the difference between these two conditions by looking at the fractured surfaces, and this is where it starts to get intuitive. Something that's ductile would tear more like taffy, where something that's brittle would cleave, or break in half much more smoothly. So these are the two pieces of metal that we just showed.
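The curve drawn through Charpy points taken every 25 Celsius is commonly modeled as a hyperbolic tangent between a lower and an upper shelf. The shelf energies, transition temperature, and width below are invented for illustration; the code then recovers the transition temperature as the mid-shelf crossing:

```python
import numpy as np

# Hyperbolic-tangent model of the ductile-brittle transition curve.
# All parameters are invented illustrative values.
USE, LSE = 180.0, 10.0     # upper and lower shelf energies, J
DBTT, width = 20.0, 25.0   # transition temperature and width, deg C

def charpy_energy(T):
    """Absorbed energy as a smooth step between the two shelves."""
    return LSE + (USE - LSE) / 2 * (1 + np.tanh((T - DBTT) / width))

T = np.arange(-100, 201, 25)   # one test every 25 Celsius
E = charpy_energy(T)

# Recover the transition temperature as the mid-shelf crossing:
mid_shelf = (USE + LSE) / 2
recovered = np.interp(mid_shelf, E, T)
print(f"recovered DBTT: about {recovered:.0f} C")
```

Irradiation, in this picture, slides the transition temperature to the right and pulls the upper shelf down, which is exactly the combination of shifts described for reactor pressure vessels.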
You saw the one that absorbed 180 joules by lots of deformation, and the one that absorbed 10 joules by fracturing in a brittle way. This is what you want your reactor vessel to behave like, but the problem with these ductile-brittle transition temperature curves is that it's not just this part that you're worried about, it's that part. So even at high temperatures, things get less ductile. So it's a combination of temperature and number of defects. And if either one of these criteria fails-- if you become too brittle at low temperature, or your total ductility at high temperature goes down too much-- that's the end of life of your reactor vessel, and this is one of the biggest problems in life extensions of light water reactors. They were built for 40 years, and they originally had licenses for 40 years. How many of you guys have heard of the license extensions going on now, to 60 or 80 years? Yeah, so a few of you guys. You've heard, why not run the reactor longer? Not build a new one, but keep getting all this clean, green, cheap nuclear energy? This is why. You have to be absolutely sure that your vessel, your primary containment, will survive. And we're not so sure, because-- well, let's jump to the part of the video that's got the Charpy coupons. Those. We ran out. We only planned to put these vessels in service for 40 years, and folks put 40 years' worth of these coupons, plus some extras, in the reactor vessel. Now, in order to prove that it's actually safe to continue operation, you have to have some amount of material to test and say, OK, this vessel is still ductile, it's still going to work. So what do you do when you run out of coupons? Anyone have any ideas? Because I'm sure the industry would love to hear them. AUDIENCE: Would you have to start using material from the vessel itself? MICHAEL SHORT: You could start using material from the vessel. That's actually what I plan to do too, but with some very strong caveats.
So if you were to scoop out a piece of the vessel, you then create a stress concentration. In addition, the reactor vessel is a gigantic forging of really thick carbon steel with a very thin liner of stainless steel. And the stainless steel is there to prevent corrosion from the reactor water. That thin, quarter-inch bit of stainless steel is what actually saved what could have been one of the worst nuclear accidents in US history, at the Davis-Besse plant, where there was a crack in the vessel. Boric acid actually ate through a whole chunk of the pressure vessel, leaving the stainless steel intact. And it's that little quarter inch of stainless steel that saved the plant. But if you were to take something out from the inside of the vessel, the part that gets the most damage, you'd be taking out some of the stainless steel, which is a problem. You could take a piece out from here, maybe the outside, but then you've got a stress concentrator. Any sort of chunk that is missing is where a crack is going to preferentially form, so you would weaken that vessel by taking a piece out. Anyone else have any ideas? Yeah? AUDIENCE: Is it impossible to just replace the vessel? MICHAEL SHORT: Yes. A new vessel means a new reactor. So the license for the plant is intimately tied to the license for the vessel. Any other ideas? Yeah? AUDIENCE: At the creation of the vessel, just put extras in there and then take those out. MICHAEL SHORT: That's what they did, right? So that's why these Charpy coupons were there, but what do we do about the vessels that we already have? Yeah? AUDIENCE: Can you make Charpy coupons that are similar to the status of the ones most recently taken out of the vessel, and then just put them in? MICHAEL SHORT: That's what they're doing. So they're taking these Charpy coupons, which, this is bigger than actual size.
They break them, so let's say this region's all garbage, and then they cut little mini Charpy coupons out of the last piece, and they're putting those back in. So that's absolutely right. You've just probably recreated a year's worth of licensing work and ideas in a class. But I just want to get back to Charlie's idea because that's what I think has to happen: you'd like to be able to take a piece out from the actual vessel and run a test on it. The only way to do that is to go nano. Take the tiniest, tiniest little piece out, and perform some other sort of measurement. So this is the idea that our group has had in using what's called stored energy of radiation damage, so I don't mind telling you about it, even though it's not like funded or papered yet, because it's educational, and it's cool. So every kind of defect takes energy to create. Defects don't just create themselves. You either have to raise the temperature of a material or, in our case, irradiate it. And it's the energy of those incoming neutrons that bounces around different atoms and creates all these different types of defects. So those defects are storing energy in the material. And so if you think about how much energy it takes to destroy something, it would have to be that the energy it's already stored plus the energy that you put into it during the test can reach the failure energy. What if you could measure the stored energy? What if there was a way to know how many of each of those defects there actually were in a material? We think there is. Well, we know there is. It's called differential scanning calorimetry. It's a way of measuring the change in heat capacity of a material, where you take two very small furnaces-- you don't have to put this in your notes, by the way, this is just for fun. You take two small furnaces, put your chunk of your material on one, and you apply a lot of heat to both of them.
And you look at the difference in the amount of heat you have to put in to keep the two at the same temperature. So normally, you would get the heat capacity of a material, how much heat it can store per degree Kelvin. If this material's got a bunch of defects already in it, then you should release that defect energy by heating it, and that would take a little less energy to heat it up, but there's a lot of problems with calorimetry, so we're actually using what's called nanocalorimetry. We're doing this process on nanograms of material and seeing if you can irradiate something and measure its stored energy, because if you could, you could take a tiny little razor blade, take out the smallest sliver of the vessel-- smaller than a grain of sand. Not enough to cause a crack-- enough to measure its stored energy. And I want to show you guys what this process looks like. So I'm just going to deviate from the actual lecture, and jump into the topic I'll give tomorrow. I think it's more interesting and more relevant. There we go. I'm going to skip through some of this stuff, but it's within the last five minutes, I'll try to get through this, see if this is a record. It's what we call the ultimate snipe hunt. Has anyone here been on a snipe hunt? What's a snipe hunt? AUDIENCE: People bring you into the woods and tell you you're looking for a bird that doesn't exist. MICHAEL SHORT: That's right. Pretty much this, right? They say bang a bunch of sticks, get a bag, and go look for snipes. They don't exist, right? They actually do. Snipes are real. You pretty much have to be British to know it, because they hunt them there for sport, and apparently, they're delicious. That's actually where we get the term sniper, because the actual size of the snipe compared to the sniper is about that.
If you can shoot that bird with a gun, you are an expert marksman and deserve the delicious and tiny treat that you've then blown apart with your bullet-- AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yeah, exactly. So you can, you know, rain bird dust on whatever meal you've already prepared. That's what it's like finding these radiation damage defects too. Because some experiments have been done to plot the number of defects versus their size. And as the defects get bigger, the number of them decreases. So most defects are very, very small, and it turns out that-- first of all, the resolution of the screen is funny. I think I know how to fix that. Clone the screen and then jump back to presenter mode. That usually-- that's not what I wanted. That's what I wanted, great. Most of the defects that cause these reductions in material properties are too small to see, even in the transmission electron microscope. So I don't have to tell you this stuff again, that I don't like the DPA, I showed you that. What we want is that. We took some inspiration directly from the Manhattan Project. Luckily, I have an uncle who works at the DuPont library, and DuPont was quite responsible for the Manhattan Project. So this memo between Eugene Wigner and Leo Szilard-- one of whom won the Nobel Prize, the other one probably should have-- said, hey, radiation stores energy by neutron collisions, like cold working and amorphization. So we've dug up this original memo from the 40s, and said, let's do this for everything, because every defect has its unique amount of energy that it stores in creating it, some different amount of eV per defect, and we've done some molecular dynamics simulations to show that this amount of energy stored is pretty universal. When you irradiate something, we predict that it stores about 2% of its energy in radiation defects. So if you know the number of neutrons that hit, and you know the amount of energy per neutron, you know how much you're looking for.
You know what your signal should be. And to jump through to the whole idea of differential scanning calorimetry, it's like what I drew here, but a lot more legible. You simply heat two materials, one of which contains your sample, measure their temperature, and look how much energy it takes between them to keep them at the same temperature. We did some of these measurements on a piece of steel from the nuclear reactor, and we got a whole bunch of interesting looking peaks for the red curve compared to the blue curve in the other irradiated conditions, so we think there's something there. So we tried a more controlled experiment, irradiating aluminum with helium ions in the accelerator in Northwest 13. And we were encouraged because the initial time that we heated this material, we got some stored energy out of it in some funky spectrum that might tell us what the defects are. The bad news is we got that with the control heating too, and the really bad news is that when you normalize all these curves, you get something that you can't tell if I drew it or my son drew it. Looks suspiciously like the doodles that he does, not scientific data. And the problem is that DSC, differential scanning calorimetry, induces a lot of artifacts in the signal that we couldn't separate from the noise. So our solution was to go nano. To use a nano DSC, or nano differential scanning calorimeter, that can heat about 10,000 times faster than a traditional DSC. So you can get your energy out from smaller materials way faster than these artifacts can manifest themselves. What we think is going to happen is that every one of these peaks here is going to correspond to one type of defect that's released at a certain temperature. And by extrapolating, or say, integrating the area under those curves, you get the energy in each type of defect. And by extrapolating to a zero heating rate, you should know which type of defect they are.
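That "integrate the area under those curves" step can be sketched with synthetic data: subtract the reference trace from the sample trace, then integrate the excess power over time to get joules of stored energy. Everything below is fabricated for illustration, with one Gaussian peak standing in for a real defect-release event, not actual calorimetry data:

```python
import math

def trapezoid(xs, ys):
    """Trapezoidal integral of y over x."""
    return sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2
               for i in range(len(xs) - 1))

# Synthetic excess-power trace (sample minus reference), watts vs seconds.
times = [i * 1e-5 for i in range(201)]    # a 2 ms heating sweep
true_stored = 5e-6                         # pretend 5 microjoules are stored
center, width = 1e-3, 1e-4                 # peak position and spread
excess_power = [true_stored / (width * math.sqrt(2 * math.pi))
                * math.exp(-0.5 * ((t - center) / width) ** 2)
                for t in times]

stored_energy = trapezoid(times, excess_power)   # area under the peak, joules
print(f"recovered stored energy: {stored_energy * 1e6:.2f} microjoules")
```

With several peaks in a real trace, the same integration would be run over each peak separately to apportion the stored energy among defect types.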
And if you know which defects you have and how many of each one, you know the full defect population in a material, and you should know its material properties. Because we already know if you have this many dislocations, it's this brittle; the question is how many dislocations. So we start off by-- we use a different kind of calorimeter. It actually fits on a chip. In fact, there's two on a chip. There's one that we put our material on, and one as a reference, and we put both in the accelerator being irradiated at the same time to control for that effect, and this is what they actually look like. The scale bar here is 100 microns, and that transparent spot is a little bit of aluminum that we vapor deposited onto the calorimeter. Right there. And so the way this process works is we take our DSC chip, we put a mask over it, vaporize some aluminum to deposit on one of the calorimeters, take the mask away, and irradiate the whole thing, and then finally put it in the nanocalorimeter, and I'll show you what happens slowed down by a factor of 1,000. That pulse right there. That whole thing just went from zero to 450 C in a millisecond. And the reason it took a second is because I've slowed down the video by 4,000 or by 5,000 times, and that little pulse of heat actually released some of the defect energy. We were able to see very clearly, the first time we heated the sample, this extra area corresponds to some sort of energy release. We then heated that same sample a whole bunch of times, and made sure that it was always the same, which meant we had a fully relaxed material. And it shows some sort of a trend. If you note, this data was taken like two months ago, it's pretty fresh. Not published yet, so hopefully by the time this video comes out, it will be. And we see some sort of trend between the amount of irradiation it gets and the amount of stored energy you can get out of it. So this is what we hope can be used instead of those Charpy coupons.
We can go much, much smaller and just take out tiny pieces of the vessel and get the same information as you would from a Charpy test, but on the nanoscale. So the question then is, where are the defect fingerprints? Where are those individual defects that we were looking for? Well, I think they're just not popping up right here. The reason for that is we picked a very fast heating rate for our experiments, and doing these sorts of measurements on other materials shows that at 10,000 Kelvin per second-- think about that for a second-- 10,000 Kelvin per second. So in a millisecond, something heats up by 10 K, which is ridiculous. Yeah, something like that. You don't really see any peaks. The heating is so fast that the defects don't even have time to find each other, annihilate, and release their stored energy. So we need to repeat the experiments at some lower heating rates, see what the peaks are, but if you go too low, you end up getting a lot of noise in your signal. So there's going to be some sweet spot that we haven't yet found in order to see this stored energy. So we're just at the very first experimental stage of trying to see can we extend reactor lifetimes. After doing simulations for a year that probably no one believes until you do an experiment, including me. But for now, it actually shows some sort of a trend, so it's just enough justification for us to buy one of these nanocalorimeters and start looking for real. So if you want to see, now I've taken you from basic material science, to where the field is going, to how do we keep our reactors running, in about two hours. I think that's the most compact introduction to nuclear materials I can possibly give you. So any questions on what you've seen today? AUDIENCE: Is that roughly the trend you would expect? MICHAEL SHORT: Yes, I would expect an up. That's the best I can say. As far as is it actually a line? Is it a curve?
I am not as brave or stupid as some of the other folks that will draw an arbitrarily shaped line through a single data point. So I'm not drawing a trend line yet. Yeah, any other questions? Yeah? AUDIENCE: Is it-- or I guess, you're making the assumption that one little spot in the reactor pressure vessel can say what the rest of the vessel has been exposed to? MICHAEL SHORT: Oh, not at all. Take a whole bunch. Then, instead of just doing Charpy coupons of one place, which is what we do now, you can get a map. We don't have that information now, but if you take pieces from all over the vessel, then you get an actual 3D map instead of a single point of saying, all right, well, we picked what we think is going to be the worst condition. How do you know? You don't. How do you know for sure? You make measurements like these. Any other questions? Yeah? AUDIENCE: What would a peak in this graph correspond to in terms of like-- How does it relate to some sort of damage effect? MICHAEL SHORT: We would expect that a peak would relate to a certain type of defect reaction occurring. When some type of defect gets high enough in temperature that it goes from stuck to mobile, and as that moves, it encounters whatever else is in the material, and will react with all the other defects nearby, decimating the population of that defect and slightly depressing that of the others. And as you go higher and higher in temperature, the slower and slower defects start to move. Yeah? AUDIENCE: So you can get rid of the defects by heating it quickly? MICHAEL SHORT: Mm-hmm. AUDIENCE: Would there be a way to self-repair the radiation damage to pressure vessels themselves? MICHAEL SHORT: That would be awesome. But the properties of the vessel are highly dependent on not just its composition, but the heat treatment that went into making it. If you heat that vessel, you both remove the radiation damage and remove the strengthening put in by the forging and heating process.
So you would have, if-- again, if you, let's say, replace the vessel, you have a new reactor. If you heat the vessel too much, it's no longer a code-stamped vessel. Pretty tricky spot that we're in, huh? But we're trying to science our way out of it. Well, it's a couple minutes after. I don't want to keep you longer, but I'll open on Thursday with a little story about how mass attenuation coefficients can get you out of apartheid South Africa. I'm serious. And then we'll move into dose and biological effects. |
MIT_2201_Introduction_to_Nuclear_Engineering_and_Ionizing_Radiation_Fall_2016 | 16_Nuclear_Reactor_Construction_and_Operation.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. TEACHING ASSISTANT (TA): So if you guys don't remember me, I'm one of your TAs. I'm TA number three, that's what I call myself. I'm Ka-Yen. And I'll be teaching your lesson today because, well, Mike's in Russia. So, yeah. Yeah, so I know you guys had your first exam a couple of days ago. How did that go? OK, sounds good. All right, so we won't talk about it. But because you guys just had a super intense exam, we just want to give you guys a break. So today, I'll be teaching you guys a little bit about nuclear energy. So this lesson won't be super in-depth. There won't be a lot of crazy intense math. Actually, there won't be any crazy intense math, because we just want to give you guys a break. You guys are going to be starting up full cycle on Friday with really cool topics like stopping power. So for now, we're just kind of like-- it's a refresher. A couple of fun facts. A couple of you guys might already know some of the concepts that I'm mentioning, because you guys are intelligent people. But I walked into MIT not knowing a single thing about nuclear energy. I was like, I wish someone could have told me these things. So that's what I want to do for you today, OK? So I'm going to be talking about the functionality and the benefits and the problems associated with nuclear. But first let's start with a very brief history in a nutshell. So between 1895 and 1945, that's when really cool people were developing nuclear science. So people like Madame Curie or like Fermi, et cetera.
They were all designing this nuclear science, like they were developing it, which is pretty cool. Most of this development happened between 1939 and 1945. Does anyone want to take a gander as to why? What? AUDIENCE: Manhattan Project? TA: Yeah, exactly. But what fueled the Manhattan Project? AUDIENCE: Nuclear weapons. TA: World War Two. Yeah. So World War Two was happening during those times, and they were trying to develop the atom bomb, which is why the majority of nuclear science was developed between these like five or six years. Then 1945 to 1960. We entered a phase of like, well, the war is over. Now what do we do with ourselves? So luckily we decided to redirect this science into using it for energy and harnessing it in a controlled fashion. So mainly the focus was actually for Naval submarines, but they also realized that we can use this for energy as well, for electricity as well. So there's a lot of really cool things that happened in between these years. So in 1951, the first nuclear reactor to produce electricity, the experimental breeder reactor EBR-1, was developed and designed and operated, and actually kind of worked. It was created by Argonne National Lab at a test site in Idaho. And they actually still exist. So if you want to go work there this summer, you totally can. And then in 1953, President Eisenhower created something called Atoms for Peace. So this is just a program that advocated using nuclear for things that were peaceful, such as electricity instead of nuclear weapons and stuff. Also, 1953 was the creation of Mark 1. So Mark 1 is the first prototype Naval reactor that was created. It was created in March. And then finally in 1954, the first nuclear powered submarine, the USS Nautilus, was launched, and is up and running. So lots of cool things happened during this time.
People like Westinghouse were creating nuclear reactors. I think the first one was called Yankee Rowe. It's a 250 megawatt electric nuclear power plant, which is not insignificant for a time like the 70s. So other different companies and other different countries were doing this as well. Basically, there was this huge boom in nuclear energy. So if you look at this little chart over here, this is nuclear reactor construction throughout the years. So if you look in this little chunk, you can see what a massive peak there was. This was when everyone was building nuclear reactors, people thought it was super jazzy, and everyone tried to jump on that. Unfortunately, all good things have to come to an end, though. From 1975 to 2002, which is about this chunk over here, you can see a massive decline. And you can see that nothing really happened between the 90s and the 2000s, other than the fact that we were all born. But no new nuclear reactors were being commissioned during this time. And then today, we're kind of-- I say we're back, but basically we're entering what people like to call a nuclear renaissance, which is between this chunk over here. You can see that there's been a slight increase in nuclear reactors being produced. But basically there's been a whole new push for creating more advanced reactors. And currently, China, India, and South Korea, they are the main players in this game. So China itself has 32 reactors operating at the moment, and has 20 more commissioned, like literally right now, which is kind of insane. So yeah, do you guys have any questions about this?
I promise you, it's actually a thing. And basically, we're looking for a way to produce electricity without creating such a large carbon footprint. So if you look at this chart over here, you can see that this is where nuclear lies in the amount of carbon that it produces per-- what's the unit? Per gigawatt hour of electricity. When you look at that in comparison to coal and natural gas, which is our two primary sources of energy at the moment, you can see that this is definitely more attractive. So the statistic is actually that nuclear creates 75 times less carbon emission than coal does, and 35 times less than natural gas does, which is incredible and amazing. So that's the main reason why we're going for nuclear. But there's other kinds of really good reasons. One is the amount of power output. You guys actually calculated this yourself in pset 1. You know just how much power or energy comes out from one fission reaction. So just so you guys can double check that you got that statistic right in your pset, it turns out that you get 3.5 million times more energy than burning one kilogram of coal does. So you can see that you definitely need a lot less fuel in a nuclear reactor than you do in a normal coal burning reactor. And then finally the last thing would be energy security. So one of the good things about nuclear is that it can serve as a good baseload source of energy. So if you're working in the energy sector, you probably see this chart all the time of like time versus like energy that's being consumed. And it's kind of like this fluctuating little mass that stays fairly constant, but at certain times of the day you need more energy than usual. So this is just the energy demand during the day. That's what this chart kind of crudely depicts. So nuclear power is able to provide a good baseload source. That means it can provide constant energy at a really high level all the time.
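That per-kilogram comparison is easy to sanity-check. A quick estimate, assuming the common textbook values of roughly 200 MeV released per fission and about 24 MJ per kilogram for coal (both assumptions, not figures from the lecture's pset):

```python
# Sanity-checking the "millions of times more energy per kilogram" claim.
AVOGADRO = 6.022e23          # atoms per mole
MEV_TO_J = 1.602e-13         # joules per MeV

atoms_per_kg = AVOGADRO / 235 * 1000          # atoms in 1 kg of U-235
energy_per_fission = 200 * MEV_TO_J           # joules per fission (assumed)
u235_energy_per_kg = atoms_per_kg * energy_per_fission

coal_energy_per_kg = 24e6                     # joules per kg of coal (assumed)

ratio = u235_energy_per_kg / coal_energy_per_kg
print(f"U-235: {u235_energy_per_kg:.2e} J/kg, about {ratio:.1e}x coal")
```

This lands around 8e13 J/kg for uranium-235, a few million times coal's heating value, consistent with the 3.5-million figure quoted above.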
So this is why we kind of want to replace coal and natural gas with nuclear, because it can take this role. Other alternative forms of energy might be better for the environment, it might be safer, and things like that, but it's not really able to do this. So for example, if you wanted to replace all the coal burning power plants with solar panels, if it's not sunny that day, you're kind of out of luck, right? Like you can't produce energy if it's not sunny outside. Similar for wind. If it's not windy outside, you're not getting any electricity. Luckily for nuclear, it doesn't have to rely on any of these factors. You can continuously produce energy. Right? So do you guys have any questions about what I've mentioned? Awesome. So now we'll talk a little bit about reactor types. I'll just tell you guys about some of the main ones and how they work. So how people like to divide up the reactor types is in generations. So generation one, which is all the way over there, that refers to the trial reactors. These are the ones that didn't really produce all that much electricity at all. They're more proof of concept kind of things, so that would be like the Mark I that I mentioned to you guys earlier. Now we move on to generation two. So generation two is actually the category that most US reactors fall into. So these were developed between the '70s and the '80s-ish, and these are the ones that are functioning mostly today. And then we have generation three, three plus, and four. So these are the new types of reactors that people are trying to build on to create several improvements, but we'll talk about them a little bit more later. OK? So I want to start off with light water reactors, because these are the reactors that are most common in the United States. So light water reactors, or LWRs, are mostly broken up into two subcategories: boiling water reactors and pressurized water reactors.
So how you guys can think about reactors is that honestly they're just kind of glorified steam turbines. That's what they're doing. So let's start with boiling water reactors. So boiling water reactors, or BWRs, comprise about 21% of the reactors that are located and working in the United States. So it's a really, really simple mechanism, and we can walk through that right now. So over here, this little nubbin right over here, this is the fuel core. So this is what the inside of a fuel core looks like, that picture over there. So the fuel core is basically just a bunch of rods of uranium, sometimes it's clad in something like zirconium, and there's also control rods to help slow down the process. So uranium undergoes what? AUDIENCE: Fission. TA: Yes, fission. So what gets released during fission? AUDIENCE: Heat. TA: And? AUDIENCE: Fission products. TA: Cool. And? AUDIENCE: Neutrons. TA: Neutrons, awesome. So those three things are all flying around inside the reactor core at the moment as the uranium undergoes fission. So the isotopes, we just kind of let them be. Like I don't-- I'm not completely sure what we do with them. We might filter them out, but I think they just kind of hang out there. The heat obviously goes to create power. We'll talk about that in just a second. But the neutrons come flying around. So those other neutrons can stimulate other fissions, and the control rods are there to make sure that there's not too many fissions happening in the fuel core at a certain time. Anyway, going back to the heat, the heat that gets created during these nuclear fissions, that goes and heats up the water. So this is just one loop of water, basically. So the water flows through the core and heats it up. It creates steam, so the steam goes and spins a turbine. The turbine creates electricity. And it comes back and gets recondensed. That's literally it. That's all that happens during a BWR. Yeah, that's actually just it.
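To put rough numbers on that single loop, here's a back-of-envelope sketch. The core power, thermal efficiency, and steam enthalpies are all assumed, typical-LWR values (steam-table figures near 7 MPa), not the specs of any particular plant:

```python
# Rough energy balance for the single-loop BWR cycle just described.
thermal_power = 3.9e9        # W, core thermal output (assumed)
efficiency = 0.33            # thermal-to-electric conversion (assumed)
h_steam = 2.772e6            # J/kg, saturated steam near 7 MPa (assumed)
h_feedwater = 0.81e6         # J/kg, feedwater near 190 C (assumed)

electric_power = thermal_power * efficiency
steam_flow = thermal_power / (h_steam - h_feedwater)   # kg/s to the turbine

print(f"electric output: {electric_power / 1e6:.0f} MWe")
print(f"steam flow: {steam_flow:.0f} kg/s")
```

Under these assumptions a ~3,900 MWth core gives on the order of 1,300 MWe, with roughly two tonnes of steam per second passing through the turbine.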
So a cool thing about the BWR is, because it's so simple, it's also incredibly-- well, not incredibly, but it is the cheapest option out there for creating nuclear power. One of the downsides is just that it might not be as energy efficient as it possibly could be, or not be able to create as much power as it possibly could if it was a cooler technology. But yeah. Oh, and another downside is that because we have the nuclear material interacting with the water and-- so this is a coolant pump. This is a coolant tube. This is basically connected to a lake or an ocean or some other source of cold water, and that runs through the primary loop to cool down the water and recondense the steam. If there ever is a breach between these two, the chance of leaking nuclear material into the environment exists. Like it's not high per se, but with BWRs there is a higher chance of leaking radioactive material into the environment. So that's one of the downsides of BWRs. Do you guys have any questions about this? Awesome. So I just want to show you guys this picture again, because here's the underside of a BWR. I make it sound like it's super simple and like a walk in the park, but this is actually the amount of technology that goes into one of these reactors. Like look at all those wires. I don't even know what they all do. But it's kind of insane. So the next kind of reactor that falls under the light water reactor category is the pressurized water reactors. So PWRs are actually more important, if you will, than BWRs. So remember, BWRs comprise about 21% of the reactors in the United States. PWRs comprise about 60% of the reactors in the United States. But they are functionally essentially the same, and it's just slightly more complicated. So over here we have our fuel core again, and again all it's doing is heating up water with its fission reactions. But this time this water is pressurized. So does anyone know why you would want to pressurize the water? Yeah?
AUDIENCE: So it doesn't boil? TA: Yeah, exactly. So when you increase the pressure, you're also increasing the boiling point of the water, and that allows you to function at even higher temperatures than if you're working with a BWR, which gives you more energy efficiency. You guys will learn all about that in 2.005, by the way. So yeah. It heats up this pressurized water, and this pressurized water goes into a second loop which, again, just heats up water. That turns into steam, that spins a turbine that creates electricity, gets recondensed, et cetera. And that's, again, all that there really is. So one of the upsides of using a PWR is, like I mentioned, the higher efficiency. But also the chance of leaking nuclear material into the environment becomes mitigated. Because you have two separate loops with the nuclear fuel being more isolated from the environment, if there is a breach between the condenser loop and the secondary loop, not a big deal. Nothing really bad happens. You'd have to have breaches in both the loops, which is very unlikely to happen. Yeah. So do you guys have any questions about those two? Yeah? AUDIENCE: What's the standard like operating temperature of these kinds of reactors? TA: I'm not completely sure, but if you Google it you should be able to find it very easily. OK. So this next picture is, again, just to show you that like I make it sound really simple and like a walk in the park, but it's really not. There's a lot going on. So this picture over here is basically just showing that there are a lot of redundancy systems inside these reactors. Like we don't just have one single primary loop and if it fails, it fails. We actually have four at the same time, and this is just called the n minus two redundancy, something like that. OK? So the next kind is something much cooler. It's called a heavy water reactor. Actually it's just a little bit cooler.
But the main heavy water reactor that everyone can kind of think of in their minds is CANDU, which is the one that's located in Canada. So the only difference between heavy water reactors and the light water reactors as I mentioned before is that it uses heavy water instead of light water. Does anyone know what heavy water is? AUDIENCE: Deuterium oxide. TA: Yeah, exactly. So it's just deuterium oxide. So remember-- I'm sorry, this might seem inane, but this is water, right? And this is heavy water, where the D is just a hydrogen with two atomic particles instead. So one proton and one neutron. So the reason why they decide to use heavy water instead of light water is because heavy water has a much lower absorption cross section than light water does. So what this means is that when neutrons are flying around in the reactor, there is a chance of one hitting a piece of fissionable material and undergoing fission. But there's also a chance that the water that surrounds it will absorb that neutron. So if that neutron gets pulled out of the system, you're not able to create any more fissions. This is actually kind of a bad thing, because the whole point of nuclear reactors is to create heat and fission. So we don't want those neutrons to be absorbed. You can see, if you look at those statistics, you can see that the absorption cross section of H2, or deuterium, is like 0.00052 barns, in comparison to H1, which is 0.332 barns. So I'm bad at math, but I think it's like 600 times less, right? Maybe? Anyway, so you can see why deuterium would be a good option for this. So because it's absorbing less-- because it has a chance of absorbing fewer neutrons as it undergoes its processes, you're actually able to use a lower enriched uranium, which is really great because that lowers fuel costs. Yeah. But the main downside of this is that, even though you're lowering your fuel costs, deuterium is really expensive.
It's about 1,000 or so dollars per kilogram, which is kind of ridiculous because a kilogram of water is really not much at all, you know? So you're counteracting the lower fuel costs with higher water costs. Also, because you're running your reactor with lower enriched uranium, you actually have to change out your fuel more often. That fuel gets spent more quickly, and I'll describe that in just a second, and therefore you just have to keep replacing it more often than you would for a normal light water reactor. Cool? Questions about this one? Oh, I forgot to mention, but aside from that, everything else with the heavy water reactors and the PWRs, they're the same mechanisms. And finally we're going to move on to breeder reactors. So breeder reactors are a really cool idea, and they were most popular between like the '50s and the '60s-ish in the very beginning of creating nuclear reactors. So breeder reactors are, again, essentially the same thing as the light water reactors I mentioned to you guys before. But instead, now there's two little chunks of extra material. So do you guys know what the difference is between fissile, fertile, and fissionable material? Cool. All right, let's start with fissile material. So fissile material is basically just material that is willing to undergo fission with a thermal neutron. OK. So basically when a thermal neutron gets absorbed by this fissile material, it's going to undergo a fission. Makes a lot of sense, right? So do you guys happen to remember what the energy of a thermal neutron is? You guys calculated this in pset 1. Huh? AUDIENCE: 1 eV? TA: Lower than that. AUDIENCE: 0.025? TA: Yeah, 0.025 eV. Like super low energy. And while we're at it, how do you calculate this? Boltzmann constant times T. Cool? Whew. So main examples of fissile material would be U235 and plutonium 239. There's four in total, but those are the two most important ones. OK.
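That 0.025 eV figure is quick to verify: the characteristic thermal neutron energy is just the Boltzmann constant times the temperature. A minimal check at room temperature:

```python
K_B_EV_PER_K = 8.617e-5   # Boltzmann constant, eV/K

def thermal_neutron_energy_ev(temperature_k):
    """Characteristic energy of a fully thermalized neutron: E = k_B * T."""
    return K_B_EV_PER_K * temperature_k

# At room temperature (~293 K) this lands right around the 0.025 eV quoted above:
print(f"{thermal_neutron_energy_ev(293.0):.4f} eV")   # 0.0252 eV
```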
So this is the main fuel that is inside a nuclear reactor, but it's not all just U235. Like you guys have heard of-- oh, shoot, what's it called? Enrichment, right? Enrichment is basically the amount of fissile material versus the amount of other fissionable material. So moving on to fissionable material. So fissionable material is just material that is able to undergo fission after the absorption of a more energetic neutron. So that's all it is. So an example of fissionable material that's inside the reactors at the same time is U238. So if a U238 absorbs a thermal neutron, it's not going to do much. But if it absorbs a neutron of about like, I would say, 2 MeV, then it's more willing to undergo fission. Cool? And finally we have fertile material. So fertile material is the basis for breeder reactors. But fertile material is just material that absorbs a neutron and then is able to become a piece of fissile material. So for our purposes, the main types of fertile material we use are U238 and thorium 232. So if you look at these processes you can see that U238 absorbs a neutron, becomes U239, undergoes a beta decay to become neptunium, and undergoes one more beta decay to become the beautiful plutonium 239. If we start with thorium 232 instead, it absorbs a neutron, becomes thorium 233, undergoes a beta decay to become protactinium, and then undergoes one more beta decay to become uranium 233, which is another fissile material, by the way. Cool? So that's what breeder reactors are doing. They're adding extra chunks of uranium 238 and extra chunks of thorium 232 into the reactor. So imagine-- if you're looking at the little fuel core, there's a bunch of neutrons that are flying around, and heat and other isotopes and things like that. So some of the neutrons will go and create other fissions with the material that's hanging out in the red.
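To keep the two breeding chains straight, here's a small sketch that writes out the nuclide sequences just described as plain data. Nothing is computed here — it's just the chains from the lecture laid out explicitly.

```python
# Each chain is a list of (nuclide, event that produces the next entry).
u238_chain = [
    ("U-238",  "absorbs a neutron"),
    ("U-239",  "beta decay"),
    ("Np-239", "beta decay"),
    ("Pu-239", "fissile fuel"),
]
th232_chain = [
    ("Th-232", "absorbs a neutron"),
    ("Th-233", "beta decay"),
    ("Pa-233", "beta decay"),
    ("U-233",  "fissile fuel"),
]

def show_chain(chain):
    """Print a chain as a single arrow-separated line."""
    print("  ->  ".join(nuclide for nuclide, _ in chain))

show_chain(u238_chain)   # U-238  ->  U-239  ->  Np-239  ->  Pu-239
show_chain(th232_chain)  # Th-232  ->  Th-233  ->  Pa-233  ->  U-233
```

Both chains end in a fissile nuclide, which is the whole point of surrounding the core with fertile blankets.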
But other neutrons might escape, and when they escape, instead of going into the water, dissipating, never to be seen again, or being reflected, they instead create more fissile material. So you can understand why this is kind of an attractive idea: you're creating your own fuel. You're able to work at a higher fuel efficiency because you don't need to add in as much fissile material as you would for a normal light water reactor. So people were really fascinated with this idea, like I said, in the '50s and '60s. Because back in the day they legitimately thought that we would run out of U235. But luckily in the '60s we discovered that we have a lot more uranium ore than we thought we did. We're probably not going to run out anytime soon. And after that discovery, people were not nearly as interested in breeder reactors. The reason being that the space taken up by all that extra fertile material could instead hold more fissile material creating more reactions, so it's not nearly as power efficient. And it's also slightly more expensive because you're not able to be as power efficient. And it also is better on paper than it ever is in reality. So on paper you're like, oh, this is great, because I can just create more fissile material. I never need to add more. This is never really truly sustainable. They always have to keep adding more fissile material, because it's not as perfect as they want it to be. Any questions about these things? Great. Cool beans. And then finally we're going to move on to generation four reactors. So generation four reactors are all the new kinds of reactors that people want to build. So the primary objective for these new designs of reactors-- like, the ones I just told you guys about, they're all good and well, but we want to make them better, right? We want to make them cleaner and safer and more cost effective.
Keep them robust yet sustainable, and also make them more resistant to people being able to divert materials into creating nuclear weapons. So yeah. Here are the six kinds of generation four reactor types that were deemed to be the most promising. So there's gas-cooled fast reactors, lead-cooled fast reactors, molten salt reactors, sodium-cooled fast reactors, very high temperature gas reactors, and supercritical water-cooled reactors. So I'm going to be honest with you guys. I don't know all that much about these and I don't want to spew out information that might potentially be false. So if you guys are interested, one, you can talk to other lab members or people in this department. I know-- mostly Mike's group, actually. A lot of people in Mike's group are working on molten salt reactors, so you guys can go ahead and ask them about that. Or if you're interested you can read more about them with this hyperlink that I included over here. Hopefully he will post the slides online and you guys can just click it and there's an awesome source all about these different kinds of reactors. OK? All right. Any questions? Hi. AUDIENCE: Do any of these actually exist, or is it all just theory? TA: I'm pretty sure that they're just kind of at the proof of concept stage right now. Like there aren't any that are producing electricity in the United States, at least. Cool. Good question. All right. So given all the things that we've mentioned before, like how great nuclear is and all the cool applications of it and how simple and easy reactors are, why aren't we using more of it? So currently in the US there are only 99 operating reactors that are producing electricity, which makes up about 19% or 20% of the total electricity output in the United States. The main players are still, you would imagine, coal and natural gas. So this is actually even worse in the rest of the world.
In the rest of the world, there are only about 440 reactors spread across 30 countries, and they produce only 14% of the global electricity. So these proportions are pretty low. And you're wondering, like, why aren't we using more nuclear power? What exactly is holding us back? So it turns out that the main things that are holding us back are just social, economic, and therefore government hesitance to start using nuclear power more often. So the main reason why we're a little bit hesitant to start using more nuclear power is because of safety issues. So none of us can argue that nuclear is like 100% safe. It actually does have some dangers associated with it, which is why it's so important that we're doing what we're doing. But if you guys look at this chart that I showed you guys in like the first or second slide, you'll notice that there are these events listed above. What are these words? Three Mile Island, Chernobyl, Fukushima, what are they? AUDIENCE: Nuclear accidents. TA: Yeah, so they're some of the biggest nuclear accidents that we've experienced in history. And you can see that after a nuclear accident there's a pretty steep decline in the amount of nuclear reactors being commissioned. So this is especially noticeable at Three Mile Island, which is essentially the first nuclear reactor accident that we all had to go through. You can see that, after Three Mile Island, there's this massive steep decline in the amount of nuclear reactors being commissioned. This is probably causal. We can pretty much assume that. And then you can see that once Chernobyl happened, there's also another massive decline. And again with Fukushima, once again, the amount of reactors being commissioned after the accident just declines dramatically. So I'm assuming you guys probably don't know exactly what happened during each of these accidents. Like you probably know that they exist, but what happened during them?
So if you do know, sorry, but if you don't know, you're about to know. So Three Mile Island, which is the first one, happened in 1979 on March 28. The Three Mile Island reactor is a PWR located in Pennsylvania. So during this time it underwent a core meltdown. The cause of this is just the fact that there was some kind of mechanical or electrical failure that prevented coolant water from being pumped into the primary system. So because there wasn't enough water coming in to cool the core, the core began to overheat. So as the temperature of the core rises, the pressure also rises. So they notice this and they're like, oh, shoot, we got to fix that. So luckily there is a little emergency relief valve, which you can see in this animation get opened up, and pressure gets released. So that's all good and well, but after the pressure's released, you should close the valve again and continue operation. Unfortunately, it became stuck. So this valve became stuck and they didn't realize that it became stuck because their equipment and their instrumentation weren't able to detect that. So they continued to operate, but this valve was open, so there was actually water getting leaked out of this primary loop. So because the water was getting leaked, they noticed that, oh, shoot, the pressure is dropping. Well, what do you do when the pressure's dropping? Apparently you have to make sure that there aren't too many vibrations that could damage the reactor, so they shut off the coolant pumps. Or they lowered the operation of the coolant pumps. So now there's water leaking out, so the core is getting hotter, but then they also took out the water that is usually used to cool the reactor core, so again it's also getting hotter. So this combination of events led to a core meltdown. So the core melted down. That's never a good thing, by the way. Yeah. And yeah, so the core melted down, and the reactor wasn't able to operate anymore.
But luckily at Three Mile Island there was containment that prevented radioactive isotopes from leaving the system. So they actually took a survey-- that was probably a long, long experience. They looked at the two million people who were around Three Mile Island at the time, within like a two mile radius or maybe a 30 mile radius or something like that, and they realized that they didn't get much dose at all. They collected about a total of 1 millirem more dose than usual. So to put that in perspective, an x-ray is six millirems. So really nothing happened at Three Mile Island other than they had to shut it down and they had to do expensive repairs. But people weren't hurt. The environment wasn't damaged. It wasn't that bad of a situation. I think the effect was bigger in concept than it was in actual damage. Questions about the Three Mile Island accident? All righty. The next reactor accident, the big kahuna I like to call it, is Chernobyl. So on April 26, 1986 an RBMK reactor located in Ukraine exploded. So what they were doing at Chernobyl during the time of this explosion is, ironically enough, running safety tests. They were running the reactor at low power to see how it behaves. So at low power-- I don't think they quite realized this, but the coolant pumps in the reactor were also powered by the electricity the reactor was generating. So if they're running this at low power, their coolant pumps weren't getting enough energy to properly cool the fuel core. So that was unfortunate, and they realized that this is a bad thing. So the reactor starts to go supercritical. So when they realized that the reactor was creating a lot more fissions than it should have been creating, they decided to insert the control rods. So thank goodness we have these high absorption control rods to slow things down, right?
For some reason, I'm not completely sure why they did this, but RBMKs have graphite tipped control rods. So as they lowered the control rods into the water, this graphite tip, which doesn't effectively absorb neutrons, displaced a bit too much water, and that caused the first explosion. So it went super duper critical and caused the first explosion at Chernobyl. Then, for some reason, like a couple of minutes later, there's a second explosion. They're not completely sure why the second explosion happened. To this day we can't really pinpoint why. It could have been hydrogen building up or just a ton of other fission reactions. But there's a second explosion that actually just blew this entire core apart. So that kind of stunk, but it did stop the whole reaction. Because the supercritical mass was all blown apart, it was no longer supercritical. It was fine. The whole debacle stopped. But unfortunately, there were a lot of radioactive isotopes being spread into the environment. First of all, Chernobyl didn't have the same kind of containment that Three Mile Island had, so these isotopes were just able to go everywhere. And also the second explosion had a lot of steam with it that carried these isotopes even further than they probably should have gone. So if you're looking at the statistics of Chernobyl, it turned out that 28 highly exposed reactor staff and emergency workers died from the radiation or from thermal burns during this time. And officials also believe that there were about 7,000 cases of thyroid cancer that occurred because of Chernobyl. They're pretty sure it was Chernobyl because these are all cases that happened in people who were less than 18 years old. So you guys know that no one really lives near Chernobyl at the moment. It's kind of been deemed unlivable because these radioactive isotopes literally went everywhere in this environment. Like it was in the water, it was in the plants. It's not safe to live there.
It's a pretty radioactive environment. Luckily we see that there are animals coming back now. If you look on NationalGeographic.com there's like little deer roaming around Chernobyl. But it's been about-- how long has it been, like 30, 40 years? People aren't advised to live there still. So Chernobyl was terrible. Questions? Yeah? AUDIENCE: What does it mean for a reactor to go supercritical? TA: Oh, yeah, sorry. So you guys will learn all about criticality in a little bit, but basically when I say supercritical it just means that there are way too many fission reactions happening. Yeah? AUDIENCE: You said it went supercritical because it wasn't being cooled enough or? TA: I think I might have skipped a detail. It wasn't being cooled enough, so the water was evaporating, and then it became supercritical because there weren't enough neutrons being slowed down or absorbed. My bad, I'm sorry. Good? All right. So the next reactor accident that we were alive for, which is cool, was Fukushima Daiichi. So Fukushima Daiichi happened in 2011 on March 11, and Fukushima is in Japan. So these reactors are boiling water reactors. So following a major earthquake, the things that were cooling the core, they broke. I think they're just like power generators on the side that did-- yeah. The earthquake broke the cooling pumps. So there wasn't enough water being able to go to the fuel core. This is a very similar problem. As you can see, in all these instances of reactor incidents, it's just kind of like the fuel core was misbehaving and we weren't able to get enough coolant water to it. So following the earthquake, these coolant pumps broke. They're like, oh, that's OK. What we can do is we have backup generators to continue running the pumps. It'll be all OK. Nothing will happen. We're all good. And then a tsunami hit.
So it was a-- I think a 15 meter tsunami, oh good gosh. So a 15 meter tsunami hit and it broke the generators, and at that point they're like, oh, no. So they had no other redundancy factors to continue pumping cool water into the fuel core. So again, there wasn't enough water in the core, so the decay heat built up and the fuel began to melt. So the fuel rods began to melt, but there's actually another additional bad thing. The water was evaporating, creating steam, and the fuel rods were coated with zirconium. So what you guys might not know is that when hot zirconium and steam interact with each other, that's not a good thing. The reaction produces hydrogen gas, which can explode. So as you can see, the reactors at Fukushima Daiichi began to explode. There were radioactive isotopes being spread out all around the country. You guys probably saw the lovely flow charts of the radioactivity flowing out from Japan and making it to California and contaminating your fish and stuff like that. But luckily, no one was directly hurt by burns or radioactive exposure. Cool? All right. Questions about Fukushima? Solid. So aside from these safety issues-- these safety issues that happen, they get elevated in the news quite a lot. So these are mainly the things that people who don't really have any background in nuclear energy hear about nuclear energy. They're like, oh shoot, well this thing is going to explode every 20 years. Like, why do we keep using this? Reactor accidents are actually pretty rare. If you think about it, it's been about 60 or 70 years, and we have 440 reactors operating around the world. There have been three main accidents. But because these are the things that people get ingrained into their minds-- thank you, news stations-- people think that nuclear reactors are incredibly dangerous.
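On the zirconium-steam point: the reaction is Zr + 2 H2O -> ZrO2 + 2 H2, and it's the hydrogen it produces that exploded at Fukushima. Here's a back-of-the-envelope sketch of how much hydrogen a kilogram of cladding can release. The stoichiometry and molar mass are standard values, not from the lecture, and real cladding is a zirconium alloy rather than pure Zr.

```python
M_ZR_G_PER_MOL = 91.22   # molar mass of zirconium, g/mol

def h2_moles_from_zr(kg_zr):
    """Moles of H2 released by fully oxidizing kg_zr of zirconium in steam.

    Stoichiometry: Zr + 2 H2O -> ZrO2 + 2 H2, so 2 mol H2 per mol Zr.
    """
    mol_zr = kg_zr * 1000.0 / M_ZR_G_PER_MOL
    return 2.0 * mol_zr

print(f"~{h2_moles_from_zr(1.0):.0f} mol H2 per kg of Zr oxidized")   # ~22 mol
```

With tons of cladding in a core, that adds up to an enormous amount of explosive gas, which is why hydrogen management matters so much in severe accidents.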
And that's why we have this social hesitance, which is why we aren't able to get enough government funding, and which is why there are all these bureaucracy loopholes to jump through, which is why nuclear power isn't more of a thing. Makes sense? Yeah. Another issue that's associated with nuclear power is nuclear waste. So what in the world do we do with it? So first of all, the main thing in nuclear waste is spent fuel. So like I mentioned to you guys, spent fuel rods are made out of uranium oxide. But after undergoing a bunch of fissions, these uranium particles get transformed into other isotopes that aren't fissionable or fertile or even remotely fissile, right? So we eventually have to replace them and add in new rods, and this is a process that happens every 18 or so months. I'm not completely sure on that statistic. But the main issue is, what do we do with all this material? So this material that comes out is pretty radioactive and it's also incredibly hot, so it can be dangerous if someone decides to come and eat it. So that's why we've got to figure out a way to dispose of it. So the primary way of disposing of the spent fuel is putting it into spent fuel pools. So spent fuel pools are just giant tanks of water that exist at the reactor. So these tanks of water are mixed with, I believe it's boron, which is a neutron absorber. They basically just put the spent fuel rods all the way at the bottom of the pool. So this pool's about like 20 meters high, I think. This is actually a really good solution because the water in the pool cools down the fuel rods and also prevents a lot of neutrons from escaping, because water is a really great neutron moderator. You guys all know this. It turns out it's actually fairly safe. Apparently you can go swimming on the top of the spent fuel pool and you'll be OK and not be exposed to too much radiation, if you want. So yeah.
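The reason fuel can eventually come out of the pool ties back to half-lives, exactly the kind of calculation from earlier psets. A minimal sketch — the isotope choice and its half-life below are illustrative values I'm supplying, not from the lecture, and real spent fuel cools off much faster early on because short-lived fission products dominate the initial heat:

```python
def remaining_fraction(t_years, half_life_years):
    """Fraction of a radioisotope left after t years: N/N0 = 2^(-t / T_half)."""
    return 2.0 ** (-t_years / half_life_years)

# Cs-137, a common long-lived fission product (half-life ~30.2 years),
# after 5 years of pool storage:
frac = remaining_fraction(5.0, 30.2)
print(f"{frac:.1%} of the Cs-137 is still there")   # roughly 89%
```

So the long-lived isotopes barely budge in a few years; it's the short-lived ones dying away that makes the fuel cool and calm enough to move into dry casks.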
So this is the main solution that people have been using for years, but they realize that this isn't super sustainable, because the amount of space that we have in these spent fuel pools is not infinite. We have way too much spent fuel to be able to just continue to store it in these spent fuel pools. So, like, shoot, got to find another solution. So the next solution was something called dry cask storage. So dry cask storage is just a way to keep this spent fuel surrounded by an inert gas. And it's held inside a cask, a cask just being probably like a steel drum that's bolted and welded shut, and then there's additional pieces of shielding around it like cement and lead, et cetera. So there's just like gigantic tanks, basically, that are sitting outside. So they put them outside the reactor. As you can see, it looks like it's sitting in a parking lot outside the reactor. And so this is an OK solution. So basically what they do is they take the spent fuel, let it sit in the pool for about a year or so, maybe two or three years. And then they're able to take it out because at that point it's significantly less radioactive-- because, you know, you guys know how to calculate this, too. You guys know the half life of different radioisotopes. You see that the radioactivity declines after a certain point. It's also cooler now, so they put it in these tanks, and they let these tanks hang out outside. And this is an OK solution, except for the fact that, again, we just have way too much spent fuel to be able to do this. It turns out that if you were to just keep all the spent fuel that we create in fuel casks, it'd take about 300 acres of land, which is absolutely insane. And obviously no one wants to give up that much land. Brief little side note: when I was googling images of dry cask storage and I was looking for the different types, what I found particularly disturbing was that there are only two types listed: vertical storage and horizontal storage.
Like there are no other solutions-- they're just giant tanks. Anyway, so people realized that we need to figure out yet another way to dispose of the spent fuel, hopefully a way that doesn't get in the way of people's backyards. So the idea was something called deep geological repositories. So deep geological repositories literally just means that they want to bury the nuclear waste very deep into the ground and never be able to retrieve it again. So the main push for this was-- well, first of all, it's a permanent method of disposal. They hope to put it in the ground and never have to think about it again, so therefore the regions that they choose to bury it in have to fulfill a lot of criteria. So these criteria include not having a lot of seismic activity. Because we are keeping this nuclear waste underground in these casks for like thousands of years, if there is a huge earthquake, those casks break, radiation gets everywhere. That's obviously not a good thing, so we want to make sure that doesn't happen. We also have to make sure that there's not a lot of water that leaks through, because the water can carry the radioisotopes into the environment, which is something else that we don't want. A lot of you guys chuckled when you saw Yucca Mountain. So Yucca Mountain is the primary push by the United States to find a deep geological repository somewhere in the United States so we can deal with our spent fuel. So in 2002 the main push for this began. They spent a lot of money-- like billions of dollars-- finding the perfect location to put our spent fuel. They had like nine different locations and they finally narrowed it down to Yucca Mountain. They're like, yes, this is the one, and they started digging down deep into Yucca Mountain and making this happen. But then things weren't as peachy keen as they hoped they would be. So Yucca Mountain is located in Nevada. People in Nevada weren't happy about this.
They're like, why are we getting stuck with nuclear waste? We don't even have nuclear reactors in Nevada. This is not fair. There was a lot of opposition. And because of the social opposition there was government opposition and many loopholes to jump through, and so it was just becoming a huge disaster. They also realized that it wasn't as geologically sound as they had hoped. There's a lot more groundwater running through and seeping through Yucca Mountain than they thought there would be, so it's actually not as safe as they had hoped. So there's a huge debacle. Basically the costs were rising, nothing much was happening, and there were a bunch of different things preventing progress from happening. And then in 2011, the Obama Administration just called it quits. There's no more government funding for Yucca Mountain. It's been abandoned, as you can see from this lovely Google picture. It's permanently closed. And you can also see that like 14 people went out of their way to review Yucca Mountain. But we're actually doing OK. It's at like 3.6 stars, just like a normal motel or something like that. So that has been abandoned. This idea has currently been abandoned in the United States. We're kind of still looking for other solutions, but we really don't have it figured out all that well. There is one other kind of way of dealing with nuclear waste, which is reprocessing. I personally think nuclear reprocessing is the coolest option out there. And basically reprocessing just means you take the spent fuel and you chemically separate out any material that could continue to be used-- any fissile material that could continue to be used in other reactors. So basically you take the spent fuel-- and it turns out that 96% of a used fuel assembly is recyclable. So you take the spent fuel, you take out what is useful, you throw away what's not useful, which is still radioactive waste that has to be put in a fuel pool or something like that.
But you have this precious fuel that you can put into another reactor. So this is actually something that France and other places in Europe, and Russia and Japan, use quite a lot-- they reprocess a lot of their fuel. For some reason the United States doesn't do it. The thing is, this is a really cool idea. It's like recycling. It's very clever. I personally think it's one of the cleverer solutions, but the issue is that it's a really expensive process. So reprocessing fuel takes a lot of money, and it turns out that the act of reprocessing fuel actually costs more than just buying a new chunk of uranium 235, which is why we don't do it. It's not economically sound. So yeah. Do you guys have any questions about anything I've mentioned, about disposal of nuclear waste? Almost done. OK. So all of these are issues. Like, we have a lot of nuclear waste to deal with. There is an inherent danger with using nuclear power. But the real thing that holds us back from just having nuclear power everywhere and creating, like, 90% of our electricity as we would hope is economics. So in this world, money really matters a lot. The economics of nuclear power is actually a really complicated topic and it changes depending on who you talk to. There are a lot of factors involved, so you can include certain factors into your calculations like, oh, the cost of building the reactor in the first place, or fuel costs, or operating costs, or maintenance costs, or the amount of money that comes out of damaging the environment. You can weigh all these different factors in, and everyone churns out a different number. If you look at this chart, yellow is nuclear power, the gray is coal, and the blue is natural gas. But basically, anyone you talk to, you can see that nuclear is not nearly as economic of a source of electricity generation as any of these other ones I mentioned. Unless you talk to the UK.
UK thinks it's OK. But everyone else is saying that it's not as money efficient. So where are all these costs coming from? So the primary costs actually lies with something called capital costs. So capital cost is basically the sunk cost of just building the reactor. Building reactors takes billions of dollars. It also takes tons of time. And because it takes a lot of time, interest rates also jack up that price even further. So basically it's just this massive investment they have to throw in immediately, and this is where most of the issues lie. Like it's really hard to go to an investor and be like, hey, can I have a billion dollars to build this nuclear reactor? It's going to take five years and it's going to take 20 more years for you to get your profit back. How does that sound? No investor is going to be like, yeah, that's a good idea. That's the main reason why we can't get nuclear up and running. We have a lot of plants and we have the possibility to create a lot of plants, but we just don't have the money to do so. Because it's a huge chunk of money, like I mentioned before, it takes a while to get your profit back. And also, if for some reason something happens, you have to stop building your reactor. You just lost a billion dollars. Like, there's no turning back, right? If you look at this chart over here, which is breaking up the cost of nuclear energy per kilowatt hour, I believe-- gigawatt hour? Kilowatt hour. Kilowatt hour. You can see nuclear, coal, and natural gas. So this giant white chunk over here refers to fuel. So if you can look at nuclear power, the majority of cost actually doesn't come from nuclear fuel at all. It's just about $0.01 per kilowatt hour, as compared to natural gas, which the majority of the costs of electricity actually come from the fuel. If you look at operation and maintenance, again it's not that large of a chunk. It's about the same as maintaining a coal power plant. 
But then if you look at the capital cost, which is the dark gray color, you can see how massive that is in comparison to building natural gas and coal firing plants. So yeah, I think that's the main thing. So because it is more expensive, we can't compete with other forms of electricity. People buy the electricity that's cheapest, not necessarily the electricity that's best for our grandchildren or something like that. Yeah, so that's why nuclear power isn't more of a thing, and that ends my pretty lengthy slide show. So do you guys have any questions about anything I mentioned? If you guys are interested in any of these topics, like if any of these things piqued your interest, I recommend going to NRC.gov. They have a lot of really cool information. Let me write that down, because I talk quickly. That's basically where I got the majority of my information for the slide show, and it is a reliable source. It might just be skewed a little bit pro nuclear, so just keep that in mind. But there are a lot of crazy sources out there on the interwebs. Take them with a grain of salt. Take NRC.gov with fewer grains of salt than usual. Or if one of these things really piqued your interest, you guys can take 22.04, which is a really cool class that's offered here, I think this spring, and if not this spring, next spring. Basically it's called nuclear power society. It's taught by a guy named Scott Kemp. He talks about all these things in a lot of detail, and slower. So yeah, cool. So thank you guys so much for coming. I know you guys could have slept an extra hour, but instead you heard me ramble for an hour. |
MIT 22.01 Introduction to Nuclear Engineering and Ionizing Radiation, Fall 2016. Lecture 28: Chernobyl Trip Report by Jake Hecla. The following content is provided under a Creative Commons License. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: We've actually got a special guest today. It's Jake Hecla, one of the seniors at NSE, who's gone to Chernobyl for the second time; he just returned from there two weeks ago. So if you remember, on Tuesday we went through all of the physics and intuition about why Chernobyl happened. And we left off on what it looks like today. So Jake is going to tell you what it looks like today. JAKE HECLA: All right, so first off I'm actually going to go over a bit of the reactor physics involved with the Chernobyl accident. I realize you guys have already covered this to some extent. But I didn't plan for that. So it's in my presentation. MICHAEL SHORT: It'll be a good review. JAKE HECLA: Yes, also I am a little sick. So I'm probably going to start coughing, apologies. I'm not dying. It's just a cold. AUDIENCE: Radiation poisoning. JAKE HECLA: I have heard that joke about eight times in the last two days. And I'm so done with it. But yes, it's not radiation poisoning. AUDIENCE: [INAUDIBLE]. JAKE HECLA: Yeah, all right, where is Chernobyl? Ah, dang it. Come on, no, go the other way, the other way, yep. OK, there. OK, so one of the first questions I got when I said I'm going to go and visit Chernobyl is, wait, isn't that a war zone? Not quite. The war in Ukraine is mostly in this portion over here. It's not entirely under rebel control in that area. And I say "rebel" in quotation marks because rebel means Russian. However, if you notice those arrows, Russian forces are built up all along that border.
So while it's not an active war zone, it's certainly not a place to be spending a large amount of time. That said, Chernobyl is north of Kiev by about, I don't know, let's see, 200, 250 kilometers. So it's not completely out in the sticks, right. Hopefully this gives you a good sense of roughly where it is. All right, so what does the Chernobyl nuclear power plant look like? It consists of four finished reactors. There are two unfinished reactors, units 5 and 6, that are not shown in this image. Units 1 and 2 are located at the right. Those were constructed in the 1970s and early 1980s. All of these reactors are the RBMK type. Units 1 and 2 operated with some success-- I'll go into that later-- for a number of years before the accident that happened in 1986. We also have some callouts up here that show some of the incidents that I'll talk about a little bit later in the presentation. But this just gives you a general idea of the layout. So it's two separate buildings for units 1 and 2. And then units 3 and 4 are in one building, all connected by this turbo generator hall. So this is where the generators that turn the steam from the RBMK into power are. This is one giant-- well, before the accident, this was one giant, not separated hallway, basically. So you could walk from one end to the other, theoretically. All right, so what is an RBMK? An RBMK is a light water-cooled, graphite-moderated, channel-type reactor. This means that it does not have a giant pressure vessel like you would see in a VVER or an equivalent American light water reactor. Why does that mean anything? Well, building giant pressure vessels is very difficult. If any of you have done research on manufacturing of nuclear reactors, you'll find out that the equipment necessary to construct a reactor pressure vessel is not actually something we even have in the US anymore. Is it Korea that does it for us now? MICHAEL SHORT: Japan Steel Works. JAKE HECLA: Japan that does it now.
In the Soviet times, it was very, very difficult for the Soviet Union to produce such pressure vessels at any kind of reasonable rate. So the RBMK got around this by using individual channels that were their own pressure vessels, so to speak. So the way this works is, let's just start on the cold side. You take in cold water; it goes here through these things. These are main circulating pumps-- MCPs, as you'll see them referred to later in the presentation. It goes up through the bottom of the core. These are the hot fuel rods. The water goes from liquid to steam phase as it's flowing through the channels, comes out the top, goes to the steam water separators. Steam goes to the turbines, turns the turbines, makes electricity. The important thing to remember here is that we've got a giant graphite core. The graphite is what is doing the moderating in this circumstance. It is not the water. This allows you to run very low-enriched uranium. So you could theoretically run an RBMK on, I believe, 1.2 percent enrichment; that was as low as they could go. But regardless, extremely low-enriched uranium, which is convenient if you don't want to waste a lot of time enriching uranium. The problem with this is that you have a giant core. If you recall the scattering cross-section for graphite, it's pretty small. And the amount of energy lost per collision is likewise also fairly small. So the core on this thing is, let's see, 11, yeah, 11.5 meters across. The core for an equivalent American reactor-- well, there is no real equivalent to this-- but for, let's say, an AP1000 reactor of equivalent electrical output, is about four meters across. So the core is huge. As I already discussed, this is what the individual pressure channels look like. So cool water comes in the bottom, goes by the fuel rods, pops out the top. The RBMK had some serious design flaws. So as I said, the core is huge. This allows local power anomalies to form really, really easily.
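To put a number on "small energy loss per collision": the average number of elastic collisions needed to slow a fission neutron from about 2 MeV down to thermal (~0.025 eV) is roughly ln(E0/Eth)/ξ, where ξ is the mean logarithmic energy decrement of the moderator. A quick sketch of why a graphite core has to be so big (standard textbook ξ values; this illustration is mine, not from the lecture):

```python
import math

# Mean logarithmic energy decrement xi for common moderators
# (standard textbook values).
XI = {"hydrogen": 1.000, "water": 0.920, "graphite": 0.158}

E0, E_TH = 2.0e6, 0.025  # eV: fission energy down to thermal

def collisions_to_thermalize(xi):
    """Average number of elastic collisions to slow 2 MeV -> 0.025 eV."""
    return math.log(E0 / E_TH) / xi

n_water = collisions_to_thermalize(XI["water"])
n_graphite = collisions_to_thermalize(XI["graphite"])
```

Graphite needs on the order of a hundred collisions where water needs about twenty, so neutrons travel much farther before thermalizing, and the core must be physically enormous.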
If you look at the core, one portion can be kind of neutronically separated from the others, because neutrons just don't make it all that far when diffusing across the core. So you can have very, very high power in one corner and very low power in the other, which is not something that can develop in a physically smaller core, whose characteristic scale is comparable to the neutron mean free path. Further, the in-core flux monitoring on the RBMK is seriously deficient. So there are a variety of neutron detectors that exist around the periphery of the core. But they're wholly insufficient to catch these local power anomalies. Chernobyl actually found out the hard way on this one. In 1982, unit 1 suffered a quote "localized core melt," not really something that can happen in an LWR, or really any other type of reactor. But a couple of the fuel channels actually experienced one of these local power anomalies and ended up melting. So if you go into the control room of unit 1, you can see that on the fuel channel cartogram on the wall, there are two of them that are just Sharpied out. And those are the ones that melted. Further, it has a positive void reactivity coefficient. What does that mean? Well, when the water boils in the core, the density of the water there goes down. And the power of the reactor ends up going up, because the water is primarily acting not as moderator but as a neutron absorber. This is bad for a whole variety of reasons. And they found out quite catastrophically in 1986 exactly why. Further, the system is extremely unstable at low power. So how did the 1986 accident happen? It was part of this thing called a turbo generator rundown test. The general idea is that if you have an off-site power failure, and your main circulating pumps no longer have off-site power, you somehow need to keep water flowing through the core, such that the fuel does not melt. The problem is that the backup, large diesel generators, are just that. They're large.
They're diesel. And therefore they're very, very slow to come online and come up to full power. The way that you can bridge this gap is by using the energy that you've stored in the turbines to effectively power the main circulating pumps until the diesel generators can come online. When unit 4 was fully constructed in 1983 and turned on for the first time, they had never actually done this test where they did a turbo generator rundown, despite the fact that it was required by law in the Soviet Union that all new power stations have this test performed. It was delayed until 1986; that's the long story short. The test procedure-- sorry for all the text on this slide-- is basically as follows. You would ramp the reactor down. So you would bring it from a normal thermal output of up to 2,400 megawatts thermal down to 600 or 700 megawatts. You'd bring the turbo generators up to full speed. So you'd store as much energy in them as you possibly could, then cut off the steam supply such that now you are just extracting energy from the spinning turbo generator. This would then be used to power the main circulating pumps, each of which took about 40 megawatts. There are eight of them total. I believe six could be used for normal operation. The rundown would take somewhere in the range of 60 to 70 seconds. And hopefully by this time your diesel generators would be turned on, pumping water, and everything would be fine. What happened in the test was decidedly quite different from that. So on April 26, 1986, they attempted to begin this test about six hours behind schedule, because there was an incident in another part of Ukraine in which a coal power plant went offline. So what happened was the authority for the grid in the area ordered that Chernobyl should stay online at full power for an additional six hours. They began the test by bringing power down.
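A quick back-of-envelope check on those figures (about 40 MW per pump, a 60 to 70 second rundown) shows the scale of rotational energy the coasting turbine had to supply. The number of pumps actually on the test bus is my assumption, since the talk only says eight exist:

```python
# Rough energy budget for the turbo generator rundown test.
# Figures from the talk: ~40 MW per main circulating pump, ~60-70 s rundown.
# The assumption that four pumps ran off the coasting turbine is mine.
pump_power_w = 40e6
pumps_on_test = 4
rundown_s = 70

energy_j = pump_power_w * pumps_on_test * rundown_s
energy_gj = energy_j / 1e9
print(f"~{energy_gj:.1f} GJ of rotational energy needed from the turbine")
```

Tens of gigajoules is a lot to extract from a decelerating rotor, which is why the pumps ran slower and slower as the test proceeded.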
But as a result of running for an extra six hours, they'd built up a significant amount of xenon precursors in the core. So when they started turning the power down, the power started going down, and down, and down. And they were unable to arrest its drop. What ended up happening was that the power dropped all the way down to 30 megawatts thermal. And the reactor operators kind of panicked. Their response to this, instead of canceling the test, was to pull out as many control rods as they could get their hands on. They did so. And this managed to rescue the thermal output of the reactor. And it bumped up to around 200 megawatts thermal. At this point, the reactor was in an extremely unstable state. Mind you, almost all of the rods that they could get their hands on were out of the reactor. The only thing keeping reactivity at a reasonable level was all the xenon that was built up in the core. At this point, they began the turbo generator rundown test. They shut off steam to the main turbine, or one of the turbines, after it was run up to full power, and then attempted to run the main circulating pump. The main circulating pump started drawing down the energy from the spinning turbine. And as a result, it ran slower and slower, meaning that the flow through the core was less and less; more water boiled, going into steam, which increased the reactivity. As a result, the power output of the core went up. It burned out more xenon. And the cycle continued. They noticed a power excursion about 40 seconds after they began the test, and at this point recognized they were in bad territory and hit the scram button. This would jam pretty much all of the available control rods into the core, including some emergency extras, and shut everything down. In most circumstances, this would be a fairly safe move. But in the case of the RBMK, it was most certainly not. RBMK control rods have a graphite tip on them.
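The xenon transient described above (iodine-135 banked up during operation decays into the strong absorber xenon-135, so xenon keeps rising for hours after a power cut) can be sketched with a simple Euler integration of the standard I-135/Xe-135 balance. The yields and half-lives are textbook values; the xenon burnup rate at full flux is an assumed illustrative number:

```python
import math

# Simple I-135 / Xe-135 balance, integrated with Euler steps (time in hours).
# Yields and half-lives are standard textbook values; the xenon burnup
# rate at full flux (SIGMA_PHI) is an assumed illustrative value.
LAM_I = math.log(2) / 6.57       # I-135 decay constant, 1/h
LAM_X = math.log(2) / 9.14       # Xe-135 decay constant, 1/h
GAMMA_I, GAMMA_X = 0.064, 0.002  # effective fission yields
SIGMA_PHI = 0.3                  # xenon burnup rate at full flux, 1/h (assumed)

def step(I, X, phi_rel, dt=0.01):
    """One Euler step; phi_rel is flux relative to full power."""
    dI = GAMMA_I * phi_rel - LAM_I * I
    dX = GAMMA_X * phi_rel + LAM_I * I - LAM_X * X - SIGMA_PHI * phi_rel * X
    return I + dI * dt, X + dX * dt

I = X = 0.0
for _ in range(10000):       # 100 h at full power: reach equilibrium
    I, X = step(I, X, phi_rel=1.0)
X_eq = X
for _ in range(800):         # 8 h after a power cut
    I, X = step(I, X, phi_rel=0.0)
X_after = X                  # xenon rises: banked iodine decays in, no burnup
```

In this toy model xenon roughly doubles in the hours after shutdown, which is exactly the hole the operators dug themselves into by running six extra hours and then dropping power.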
When jammed into the core, they caused a localized power increase, because the graphite is a great moderator. And it is displacing water, which is a great absorber. And as a result, after they made it a couple meters into the core, the increased pressure in the core from the power output, which was localized around the tips of the control rods, ended up shattering the control rod drive mechanisms. And instead of turning off, the cycle basically just continued; power continued to ramp up over the next couple of seconds. It eventually reached somewhere around 10 to 20 times the maximum rated thermal output of the system. And a massive steam explosion ended up ripping through the facility. It tossed the 2,000 ton biological shield on top of the reactor through the roof of the facility. It ejected a significant portion of the fuel, as well as the moderator, from the core. And it started a massive fire around the facility. Just to give you a good sense of scale, let's see if I've got the virtual laser pointer. That's a person right here, this little guy. This is the top of the biological shield, the Elena shield. And then this is a model, a cutaway model of the Chernobyl reactor facility with the shield, and with the flipped shield that went up through the roof and came back down. So as you can see, it was an utterly massive explosion. So the damage to the reactor was immediately quite catastrophic. Moderator blocks and fuel were spread all around the immediate area. If you look in this photo, it's rather difficult to see. But at the bottom of that column of smoke you can actually see the bottom of the biological shield. Kind of gives you a sense of the scale of the damage to the reactor. After the explosion happened, actually none of the operators believed the reactor breached confinement in any way. They didn't really have an immediate way of seeing what had happened. So they opened the door and went to the main turbo generator building to investigate the damage.
They believed it was perhaps one or two ruptured fuel channels, as had happened at, I believe, the Leningrad power station a few years earlier. In the few seconds that they were there, they received fatal doses, and they died in the hospital in May of 1986. This is a photo from the control room of reactor 4, showing a jammed control rod drive at the 6 meter position. So this was probably a rod coming up from the bottom, in that seven meters would be all the way out. Zero meters would be all the way in. The initial response to this, despite the fact that the reactor operators, before they died, did realize that it was a full breach of containment, was a response to an accident that was non-nuclear in nature. So when the fire department got a call from the authority at Chernobyl, the message that they received was, there's a fire at the reactor complex. As a result, what they showed up with was not equipment suitable for a hazmat situation in any way. That said, there's pretty much nothing that could shield anyone from the extremely high radiation field that one would encounter around the reactor in the immediate aftermath of the accident. But nonetheless, they were extremely vulnerable. It was night when this accident happened. As I mentioned, this happened at 1:23 in the morning. They actually couldn't see the extent of the accident. And they initially believed that it was just a fire on the roof of the turbo generator building. They attempted to fight the fire. And some of them actually succumbed to acute radiation poisoning, or acute radiation syndrome, almost immediately. A number of firefighters went up on the roof and just didn't come back. The aftermath of the accident, the cleanup, was handled by the Soviet Army. The people that were involved in this were known as the liquidators.
They would spend several minutes on the rooftop of the turbo generator building, or up near the reactor containment building. And they would receive, effectively, a lifetime dose; I believe their limit was 50 rem. And that would be a couple of minutes up there. Let's see, this photo doesn't show much evidence of it. But I suppose it shows a little bit of evidence of it. If you look around the bottom of the frame, you can actually see a little bit of hazing in kind of a periodic fashion. Let's see if I can get my pointer on it: here, here, here, here, and here. That's the gear that moves the film actually shielding the film from radiation exposure at those points. The radiation dose rate was so high up there that most of the pictures that were taken just didn't turn out whatsoever. A few smarter photographers used a whole lot of lead and were able to capture photos like this. But nonetheless, the dose rates were tremendous. The reactor structure itself was entombed in this thing that we call the sarcophagus. People in Ukraine call it the object shelter, or the shelter object. Construction started almost immediately after the accident, basically to keep radioactive graphite and fuel fragments from leaving the reactor structure and contaminating any more land. This is a photo from when it was under construction. Basically what it consisted of were steel and concrete walls that were erected around the reactor, using a variety of technologies. They at first attempted to use robots, which were almost immediately rendered useless by the high radiation field. Later on, they ended up using quote "bio robots," people, to move things into place. As I've said before, a whole lot of people died in this accident, both immediately and after, many during the construction of the sarcophagus.
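The ~50 rem lifetime limit quoted above translates directly into those minutes-long shifts. A simple stay-time calculation makes the point; the field strength here is an assumed round number for illustration, since actual rooftop fields varied enormously:

```python
# Time allowed on the roof before hitting the quoted ~50 rem limit.
# The 1000 rem/h field strength is an illustrative assumption, not a
# measured value from the talk.
dose_limit_rem = 50.0
field_rem_per_h = 1000.0

stay_minutes = dose_limit_rem / field_rem_per_h * 60.0
print(f"allowed stay time: {stay_minutes:.0f} minutes")
```

At an assumed 1000 rem/h, a liquidator burns through the limit in about three minutes, which matches the "several minutes on the rooftop" shift lengths described.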
Actually during the initial firefighting, or yeah, the initial firefighting measures, as the core remained burning for a number of weeks after the accident, they attempted to put it out with bags of sand dropped from helicopters. And during that effort, a helicopter actually ended up hitting one of the cranes that they were attempting to use for this and falling into the reactor, and a good portion of its remains remain entombed within the sarcophagus, from what I understand. All right, so my visit to Chernobyl, why would anyone ever want to go there? The primary focus was to learn about radiological decontamination at the site, basically how is contamination control managed, how do workers stay safe. Mind you, there are 3,000 people that go to work there every day. And what are the strengths? And what are the shortcomings of their radiological program? It was seven days total, four of which were on site. Other days were spent in Pripyat as well as in some classroom training, which I've got great photos of. So this is a slide stolen directly from the PowerPoint that I was sent on day one. But this was organized by three people, Carl Willis, Erik Kambarian, and Ed Geist. I've known Carl for a few years now. He lives in Albuquerque, New Mexico and is a radiation safety officer at a company that, I think they do advanced energy storage technologies. I'm not exactly sure. Erik is a firefighter and specializes in radiation emergency response. And Edward works at the Rand Corporation and does nuclear history and nuclear security research. This is the team this year. So we've got, let's start here. This is Lucas. He does environmental radiation monitoring. That's me after having no sleep because I did a 22.611 p-set the night before. I know. I looked so happy to be there. This is Nathan. Nathan builds organs, as in the musical instrument. He was along because he's always been interested in Chernobyl but didn't really have any experience in the field. This is Stanislaus. 
Stanislaus was our guide from the Chernobyl authority. He's been working at the plant since 1991 and is a fantastic resource for information. This is Ed, who I believe I talked about earlier. And this is Ryan Pierce. I'm not really sure what he does. This is Iris, who is a friend of Carl's and works in, I believe, radiation oncology. And then Danell Hogan, who's an educator based in Phoenix, Arizona who works with the DOE. By the way, if you're wondering why we're all wearing those absurd robes: we were about to go into another room and change into basically coveralls, which are easy to decontaminate. One of the activities that we did was real decontamination training. So that is a truck from the Novarka work site, and, by the way, that is me with a Geiger counter. I'm surveying for contamination. We then pressure washed the truck. As it turns out, there's a very specific technique one needs to use for pressure washing when you're dealing with a contaminated object, so as to not blast contaminated dirt back up onto the truck. This was a very interesting experience and also one that was very entertaining for the workers involved, because they don't actually wear that when they're doing decontamination, because their respect for safety protocols is, shall we say, a bit different. So they got to see us wear the absurd rubber ducky suits while they stood by smoking and laughing at us. I believe you can actually see the corner of that guy's jacket back there. And he's just wearing everyday clothes. We also went on the new safe confinement work site. So I haven't talked about it earlier in this presentation, but I suppose I should have. There is an object called the New Safe Confinement Arch that consists of basically a giant stainless steel structure on rails, which is being slid over reactor 4 so as to prevent the spread of any sort of contamination from it. This is known as the New Safe Confinement, or NSC, arch. It's been under construction since 2007.
And it just moved for the first time, actually while I was there, so on the 12th of November. It's supposed to last for 100 years. And hopefully in that kind of time span they'll be able to take apart what remains of the reactor. So actually what you see here is the corner of the sarcophagus. And then if you were to pan over this way a little bit, you'd see the right set of tracks for the New Safe Confinement Arch. We also did some classroom training. Admittedly, the classroom training was the most disappointing part of this. The instructors were not particularly interested in showing us really anything other than YouTube videos and other things that would just waste our time. That was the one part of this trip that I did not enjoy. Regardless, we did get to learn a little bit about the various hazmat getups that folks would wear when working on site. As I mentioned we also got to visit Pripyat, which I have more photos of later, as well as the reactor 4 control room, which is inside the sarcophagus, which is quite a treat to visit. The reactor 4 control room is not terribly contaminated as a result of the decontamination efforts. During the accident, the dose rate would have been somewhere in the range of 5 to 10 rem an hour. But today, it's in the range of 10 mrem an hour. This is the New Safe Confinement Arch that I've been talking about. This is actually a photo from 2015 with a clip art Statue of Liberty on it, but to give you an idea of how huge it is, 5 meters taller than the Statue of Liberty. And it's on rails, which is interesting. It's actually too big for wheels. So it's not on rails like with wheels on them. It's on rails with giant Teflon scoots. This is the inside of the turbo generator hallway. Remember that long building that I showed you that connected reactors 1 and 2 and 3 and 4? This is right outside the reactor 3 part of it. There are-- I think I'll show you these photos later. 
There are chunks of the turbine from reactor 4 that are down here in this area that are quite visibly radioactive and are very easy to detect if one swings a Geiger counter about. Within Pripyat we also visited hospital 126, which is where the firefighters went immediately after the accident, that is, the ones that made it off the roof. This garment here (we're not exactly sure what it was, because none of us were going to touch it, but we think it might have been part of a cover that would go under one's helmet) was extremely radioactive. It was contaminated with alpha, beta, and gamma, which is fairly unusual. Alpha contamination is fairly rare around the Chernobyl site, and it was somewhere around 50 to 75 mR an hour on contact. I think I already showed you photos of the control room. Yep, unit 4, that's the cartogram, which would display various parameters of the reactor for each fuel channel, depending on how one configured it. That's an external photo of the sarcophagus. And I think that's it for the PowerPoint slides. I do have a bunch of photos, though, that I think you will find interesting. I apologize if it's a little disorganized. This was put together relatively recently because, well, I just got back from Chernobyl. And then I went to a conference. And then I came back here and tried to get work done. Right, so these are in roughly chronological order. I'll go through and hopefully tell you guys a little bit about what the site's like. MICHAEL SHORT: [INAUDIBLE]. JAKE HECLA: OK, so this is on day one. We're driving to the Chernobyl nuclear power plant site. That blue and white bus is pretty much what everyone uses for transport around there. All right, so we're not really supposed to be taking photos in this area. So everything is tilted, because I was taking them out the window with the camera like that. That's the New Safe Confinement Arch. It's in considerably better shape than it was last year at this time.
They have done a fantastic job of putting it together. It's actually almost a year ahead of schedule. There it is, again. You can see the sarcophagus with the new support wall, which is that right there. All right, so this is our excursion into Pripyat on our second day. So this is the group led by Stanislaus. As you can see, there's not very much left. Just in comparison to what we saw last year, the number of buildings that had been taken apart for scrap metal, illegally, of course, was pretty huge. In, I don't know, 5 or 10 years, it's going to be very difficult to see much of Pripyat at all, frankly. So this is a standard apartment block in Pripyat. As you can see, a lot of broken windows. A lot of bricks have fallen off. These things are pretty dangerous. A lot of tourists do go into them. If one decides to do a tourist expedition to Chernobyl-- which I don't particularly recommend-- don't go in the apartment blocks. This is on the way to one of the schools. This is Lucas who has more detectors than anyone I've ever met. He was wearing 7 at the time. So I had to take a photo of him. This is Iris imitating some of the graffiti, which unfortunately has popped up all over the place. Pripyat itself is really decaying quickly. As I've said, there's a huge problem with looting. In addition, there's a huge problem with graffiti and vandalism. It's really depressing, honestly, to go there and see how much has changed just in a year. So despite my earlier warning, we did go in an apartment block. This is just a measurement showing that the background up there actually is not terribly high. Yeah, that's Iris, not particularly safety conscious at times. This gives you a good idea of how far away Pripyat is from the reactor. That is not very far, about two kilometers. So you can see the New Safe Confinement Arch to the top left of the detector. 
Background there is about, in this particular apartment block, at this particular time, was about four to five times what you would see in downtown Cambridge. There are wild animals in Pripyat and the rest of the exclusion zone. This is a huge problem. So despite the fact that the cats are very cute and the puppies are very cute, they also have rabies, not all of them, but a very large number of them. In 2009, five workers were injured by, I kid you not, a rabid wolf. There's a YouTube video of this you can look up on your own time if you so wish. This is because Ukraine doesn't have a lot of money. So they have not been able to continue with their vaccination program. They actually use baits that have a rabies vaccine in them to normally suppress rabies in wild animal populations. But Ukraine doesn't have any money. They killed the program about five years ago. And as a result there's a huge, huge problem with, especially rabid foxes. Because everyone thinks foxes are cute, especially tourists. And foxes, when they get rabies, some of them go through a stage in which they appear to be very friendly. As far as I know, no one has gotten rabies from a rabid animal at Chernobyl. But it's certainly a possibility. So Stanislaus was being a very bad example by feeding one of the wild cats. So that's why I took a picture of it. This is one of the many memorials that you'll find in downtown Slavutych. We stayed in the city that was built, effectively, as a replacement for Pripyat. It's actually a fantastic town. I really enjoy Slavutych. And as one might expect, there are memorials everywhere because the entire population is basically the folks that were removed from the town of Pripyat. This is the train we would take every day. Slavutych is separated from Pripyat by a little isthmus of Belarus that drops down. So that's bad, because you can't get a visa to Belarus. It's not really a thing you can do as an American. I mean, you can apply for one. 
You'll just never hear back. Belarus is Europe's last dictatorship. And it's not someplace one wants to go for any reason. So when we would get on this train, the doors would shut, we would go through Belarus, and we would all pray that it didn't break down, because then we would have to spend some time in a Belarusian prison. But yeah, this is the train yard, bright and early. The various zones on the reactor site for cleanliness, so to speak, radiological cleanliness, are separated by these benches, effectively, that you have to step over, so that it reminds you that, hey, this is the clean area. You need to be wearing boot covers and at least these garments in order to go here. Sideways, for some reason, this is part of the-- all right, I don't know why these are all sideways. But regardless, you get the picture. If you notice on the top of the screen, which should be the left of the screen, let's see if I can rotate it. That's a giant puddle of water. This place is falling apart. Despite the fact that they have money from the European Bank for Reconstruction and Development for the New Safe Confinement Arch, the Chernobyl site itself does not have a lot of money. And as a result things are falling apart. And the amount of contamination that is getting into places where it very much shouldn't be, like this quote "clean area," is fairly high. That puddle of water was pretty toasty, something like 5 to 8 mR an hour on contact. That's generally quite bad. As I said, water is coming in everywhere. And in this case they were using leg covers to catch the water. Another one of the hallways that had water leak into it, and therefore all the lights are out. That's the footwear we were issued, which breaks after walking about a kilometer, which is not particularly encouraging if one wants to take their boots back. Again, walking down the hallway, you notice this gold corrugated material that you see on the sides? It's aluminum that is anodized.
And it's placed there because it covers up all of the sheets of lead that were affixed to the wall. What happened is, in the aftermath of the accident, the entire facility was just hopelessly contaminated. And you can scrub all you want, but ultimately it's very difficult to get radioactive contamination off of things. So what they ended up doing was getting it down to a somewhat acceptable level, and then fastening sheets of lead over it, and then fastening this stuff over the top of that. This is unit 2's control room. So this is what a fully fleshed-out control room looks like. Unit 2 was shut down in 2000. The reactors actually continued operating after the 1986 Chernobyl accident, because Ukraine was in such desperate need of power. As a result, the fuel is still fairly hot. It's producing a reasonable amount of decay heat. And there is a crew that sits in the control room at all times monitoring it. That's Nathan. This is actually, I took a picture of this because it's a very good diagram of the Chernobyl reactor that's simplified. It shows the core and the relative locations of the steam water separators. OK, as I said, there is a team that stays in there. So there are people that work on site and work in the reactor control rooms, which I have to imagine has to be a bit of a surreal job. This is inside the turbo generator hall. Those chunks that you see here are from the turbo generator of reactor 4. So they're quite contaminated and quite easy to detect. Actually, there is a good story behind this. So we were trying to figure out exactly what was making the dose rate so high in the area when we were up there. So we got a group of us to stand in a circle, minus one person. So there's a gap. We got a person in the center with a scintillator. And we all kind of rotated around until we found in which direction the scintillator reading was highest, so basically we made like a 2 pi meat shield. It worked fairly well.
It thoroughly baffled all of the guides that were with us. They were like, what are you doing, linking arms and spinning around? Regardless, that is a good way to find sources if you're in a pinch. This is looking the other direction from that same vantage point as in the last photo. Behind those walls with the little radiation signs on them are chunks of the ventilation stack, which is fairly iconic. They've been fairly well decontaminated. At that fence area, the dose rate-- or more accurately, exposure rate-- was 10 mR an hour. And yeah, that's another close-up photo of it. And I managed to sneak my phone over the top of it and get a good shot. Unfortunately, none of the pieces are uncovered. I would really like to see the orange and white of the ventilation stack. But I did not. Again, same shot-- slightly different shot-- of the turbo generator hallway, looking in the units 1 and 2 direction. More detritus. Oh, here's a slightly better close-up of those components. One of the interesting things I found out about the facility is the way that access is controlled. So instead of having an RFID card or something like that, they've got cameras and operators. So what you see here is a camera. So Stanislaus would scan a badge that would automatically call someone who is an operator. Stanislaus would say, hey, I'm at this door. I want to go into this location. Will you let me in? And then they would look at the camera, determine that yes, that is Stanislaus, he does want to go into this area. And then they would approve it and let him through. Walking through the corridors of the sarcophagus. You can actually see up here those lead sheets I was talking about. I don't know how thick they are on there, or how close they are to falling off, for that matter. But I'm sure several thousand pounds of lead is right there alone. These are main circulating pumps, about one-half of the main circulating pumps for reactor 2.
And each one of these things takes something around 40 megawatts to actually operate. These are aligned differently and are of a different type than the ones used in reactor 4, because reactors 1 and 2 were of an earlier design. Ironically enough, reactors 1 and 2 actually don't have all of the safety measures that reactor 4 does, which is a bit terrifying to think of. Yet more photos-- right, as I said, there's a dog problem at Chernobyl. This is right outside the entrance to a clean facility. And occasionally these dogs would wander in. Unfortunately, dogs are large furry piles of easily airborne contamination. So they would walk in. People would go to shoo them out. They would shake their coats or whatever. And then, clean up on aisle three, because now there's contamination everywhere. More sad-looking puppies. New Safe Confinement yet again. To give you an idea of scale, let's play find the workers. Those are workers right there. Can you guys see them? AUDIENCE: Barely. JAKE HECLA: Yeah, they're really, really small next to this facility. AUDIENCE: [INAUDIBLE]. JAKE HECLA: Yeah, I think I've got a slightly better shot here. That's a guy right there. There's also another dude right here. Yeah, this place-- this structure-- is absolutely enormous. It's really hard to wrap your head around exactly how large it is. So this is in an area called the local zone. So it's the immediate several hundred meters surrounding the reactor. As you can see, the hazmat equipment that we're wearing there is significantly different from what we would wear inside the reactor or inside the sarcophagus, mostly because the threat from dust in this area is pretty huge. As you can see, we're walking on fill. It's actually meters and meters of fill, because the ground was so contaminated that they scraped it away, put fill in there, put fill on top of that, because just the residual contamination was enough to make it hazardous to use as a work site.
Though they're not shown in this image, or I believe any images here, there are little concrete and lead structures that these workers take breaks behind, because you have a dose limit that is enforced while you're working there. And if you're going to take a smoke break, as a huge fraction of the population of Ukraine smokes, or you're going to take a break of another sort, they don't want you racking up dose during that time. So you basically hide in a little concrete shack for a while with a few inches of lead between you and the reactor. Yet another shot inside the sarcophagus-- Ed explaining something about which I'm not sure. As you can see, these places are not exactly in the best condition on the inside. And one thing that did concern me a lot was the amount of dust that was very, very easy to kick up in the area. This is inside the control room of reactor 4. Selfie, which I didn't mean to have in this album. Control room 4 hasn't changed a whole lot since last year. But there is a dividing wall that actually separates reactors 3 and 4 that's being put together. And it cuts right through the edge of the fourth control room. And for a while, we actually didn't know whether or not we were going to be able to visit it at all because of ongoing construction. I'm very glad that we were able to. Most of the instruments have been removed. It's unclear as to why. We've been told that some of it was because of contamination. But the pattern doesn't really make sense. This is the reactor control room cartogram-- excuse me, reactor core cartogram-- which, as I said, was lit and could display various parameters regarding the various fuel channels. There are really only these control rod indicators left. And we believe that actually some of these might not be original. Someone might have stolen the real one and put another one back. I don't have any evidence to support it.
But I suspect that there's significant looting that happens in here. This is a rather entertaining photo. That means smoking area. That's in the control room. You shouldn't be smoking. You shouldn't take your mask off for any reason. That's a high gamma radiation warning sign. It was, in fact, not that high of a dose rate, somewhere in the range of 30 mR an hour. We also explored a little bit outside of the more formal part of the reactor premises. Namely, we went to this place called Buriakivka 2, which is a burial facility for waste from the reactor-- not waste as in nuclear waste, but waste as in chunks of metal and other things that are contaminated and therefore were removed when the New Safe Confinement Arch was being built, or when, let's say, they were building the separation wall between reactors 3 and 4. And there were some incredibly hot spots. I think I have some more photos later. But just under that little triangle, the dose rate was 150 mR an hour, so 0.15 rem an hour. It's extremely high. And that was just in a field, basically-- not controlled, not patrolled, no warning signs. Getting dressed up, lots of fun. That was the truck we were sent to decontaminate. Honestly, it wasn't particularly contaminated in the first place. They weren't going to give us real fun things to play with. That's all of us. And then, as you notice, the real workers here are not wearing even a fraction of what we are. More decontamination-- yeah, see, barely anything. These are chunks of metal that have come out of the reactor. We're not exactly sure what parts they are. No one was really able to answer our questions about them. But they were also rather contaminated, somewhere in the range of 50 to 75 mR an hour on contact in some spots. More photos of Chernobyl, or of Chernobyl from Pripyat. MICHAEL SHORT: Dr. Jake, I want to take a quick break and ask folks if they have any questions on what the experience was like [INAUDIBLE]. JAKE HECLA: Yes?
AUDIENCE: Did you suffer any adverse health effects or anything? JAKE HECLA: Only the cold I picked up on the way back. The total dose that I received on this entire expedition, minus the flights there and back, was 0.6 millisieverts. So, effectively, nothing. MICHAEL SHORT: [INAUDIBLE]. JAKE HECLA: Well, in all of the high radiation areas that we were in, we were encouraged to walk quickly, is basically what it comes down to. The time portion of time, distance, and shielding was emphasized. Further questions? Yeah. AUDIENCE: Are there [INAUDIBLE] radiation area versus [INAUDIBLE]. Do they use the same levels [INAUDIBLE]? JAKE HECLA: No, radiological control in Ukraine is a totally different game than it is in the US. The same types of controls that exist in the US just don't exist at that site. For areas that are immediately dangerous to your health-- you know, 10 rad an hour, something like that-- from what I understand, there are locked doors that prevent one from accessing those accidentally. And there are warning signs in a variety of locations. But I don't think that there is the same standardization of 5 mR an hour as a radiation area, et cetera, et cetera. Yeah. Further? MICHAEL SHORT: And are folks still going to be running these tours pretty continuously? JAKE HECLA: No. You won't be able to see the sarcophagus itself because it will be contained within the New Safe Confinement Arch pretty much now. It's 75-- right, let's see. Last I checked, the New Safe Confinement Arch was 75 percent of the way over the reactor itself. Regular tourist visits to Pripyat will continue to happen. This program that I went on is something very special. Carl, Ed, and Erik have done this type of thing once before. That was the trip I went on last year. And they intend on doing it once a year as long as they can. But that's pretty much your only opportunity to get that kind of access to the reactor. It takes a lot of work.
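For context on the 0.6-millisievert trip dose quoted above, here's a quick comparison against natural background. The ~3 mSv/yr background figure is a commonly quoted ballpark that varies considerably with location and altitude; it is an assumption for the example, not a number from the talk:

```python
# Put the 0.6 mSv trip dose in context against natural background.
# Assumption: ~3 mSv/yr average natural background, a commonly quoted
# ballpark that varies a lot by location and altitude.

BACKGROUND_MSV_PER_YEAR = 3.0
trip_dose_msv = 0.6

equivalent_days = trip_dose_msv / BACKGROUND_MSV_PER_YEAR * 365
print(f"0.6 mSv is roughly {equivalent_days:.0f} days of typical background")
```

In other words, under these assumptions the whole expedition was comparable to a couple of months of just living on Earth.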
MICHAEL SHORT: Anyone else have questions for Jake? It's rare to meet someone that's actually gone to Chernobyl to [INAUDIBLE]. Yeah? AUDIENCE: Do you think it's haunted? JAKE HECLA: No. There's a rather haunting location, though-- the Khodemchuk Memorial. So when the accident happened, there was one guy who was-- depending on how you look at it, either lucky or unlucky-- in that he wasn't killed by radiation poisoning. He was killed by being flattened in the explosion. And his remains are within the reactor and within the sarcophagus; they never really recovered him. Better than dying of radiation poisoning, but nonetheless not a fantastic way to go. The memorial that is within the sarcophagus is pretty interesting to visit, and rather somber. It makes you reflect a little bit on the enormous human toll that the accident had.
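The exposure-rate figures quoted throughout the talk (in mR per hour) can be put in SI terms with the usual rules of thumb: for gamma rays, 1 R of exposure corresponds to roughly 1 rem of dose, and 1 rem = 10 mSv. A sketch using the 150 mR/h Buriakivka hot spot:

```python
# Quick sanity check on the exposure-rate figures quoted in the talk.
# Assumption: for gammas, 1 R of exposure ~ 1 rem of dose (quality
# factor 1), and 1 rem = 10 mSv -- rules of thumb, not exact values.

MR_PER_R = 1000.0          # milliroentgen per roentgen
MSV_PER_REM = 10.0         # millisievert per rem

def mr_per_hr_to_msv_per_hr(mr_per_hr):
    """Convert an exposure rate in mR/h to an approximate dose rate in mSv/h."""
    rem_per_hr = mr_per_hr / MR_PER_R   # ~1 rem per R for gammas
    return rem_per_hr * MSV_PER_REM

# The hot spot at Buriakivka 2: 150 mR/h
hot_spot = mr_per_hr_to_msv_per_hr(150)   # ~1.5 mSv/h
# Dose from standing on it for ten minutes:
ten_minutes = hot_spot * (10 / 60)        # ~0.25 mSv

print(f"150 mR/h is about {hot_spot:.2f} mSv/h")
print(f"Ten minutes there: about {ten_minutes:.2f} mSv")
```

A few minutes standing on one such spot would be a noticeable fraction of the dose for the whole trip, which makes clear why hot areas are walked past quickly.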
MIT 22.01 Introduction to Nuclear Engineering and Ionizing Radiation, Fall 2016
Lecture 26: Chernobyl-- How It Happened

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: All right. So like I told you guys, Friday marked the end of the hardest part of the course. And Monday marked the end of the hardest Pset. So because the rest of your classes are going full throttle, this one's going to wind down a little bit. So today, I'd say, sit back, relax, and enjoy a nuclear catastrophe, because we are going to explain what happened at Chernobyl now that you've got the physics and intuitive background to understand the actual sequence of events. To kick it off, I want to show you guys some actual footage of the Chernobyl reactor as it was burning. So this is the part that most folks know about. [VIDEO PLAYBACK] - [NON-ENGLISH SPEECH] MICHAEL SHORT: This is footage taken from a helicopter, from folks that were either surveying or dropping materials onto the reactor. - [NON-ENGLISH SPEECH] MICHAEL SHORT: That was probably a bad idea. "Hold where the smoke is." We'll get into what the smoke was. - [NON-ENGLISH SPEECH] [END PLAYBACK] MICHAEL SHORT: So that red stuff right there-- that's actually glowing graphite, amongst other materials, from the graphite fire that resulted from the RBMK reactor burning after the Chernobyl accident, caused by both flaws in the physical design of the RBMK reactor and absolute operator stupidity and neglect of any sort of safety systems or safety culture. We're lucky to live here in the US, where our worst accident, at Three Mile Island, was not actually really that much of an accident. There was a partial meltdown.
There was not that much of a release of radionuclides into the atmosphere, because we do things like build containments on our reactors. If you think of what a typical reactor looks like-- like if you consider the MIT reactor as a scaled-down version of a normal reactor-- let's say you have a commercial power reactor. You've got the core here. You've got a bunch of shielding around it. And you've got a dome that's rather thick that comprises the containment. That would be the core. This would be some shielding. So this is what you find in US and most other reactors. For the RBMK reactors, there was no containment, because it was thought that nothing could happen. And boy, were they wrong. So I want to walk you guys through a chronology of what actually happened at the Chernobyl reactor, which you guys can read on the NEA, or Nuclear Energy Agency, website-- the same place that you find JANIS. And we're going to refer to a lot of the JANIS cross sections to explain why these sorts of events happened. So the whole point of what happened at Chernobyl was a desire to see if you could use the spinning-down turbine, after you shut down the reactor, to power the emergency systems at the reactor. This would be following what's called a loss of off-site power. If the off-site power or the grid was disconnected from the reactor, the reactor automatically shuts down. But the turbine, like I showed you a couple weeks ago, is this enormous spinning hulk of metal and machinery that coasts down over a long period of, let's say, hours. And as it's spinning, the generator coils are still spinning and still producing electricity, or they could be. So there was a desire to find out: can we use the spinning-down turbine to power the emergency equipment if we lose off-site power? So they had to simulate this event.
So what they actually decided to do is coast down the reactor to a moderate power level or very low power and see what comes out of the turbine itself, or out of the generator rather. Now, there were a lot of flaws in the RBMK design. And I'd like to bring it up here so we can talk about what it looks like and what was wrong with it. So the RBMK is unlike any of the United States light water reactors that you may have seen before. Many of the components are the same. There's still a light water reactor coolant loop where water flows around fuel rods, goes into a steam separator, better known as a big heat exchanger. And the steam drives a turbine, which produces energy. And then this coolant pump keeps it going. And then the water circulates. What makes it different, though, is that each of these fuel rods was inside its own pressure tube. So the coolant was pressurized. And out here, this stuff right here was the moderator composed of graphite. Unlike light water reactors in the US, the coolant was not the only moderator in the reactor. Graphite also existed, which meant that, if the water went away, which would normally shut down a light water reactor from lack of moderation, graphite was still there to slow the neutrons down into the high-fission cross-section area. And I'd like to pull up JANIS and show you what I mean with the uranium cross section. So let's go again to uranium-235 and pull up its fission cross section. Let's see fission. I can make it a little thicker too. So again, the goal of the moderator is to take neutrons from high energies like 1 to 10 MeV where the fission cross section is relatively low and slow them down into this region where fission is, let's say, 1,000 times more likely. And in a light water reactor in the US, if the coolant goes away, so does the moderation. And there's nothing left to slow those neutrons down to make fission more likely. In the RBMK, that's not the case. The graphite is still there. 
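That factor-of-1,000 claim is easy to sanity-check with representative values read off an evaluated-data plot like the one in JANIS. The cross sections below are ballpark figures for U-235 fission (thermal versus ~1 MeV), not authoritative data:

```python
# Illustrative check of why moderation matters. Cross sections are
# representative values for U-235 fission at thermal (~0.025 eV) and
# fast (~1 MeV) energies, eyeballed from evaluated data plots.

SIGMA_F_THERMAL = 585.0   # barns, U-235 fission at ~0.025 eV
SIGMA_F_FAST = 1.2        # barns, U-235 fission near 1 MeV

ratio = SIGMA_F_THERMAL / SIGMA_F_FAST
print(f"Fission is roughly {ratio:.0f}x more likely for a thermal neutron")
```

About 500x with these numbers-- the same order of magnitude as the "roughly 1,000 times" in the lecture; the exact ratio depends on which fast energy you pick.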
The graphite is cooled by a helium-nitrogen mixture because of the neutron interactions in the graphite that's doing the slowing down-- we've always talked about what happens from the point of view of the neutron. But what about the point of view of the other material? Any energy lost by the neutrons is gained by the moderating material. So the graphite gets really hot. And you have to flow some non-oxygen-containing gas mixture, like helium and nitrogen, which is pretty inert, to keep that graphite cool. And then in between the graphite moderator were control rods, about 200 of them or so, 30 of which were required to be down in the reactor at any given time in order to control power. And that was a design rule that was broken during the actual experiment. And then on top of here, on top of this biological shield, you could walk. So the tops of those pressure tubes, despite being about 350-kilogram chunks of concrete-- you could walk on top of them. That's pretty cool, kind of scary too. So what happened, in chronological order, was, around midnight, the decision was made to undergo this test and start spinning down the turbine. But the grid operator came back and said, no, you can't just cut the reactor power to nothing. You have to maintain it at a rather high power for a while, about 500 megawatts electric, or half the rated power of the reactor. And what that had the effect of doing is continuing to create fission products, including xenon-135. We haven't mentioned this one yet. You'll talk about it quite a lot in 22.05, in neutron physics. Black shirt really shows chalk well. What xenon-135 does is it just sits there. It's a noble gas. It has a half-life of about nine hours, so it decays on the slow side as fission products go. But it also absorbs lots and lots and lots of neutrons. Let's see if I can find which one is the xenon one. There we go. So here, I've plotted the total cross section for xenon-135 and the absorption cross section.
And notice how, for low energies, pretty much the entire cross section of xenon is made up of absorption. Did you guys in your homework see anything that reached about 10 million barns? No. Xenon-135 is one of the best neutron absorbers there is. And reactors produce it constantly. So as they're operating, you build up xenon-135 that you have to account for in your sigma absorption cross section. Because like you guys saw in the homework, if you want to write what's the sigma absorption cross section of the reactor, it's the sum over every single isotope in the reactor of its number density times its absorption cross section. And so that would include everything for water and, let's say, the uranium and the xenon that you're building up. When the reactor starts up, the number density of xenon is 0, because nothing has produced it yet. When you start operating, you'll reach the xenon equilibrium level, where it will build to a certain level that will counteract the reactivity of the reactor. And then in your k-effective expression-- sources over absorption plus leakage-- this has the effect of raising sigma absorption and lowering k-effective. The trick is it doesn't last for very long. It builds up and decays with a half-life of about nine hours. And when you try to raise the reactor power, you will also start to burn it out. So if you're operating at a fairly low power level, you'll both be decaying and burning xenon without really knowing what's going on. And that's exactly what happened here. So an hour or so later-- let me pull up the chronology again. A little more than an hour later, the reactor power stabilized at something like 30 megawatts. And they were like, what is going on? Why is that reactor power so low? We need to increase the reactor power. So what did they do? A couple of things.
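The xenon buildup-and-burnout behavior just described can be sketched with a toy two-nuclide model: iodine-135 feeding xenon-135. The yields, cross section, flux, and fission cross section below are representative textbook-style numbers, and everything else in the reactor is ignored, so this shows the shape of the transient rather than a plant calculation:

```python
import math

# Toy I-135 -> Xe-135 transient model. All numbers are representative
# (fission yields, microscopic cross section, flux, Sigma_f); this is
# a sketch of the qualitative behavior, not a plant calculation.

LAMBDA_I = math.log(2) / (6.57 * 3600)    # I-135 decay constant, 1/s
LAMBDA_XE = math.log(2) / (9.14 * 3600)   # Xe-135 decay constant, 1/s
GAMMA_I, GAMMA_XE = 0.064, 0.002          # fission yields (fractions)
SIGMA_XE = 2.65e-18                       # Xe-135 absorption, cm^2 (~2.65e6 b)
SIGMA_F = 0.1                             # macroscopic fission xs, 1/cm
PHI_FULL = 3e13                           # full-power flux, n/cm^2/s

def rates(iodine, xenon, phi):
    """Time derivatives of the iodine and xenon number densities."""
    fission_rate = SIGMA_F * phi
    d_iodine = GAMMA_I * fission_rate - LAMBDA_I * iodine
    d_xenon = (GAMMA_XE * fission_rate + LAMBDA_I * iodine
               - LAMBDA_XE * xenon - SIGMA_XE * phi * xenon)
    return d_iodine, d_xenon

# Equilibrium inventories at full power:
I_eq = GAMMA_I * SIGMA_F * PHI_FULL / LAMBDA_I
X_eq = (GAMMA_XE * SIGMA_F * PHI_FULL + LAMBDA_I * I_eq) / (LAMBDA_XE + SIGMA_XE * PHI_FULL)

# Drop to ~3% power (roughly the 30 MW plateau) and integrate forward:
phi_low = 0.03 * PHI_FULL
iodine, xenon = I_eq, X_eq
dt = 10.0                                  # seconds, forward Euler
history = []
for _ in range(int(24 * 3600 / dt)):       # 24 hours
    d_iodine, d_xenon = rates(iodine, xenon, phi_low)
    iodine += d_iodine * dt
    xenon += d_xenon * dt
    history.append(xenon)

peak = max(history)
print(f"Xenon peaks at about {peak / X_eq:.2f}x its full-power equilibrium")
```

The poison rises after the power cut-- iodine keeps decaying into xenon while burnout has nearly stopped-- peaks hours later, and only then decays away, which is why the operators found themselves pulling rod after rod to hold power.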
One was to remove all but six or seven of the control rods, going way outside the spec of the design, because 30 were needed to actually maintain the reactor at a stable power. All the while, the xenon that had been building up is still there, keeping the reactor from going critical. It was the main reason that the reactor didn't even have very much power. But it was also burning out at the same time. So all the while-- let's say we were to show a graph of two things versus time: xenon inventory as a solid line and, let's say, control rod worth as a dotted line. The xenon inventory at full power would have been at some level. And then it would start to decay and burn out. While at the same time, the control rod worth-- as you remove control rods from the reactor, every time you remove one, you lose some control rod worth-- would continue to diminish, leading to the point where bad stuff is going to happen. Let me make sure I didn't lose my place. So at any rate, as they started pulling the control rods out, a couple of interesting quirks happened in terms of feedback. So let's look back at this design. Like any reactor, this reactor had what's called a negative fuel temperature coefficient. What that means is that, when you heat up the fuel, two things happen. One, the cross section for anything, absorption or fission, would go up. But the number density would also go down. As the atoms physically spaced out in the fuel, their number density would go down, lowering the macroscopic cross section for fission. And that's arguably a good thing. The problem is, below about 20% power, the reactor had what's called a positive void coefficient, which meant that, if you boil the coolant, you increase the reactor power. Because the other thing-- I think I mentioned this once, and you calculated it in the homework-- the absorption cross section of hydrogen is not 0. It's small, but fairly significant. Let's actually take a look at it. We can always see this in JANIS.
Go back down to hydrogen, hydrogen-1. Then we look at the absorption cross section. And of course, it started us with the linear scale. Let's go logarithmic. Oh! OK! So at low energy, at 10 to the minus 8 to 10 to the minus 7 MeV, it's around a barn. Not super high, but absolutely not negligible, which meant that part of the normal functionality of the RBMK depended on the absorption of the water to help absorb some of those neutrons. With that water gone, there was less absorption. But there was still a ton of moderation in this graphite moderator. So the neutrons could still get slow. But then there'd be more of them. And that would cause the power to increase. And then that caused more of the coolant to boil, which would cause less absorption, which would cause the power to increase. Yeah, Charlie? AUDIENCE: So did they remove the water from the reactor? MICHAEL SHORT: They did not remove the water from the reactor. However, as the power started to rise, some of the water started to boil. And so you can still have, let's say, steam flowing through and still remove some of the heat. However, you don't have that dense water to act as an absorber. And that's what really undid this reactor. In addition, they decided to disable the ECCS, or the Emergency Core Cooling System, which you're just not supposed to do. So they shut down a bunch of these systems to see if you could power the other ones from the spinning-down turbine. And then, as they noticed that the reactor was getting less and less stable, they had almost all the rods out. Some of these pressure tubes started to bump and jump. These 350-kilogram pressure tube caps were just rattling. I mean, imagine something that weighs 900 pounds or so rattling around. And there's a few hundred of them. So there was someone in the control room that said, the caps are rattling-- what the heck? And he didn't quite make it down the spiral staircase, because, about 10 seconds later, everything went wrong.
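The boiling feedback loop just described can be caricatured with a one-group, no-leakage model where k = production / absorption. Every macroscopic cross section below is an invented illustrative number, chosen only so that water supplies a few percent of the total absorption; the point is the sign of the effect, not the magnitudes:

```python
# Toy illustration of a positive void coefficient. The numbers are
# invented for illustration; only the *direction* of the change matters.
# One-group, no-leakage model: k = nu*Sigma_f / Sigma_a(total).

NU_SIGMA_F = 0.100        # production term (1/cm), illustrative
SIGMA_A_FUEL = 0.080      # absorption in fuel (1/cm), illustrative
SIGMA_A_OTHER = 0.015     # structure, control rods, etc. (1/cm)
SIGMA_A_WATER = 0.005     # absorption in the water coolant (1/cm)

def k(sigma_a_water):
    """k-effective with a given amount of water absorption present."""
    return NU_SIGMA_F / (SIGMA_A_FUEL + SIGMA_A_OTHER + sigma_a_water)

k_normal = k(SIGMA_A_WATER)   # coolant present
k_voided = k(0.0)             # coolant boiled away; graphite still moderates

print(f"k with water:  {k_normal:.3f}")
print(f"k with voids:  {k_voided:.3f}")
```

In a US light water reactor, voiding also removes the moderator, which collapses the production term; in the RBMK, the graphite keeps moderating, so voiding only removes absorption and k goes up.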
And so I want to pull up this actual timeline so you can see it splits from minutes to seconds, because the speed at which this stuff started to go wrong was pretty striking. So for example, the control rods were raised at 1:19 in the morning. Two minutes later, when the power starts to become unstable, the caps on the fuel channels-- which, again, are like 350-kilogram blocks-- start jumping in their sockets. And a lot of that was-- we go back to the RBMK reactor. As the coolant started to boil here, well, that boiling actually creates huge pressure instabilities, which would cause the pressure tubes to jump up and down, eventually rupturing almost every single one of them with enough force to shoot off these 350-kilogram caps. And what did they say? I like the language that they used-- jumping in their sockets. So 50 seconds later, pressure fails in the steam drums, which means there's been some sort of containment leak. So all the while, the coolant was boiling. The absorption was going down. The power was going up. Repeat, repeat, repeat. And the power jumped to about 100 times the rated power in something like four seconds. So it was normally a 1,000-megawatt-electric reactor, which is about 3,200 megawatts thermal. It was producing something like a third of a terawatt of thermal power for a very short amount of time until it exploded. Now, it's interesting. A lot of folks call Chernobyl a nuclear explosion. That's actually a misnomer. A nuclear explosion would be a nuclear weapon, something set off by an enormous chain reaction principally heated by fission or fusion. That's not actually what happened at Chernobyl, nor at Fukushima, nor was that the worry at Three Mile Island. Not to say it wasn't a horrible thing, but it wasn't an actual nuclear explosion. At first, what happened was a pressure explosion. So there was an enormous release of steam as the power built up to 100 times normal operating power.
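Modeling the excursion as simple exponential growth, P(t) = P0·e^(t/T), the figures quoted here (about 100 times rated power in about four seconds, from a 3,200-megawatt-thermal reactor) pin down the implied reactor period:

```python
import math

# Back-of-envelope on the excursion, using the figures from the lecture:
# ~100x rated power in ~4 s, rated thermal power ~3,200 MW.

RATED_MW_TH = 3200.0
factor, seconds = 100.0, 4.0

period = seconds / math.log(factor)       # from P(t) = P0 * exp(t / period)
peak_power_gw = factor * RATED_MW_TH / 1000.0

print(f"Implied reactor period: {period:.2f} s")
print(f"Peak thermal power: ~{peak_power_gw:.0f} GW")
```

A period under one second means the power multiplies e-fold faster than any mechanical system can respond, which is why the few seconds between rod insertion and explosion were already too late.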
The steam force was so large that it actually blew the reactor lid up off of the thing. And I think I have a picture of that somewhere here too. It should be further down. Yeah, to give you a little sense of scale: the reactor cover, which weighed about 1,000 tons, launched into the air and landed above the reactor, sending most of the reactor components up to a kilometer up in the air. Four seconds later, that was followed by a hydrogen explosion. Let me get down to that chronology. So yeah. At 1:23 and 40 seconds in the morning-- oh, yeah, I should mention why this happened-- emergency insertion of all the control rods. The last part that this diagram doesn't mention is that these control rods-- and I'll draw this up here-- were tipped with about six inches of graphite. So if these were two graphite channels-- let's say these are carbon-- and this is your control rod, the goal was to get this control rod all the way into the reactor. One part they didn't mention was they were tipped with about six inches of graphite, which only functions as additional moderator. Graphite is one of the lowest absorbing materials in the periodic table, second, I think, only to oxygen. And if we pull up graphite cross sections, I've plotted here the total cross section and the elastic scattering cross section. And down here, at the 0.001-barn level, is the absorption cross section, about 1,000 times lower than water. So you're shoving more material into the reactor that slows down neutrons even more, bringing them into the high-fission region without absorbing anything. And the rods jammed about halfway down, about 2 and 1/2 feet down, leaving the extra graphite right in the center of the core where it could do the most damage. And it didn't take that much time. Yeah? AUDIENCE: So my understanding is that, also, one of the designs is that the control rods didn't immediately drop down. But they were slowly lowered. MICHAEL SHORT: Yep. They took 7 to 10 seconds.
AUDIENCE: If they had a system where they did drop, would that have possibly actually shut the system down properly? MICHAEL SHORT: I'm not sure. I don't know whether lowering control rods into something that was undergoing steam explosions would have actually helped. I mean, to me, by this point, it was all over. So the extra moderator that was dumped in was the last kick in the pants this thing needed to go absolutely insane. And if we go back to the timeline on the second level, control rods inserted at 1:23 and 40 seconds. Explosion, four seconds later, to 120 times full power, getting towards 400 gigawatts or so. One second later, the 1,000-ton lid launches off from the first explosion. Very shortly after that, second explosion. And that happened because of this reaction. Well, just about anything corroding with water will make pretty much any oxide plus hydrogen-- the same chemical explosion that was the undoing of Fukushima, and was the worry at Three Mile Island, where there was a hydrogen bubble building because of corrosion reactions with whatever happened to be in the core. This happens with zirconium pretty vigorously. But it happens with other materials too. If you oxidize something with water, you leave behind the hydrogen. And hydrogen, in a very wide range of concentrations in air, is explosive. We're actually not allowed to use hydrogen above about 4% in any of the labs here because that reaches the flammability or explosive limit. So for my PhD, we were doing these experiments corroding materials in liquid lead. And we wanted to dump in pure hydrogen to see what happens when there's no oxygen. We were told, absolutely not. We had to drill a hole in the side of the wall so that the hydrogen would vent outside, and do some calculations to show that if the entire bottle of hydrogen emptied into the lab at once-- which it could do if the cap of the bottle breaks off-- it would not reach 4% concentration. So hydrogen explosions are pretty powerful things.
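The kind of calculation described-- showing that a whole bottle of hydrogen dumped into the lab stays under the ~4% flammability limit-- might look like the sketch below. The bottle and room dimensions are hypothetical numbers invented for the example, not the actual lab's:

```python
# Hypothetical version of the hydrogen-dilution safety calculation
# described in the lecture. Bottle and room sizes are invented numbers.

BOTTLE_VOLUME_L = 50.0         # water volume of a typical gas cylinder
BOTTLE_PRESSURE_BAR = 150.0    # fill pressure (ideal-gas approximation)
ROOM_VOLUME_M3 = 10 * 8 * 3.5  # hypothetical lab: 10 m x 8 m x 3.5 m

# Volume of gas at ~1 atm after the bottle empties (ideal gas, in m^3):
h2_volume_m3 = BOTTLE_VOLUME_L * BOTTLE_PRESSURE_BAR / 1000.0
fraction = h2_volume_m3 / ROOM_VOLUME_M3

print(f"H2 released: {h2_volume_m3:.1f} m^3")
print(f"Concentration: {fraction:.1%} (flammability limit ~4%)")
```

With these made-up dimensions the lab stays below the limit; shrink the room or add a second bottle and it would not, which is exactly why the calculation (and the vent hole) was required.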
You guys ever seen people making water from scratch? Mix hydrogen and oxygen in a bottle and light a match? We've got a video of it circulating somewhere around here, because for RTC, the Reactor Technology Course, I do this in front of a bunch of CEOs and watch them jump out of their chairs to teach basic chemical reactions. But it's pretty loud. About enough hydrogen and oxygen to just fill this cup, or fill a half-liter water bottle, makes a bang that gets your ears ringing-- not quite bleeding, but close enough. So that's what happened here, except at a much more massive scale. So there was a steam explosion, followed seconds later by a hydrogen explosion from hydrogen liberated by the corrosion reaction of everything with the water that was already there. And that's when this happened. [VIDEO PLAYBACK] - [NON-ENGLISH SPEECH] MICHAEL SHORT: So that smoke right there is from a graphite fire, not normal smoke. - [NON-ENGLISH SPEECH] MICHAEL SHORT: Yeah. Spoke too soon. - [NON-ENGLISH SPEECH] [END PLAYBACK] MICHAEL SHORT: This actually provides a perfect conduit to transition from the second to the third parts of this course. A lot of you have been waiting to find out what are the units of dose and what are the biological and chemical effects of radiation. Well, this is where you get them. From neutron physics, you can understand why Chernobyl went wrong. Honestly, you've just been doing this for three or four weeks. But with your knowledge of cross sections, reactor feedback, and criticality, you can start to understand why Chernobyl was flawed in its design. And what we're going to teach you in the rest of the course is what happens next-- what happens when radionuclides are absorbed by animals or the human body, and what was the main fallout, let's say, in the colloquial sense and the actual sense, from the Chernobyl reactor. Let's look a bit at what they did next, though. [VIDEO PLAYBACK] - [NON-ENGLISH SPEECH] MICHAEL SHORT: That's not quite true.
You'll see why. - [NON-ENGLISH SPEECH] MICHAEL SHORT: That actually did happen. - [NON-ENGLISH SPEECH] [END PLAYBACK] MICHAEL SHORT: I think that pretty much summarizes the state of things now. They built a sarcophagus around this reactor, a gigantic tomb, which, according to some reports, is not that structurally sound and is in danger of partial collapse. So yeah, more difficult efforts are ahead. But let's now talk about what happened next. I'm going to jump to the very end of this. The actual way that the accident was noticed was the spread of the radioactive cloud to not-so-close-by Sweden. So it was noticed that folks entering a reactor in Sweden had contaminants on them, which they thought was coming from their own reactor. Good first assumption. When it was determined that nothing was amiss at the reactor in Sweden, folks started to analyze wind patterns and find out what happened. And then it was clear that the USSR had tried to cover up the Chernobyl accident. But you can't cover up fallout. And it eventually spread pretty wide, covering most of Europe and Russia and surprisingly not Spain, lucky them for the wind patterns that day, or those few days. So what happened is a few days after the actual accident, a graphite fire started to break out. Because graphite, when exposed to air, well, you can do the chemistry. Add graphite plus oxygen, you start making carbon dioxide. So graphite burns when it's hot. And as you can see from the video-- where is that nice still of burning graphite? Yeah. That graphite was pretty hot. So a lot of that smoke included burning graphite and a lot of the materials from the reactor itself. Now, when you build up fission products in a reactor and they get volatilized like this, the ones that tend to get out first would be things like the noble gases. So the whole xenon inventory of the reactor was released. It's estimated at about 100%. And I can actually pull up those figures. 
We'll get to those when we talk about how much of each radionuclide was released. That's also a typo, if somebody wants to call in-- there's no xenon-33. It's supposed to be xenon-133. It would be interesting if someone wants to call in and say the NEA has got a mistake. So 100% of the inventory released. That should be pretty obvious because it's a noble gas. And it just kind of floats away. The real dangers, though, came from iodine-131, about 50% of a 3-exabecquerel activity. So we're talking like megacuries. It might be giga. I can't do that math in my head. A lot of radiation. The problem with that is iodine behaves just like any other halogen. It forms salts. It's rather volatile. Have any of you guys played with iodine before? No one does-- oh, you have. OK. What happens when you play with it? AUDIENCE: I mean, just throw some stuff-- like, it turns everything yellow and it just reacts with acids and stuff. I haven't really done very much with it. So-- MICHAEL SHORT: OK. I happen to have extensive practice playing with iodine in my home because I did all the stuff you're not supposed to do as a kid-- kind of built my own chemistry setup, the sort of thing that somehow leaks over to your local high school. Iodine's pretty neat. Yeah, it happens sometimes. If you put iodine in your hand, it actually sublimes. The heat from your hand is enough to take it directly from solid to vapor. And so the iodine was also quite volatile. Some of it may have been in the form of other compounds. Some of it may have been elemental-- probably not likely. But there was certainly some iodine vapor. And about half of that was released. The problem is then it condenses out and falls on anything green, anything with surface area. So the biggest danger to the folks living nearby was from eating leafy vegetables, because leaves have got lots of surface area. Iodine deposits on them. And it's intensely radioactive for a month or so.
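The megacuries-versus-gigacuries question above is quick to settle numerically. A minimal sketch, using only figures from the lecture (half of a 3-exabecquerel iodine-131 activity) and the definition 1 Ci = 3.7e10 Bq:

```python
# Convert the iodine-131 release (~50% of a 3-exabecquerel inventory)
# from becquerels to curies. 1 Ci = 3.7e10 Bq by definition.
BQ_PER_CI = 3.7e10

released_bq = 0.5 * 3e18          # half of 3 EBq, in Bq
released_ci = released_bq / BQ_PER_CI

print(f"{released_ci:.2e} Ci")    # ~4.05e7 Ci, i.e. about 40 megacuries
```

So it is megacuries after all: roughly 40 of them.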
Or depositing on the grass that cows eat, which led to the problem of radioactive milk. And so that's why milk in the Soviet Union was banned for such a long time, because this was one of the major sources of iodine contamination. The other one, which we're worrying about now from Fukushima as well, is cesium, which has similar chemistry to sodium and potassium-- again, a rather salty compound, or rather salty element. But it's got a half-life of 30 years. And if we look it up in the table of nuclides, we'll see what it actually releases. Oh, good. It's back online. Anyone else notice this was broken a couple of days ago? AUDIENCE: Yeah. MICHAEL SHORT: Well, luckily, Brookhaven National Lab has a good version up too. But let's grab cesium. Yeah, there's plenty out there. Cesium-137. Beta decays to barium but also gives off gamma rays. And most of the decays end up giving off one of those gamma rays, let's say a 660-keV gamma ray. So it's both a beta and a gamma emitter. Now, which of those types of radiation do you think is more damaging to biological organisms? The beta or the gamma? AUDIENCE: Gamma? MICHAEL SHORT: You say the gamma. Why do you say so? AUDIENCE: Doesn't beta get stopped by the skin and clothing? MICHAEL SHORT: It does. But if cesium is better known as-- AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yes. That's right. So did I get to tell you guys this question, the four cookies question? Yeah. You eat the gamma cookie because most gammas that are emitted by the cookie simply leave you and irradiate your friend, which is going to be the topic of pset number 8. You'll see. That's why you guys are getting your whole body counts. Speaking of, who's gotten their whole body counts at EHS? Awesome. So that's almost everybody. You will need that data for problem set 8. So do schedule it soon, preferably before Thanksgiving so that you'll be able to take a look at it. Has anyone found anything interesting in your spectra? Good. Glad to hear that.
But you do see a potassium peak that you can probably integrate and do some problems with, right? Yeah, because you will. OK. Anyway, yeah. It's the betas. That's the real killer. The gammas are going to leave the cesium, enter your body, and most likely come out the other side. Because the mass attenuation coefficient of-- what is it? Water for 660-keV gammas. Let's find that. Table 3. Let's say you're made mostly of water. Water, liquid, that's pretty much humans. 660 keV is right about here, leading to about 0.1 centimeters squared per gram. And with a density of 1 gram per cubic centimeter, that's a pretty low attenuation of gammas. So this chart actually shows why most of the cesium gammas that would be produced from ingestion just get right out. But it's the betas that have an awfully short range. Anyone remember the formula for range in general? So this is going to come back up in our discussion of dose and biological effects. Integral, yep, of stopping power to the negative 1. And that stopping power is this simple formula. Let's see. What did that come out as? Log minus beta squared. That simple little formula, which I'm not going to expect you guys to memorize. So don't worry about it. But if you integrate this, you find out that the range of electrons, even 1-MeV electrons, in water is not very high. So most of them are stopped near or by the cells that absorb them, doing quite a bit of damage to DNA, which is eventually what causes mutagenic effects-- cancer, cell death, what we're going to talk about for the whole third part of the course. There's also a worry about which organs actually absorb these radionuclides. And iodine in particular is preferentially absorbed by the thyroid. So when we started looking at the amount of radioactive substances released-- remember they said, OK, at around the 26th of April or the 2nd of May or so, the release was stopped? Not according to our data. That's when the graphite fire picked up again.
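The claim that most 660-keV gammas escape the body can be checked with simple exponential attenuation, I = I0 * exp(-mu * x). A sketch using the values quoted above (mass attenuation coefficient ~0.1 cm^2/g for water at 660 keV, density 1 g/cm^3); the 15 cm path length is just an assumed, illustrative average escape distance from inside a torso:

```python
import math

mu_over_rho = 0.1        # cm^2/g, mass attenuation coeff. of water at 660 keV
rho = 1.0                # g/cm^3, density of water (~human tissue)
mu = mu_over_rho * rho   # linear attenuation coefficient, cm^-1

mean_free_path = 1 / mu  # average distance between interactions
print(f"mean free path: {mean_free_path:.0f} cm")  # 10 cm, comparable to body size

# Uncollided fraction escaping through an assumed 15 cm of tissue
depth = 15.0
escaping = math.exp(-mu * depth)
print(f"uncollided fraction: {escaping:.2f}")      # ~0.22
```

Note this counts only uncollided photons; Compton-scattered gammas lose some energy but many still leave the body, so the total escaping fraction is higher than this estimate.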
In addition, the core of Chernobyl, which had undergone a mostly total meltdown, was sitting in a pool on top of this concrete pad. So let's just call this liquid stuff-- the actual word that we use in parlance is corium. It's our tongue-in-cheek word for every element mixed together in a hot radioactive soup. First of all, it started to redistribute, reacting with any water that was present, flashing it to steam. And the steam caused additional dispersion of radionuclides. And eventually, it burrowed its way through and into the ground, releasing more. It's the worst nuclear thing that's ever happened in the history of nuclear things. Quite a mess. And luckily, it did sort of taper off after this. But let's now look into what happens next. And this is the nice intro to the third part of the course. Iodine is preferentially taken up by the thyroid gland, somewhere right about here. So has anyone ever heard of the idea of taking iodine tablets in the case of a nuclear disaster? Anyone have any idea why? If you saturate your thyroid with iodine, then if you ingest radioactive iodine, it's less likely to be permanently taken up by the thyroid. So this actually provided some statistics on the probability of getting thyroid cancer from radioactive iodine ingestion. Luckily, the statistics were quite poor, which means that not many people were exposed. It was somewhere around 1,300 or so, not like millions. Yeah, 1,300 people total. But what I want to jump to is the dose-versus-risk curve. And this is going to underlie all of our discussion about the biological long-term effects of radioactivity. What's the most striking thing you see as part of this curve? AUDIENCE: Error bars. MICHAEL SHORT: That's right. That's the first thing I saw. There are six different models for how increased risk of cancer proceeds with dose. And they all fall within almost all the error bars of these measurements.
I say, again, thank God that the error bars are so high, because that means that the sample size was so low. So when folks say we don't really know how much radioactivity causes how much cancer, they're right, because, luckily, we don't have enough data from people being exposed to know that really, really well. So some folks say we should be cautious. I kind of agree with them. Some folks say the jury's still out. I also agree with them. But you can start to estimate these sorts of things by knowing how much radiation energy was absorbed and to what organ. So I think the only technical thing I want to go over today is the different units of dose. Because as you start to read things in the reading, which I recommend you do if you haven't been already, you're going to encounter a lot of different units of radiation dose, ranging from things like the roentgen, which corresponds to a number of ionizations. You won't usually see this one given in sort of biological parlance, because it's the number of ionizations detected by some sort of gaseous ionization detector. So the dosimeters that you all put on-- did you guys all bring these brass pen dosimeters in through the reactor? Did anyone look through them to see what the unit of dose was? It's going to be in roentgens, because that's directly correlatable to the number of ionizations that that dosimeter has experienced. You'll also see four dose units, two of which are just factors of 100 away from each other. There is what's called the rad and the gray. And there's what's called the rem and the sievert. You'll see these abbreviated as gray. You'll see these as R. And these are just usually written as rem. So a rad is simple. Let's see. 100 rads is the same as 1 gray. And 100 rem is the same as 1 sievert. And for the case of gamma radiation, these units are actually equal.
I particularly like this set of units because this is the kind of SI of radiation units, because it comes directly from measurable, calculable quantities. Like the gray, for example: the actual unit of gray is joules absorbed per kilogram of absorber. It's a pretty simple unit to understand. If you know how many radioactive particles or gammas or whatever that you have absorbed, you can multiply that number by their energy, divide by the mass of the organ absorbing them, and you get its dose in gray. Sievert is gray times some quality factor for the radiation times some quality factor for the specific type of tissue. What this says is that some types of radiation are more effective at causing damage than others. And some organs are more susceptible to radiation damage than others. Does anyone happen to know some of the organs that are most susceptible to radiation damage? AUDIENCE: Soft tissues. MICHAEL SHORT: Soft tissues like what? Because there's lots of those. AUDIENCE: Stomach lining. MICHAEL SHORT: Stomach lining. Yep. Yeah? AUDIENCE: Lungs. MICHAEL SHORT: Lungs. Yep. What else? AUDIENCE: Thyroid. MICHAEL SHORT: Thyroid. Yep, there is definitely one for thyroid. AUDIENCE: Bone marrow. MICHAEL SHORT: Bone marrow. What other ones? Brain, actually not so much. The eyes. And where else do you find rapidly dividing cells in your body? AUDIENCE: Skin. MICHAEL SHORT: Skin. Yep, the dermis. AUDIENCE: The liver? MICHAEL SHORT: I don't know about the liver. I would assume so. Yeah, it's a pretty active organ. But when folks are worried about birth defects, reproductive organs. The link here that, for some reason, is not said in the reading, and I've never figured out why, is that the more often a cell is dividing, the more susceptible it is to gaining cancer risk. Because every cell division involves copying its DNA.
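The unit relations above lend themselves to a tiny converter. A minimal sketch, assuming only the factors quoted in lecture (100 rad = 1 Gy, 100 rem = 1 Sv, quality factor 1 for gammas and ~20 for alphas); the 0.04 tissue weighting factor for the thyroid is an assumed illustrative value, not from the lecture:

```python
def equivalent_dose_sv(absorbed_gy, radiation_quality):
    """Equivalent dose = absorbed dose (Gy) x radiation quality factor."""
    return absorbed_gy * radiation_quality

def effective_dose_sv(absorbed_gy, radiation_quality, tissue_weight):
    """Effective dose also folds in the tissue weighting factor."""
    return absorbed_gy * radiation_quality * tissue_weight

GY_PER_RAD = 0.01   # 100 rad = 1 Gy
SV_PER_REM = 0.01   # 100 rem = 1 Sv

print(250 * GY_PER_RAD)              # 250 rad -> 2.5 Gy
# 1 Gy of gammas (quality factor 1) vs 1 Gy of alphas (quality factor ~20)
print(equivalent_dose_sv(1.0, 1))    # 1.0 Sv
print(equivalent_dose_sv(1.0, 20))   # 20.0 Sv -> ~20x the risk per gray
```

For gammas the quality factor is 1, which is exactly the lecture's point that gray and sievert coincide for gamma radiation.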
And any time that radiation goes in and damages or changes that DNA, by either causing what's called a thymine dimer, where two thymine bases get linked together, or damaging the structure in some other way, that gene is then replicated. And the faster they're replicating, the more likely cancer is going to become apparent. I guess this brings up a question. When does a rapidly dividing cell become cancer? Is it division number 1, or is it when you notice it? I guess I'll leave that question to the biologists. But if you notice, in the reading, you'll see a bunch of different tissue equivalency factors. And you'll just see them tabulated and say, there they are. Memorize them. I want you to try and think of the pattern between them. The tissues that basically don't matter, like the non-marrow part of the bone, dead skin cells, muscles, things that basically aren't listed that much, they're not dividing very fast. But anywhere where you find stem cells, the lining of your intestine, your lungs, which undergo a lot of environmental damage and need to be replenished, gonads, dura, skin-- what was the other one that we said? Eyes. These are places that are either sensitive tissues or they're rapidly dividing. And so the sievert is kind of a unit of increased equivalent risk, so that, if you were to absorb one gray of gamma rays versus one gray of alphas, you'd be about 20 times more likely to incur cancer from the alphas than the gammas because of the amount of localized damage that they do to cells. And we'll be doing all this in detail pretty soon. And then for the tissue equivalency factor, if you absorb one gray in your whole body, which means one joule per kilogram of average body mass, versus one gray directly to the lining of your intestine by, let's say, drinking polonium-laced tea like happened to a poor-- who was it? Current or ex-KGB guy or the Russian fellas? No, it was the KGB guys that poisoned him, right? Yeah. Do you guys remember back in 2010 or so?
There was a Russian-- was he a journalist? AUDIENCE: Actually, he was ex-KGB. MICHAEL SHORT: Ex-KGB. So the current KGB somehow got into London and slipped polonium into his tea at a Japanese restaurant. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Really? AUDIENCE: I think so, right? [INAUDIBLE] It was unsuccessful. MICHAEL SHORT: What was his name? Let's see. The polonium poisoning. Did he actually die? Poisoning of Alexander Litvinenko. AUDIENCE: That's pretty close to dead. MICHAEL SHORT: He's not doing too well. Illness and poisoning, death, and last statement at the hospital in London. So yeah. AUDIENCE: He probably said something awesome. AUDIENCE: What did he say? MICHAEL SHORT: Well, interesting. That probably has something to do with it. AUDIENCE: That's a lot of-- a really long last-- MICHAEL SHORT: Yeah? Well, we're not going to comment on the politics. But the radiation effect worked, clearly, unfortunately. So polonium is an alpha emitter. And that caused a massive dose of alphas to his entire gastrointestinal tract. And that caused a whole lot of damage to those cells. No time for cancer. It actually killed off a lot of those stem cells. And the way that radiation poisoning would work is that, if you kill off the stem cells, the villi in your intestines die, which are responsible for absorbing nutrition. You can't uptake nutrition. You basically starve. It doesn't matter what you eat. It's messed up. Yeah. That's a really bad way to go. It's called gastrointestinal syndrome. And we'll be talking about the progressive effects of acute radiation exposure where you have immediate effects mostly relating to the death of some organ that is responsible for either cell division to keep you alive or, in extreme cases, your neurological system. And nerve function just stops at the highest levels of dose. And that corresponds to doses of around 4 to 6 gray. 
4 to 6 joules per kilogram of villi, or body mass, will kill you pretty quickly, with very little chance of survival, as happened here. And so this was the problem with all the folks living around and near Chernobyl and Ukraine and Belarus and everywhere: the contamination was pretty extensive. About 4,000 people are estimated to have died or contracted cancer from this. I can't believe how low that number is. But it's still 4,000 people to whom that never should have happened. And effects were felt far away in towns like Gomel and-- can't read that one because there's not enough pixels. Because of the way rainwater caused the vapor cloud from the reactor to fall out on certain places, which still, to this day, can have a really large contamination area. And this brings me a little bit into what we should be worried about from Fukushima-- a whole lot less than Chernobyl. And the reason why is Fukushima did undergo a hydrogen explosion and did and still continues to release cesium-137 into the ocean. Luckily for us, the ocean is big. And except for fish caught right near Fukushima, even though concentrations can be measured at hundreds to thousands of times normal concentrations, they can still be hundreds to thousands of times lower than safe consumption limits. So a lot of the problems you see in the news today-- I'm not going to call them lies, but I'm going to call them half truths. Folks will show the radiation plume of cesium-137 escaping from Fukushima. And that's true. There is radiation escaping. The question is, is it high enough to cause a noticeable increased risk of cancer? That's the question that reporters should be asking themselves. Instead, they only tell the half of the story that gets them viewers, and they don't tell the other half that would complete the story and tell you whether you should be afraid or not. Because unfortunately, fear brings viewers.
This is the problem-- and I'm happy to go on camera saying this-- with the media today: with a half truth and with a half story, you can incite real panic over non-physical issues that may not actually exist. And so it's important that the media tell the whole story. Yes, it's true that Fukushima's releasing cesium-137. How much, though, is the question that people and the media should be asking themselves. And in the rest of this course, we're going to answer the question, how much is too much? So I'm going to stop here, since it's 2 of 5, and ask you guys if you have any questions on the whole second part of the course or what happened in Chernobyl. Yeah. AUDIENCE: Yeah. Could you explain the quality factor term and how you find that? MICHAEL SHORT: Yeah. Well, there's two quality factors. There is the quality factor for radiation, which will tell you, let's say, how much more cell damage a given amount of a given type of radiation of the same energy will deposit into a cell. And the tissue equivalency factor tells you, well, what's the added risk of some sort of defect leading to cell death or cancer or some other defect from that radiation absorption. So to me, the tissue equivalency factor is roughly, but not completely, approximated by the cell division rate. And the radiation quality factor is going to be quite proportional to the stopping power. You'll see a term called the Linear Energy Transfer, or LET. This is the stopping power unit used in the biology community. It's stopping power. And luckily, the Turner reading actually says it, somewhere buried in a paragraph: LET is stopping power. So if you start plotting these two together, you might find some striking similarities. I saw two other questions up here. Yeah? AUDIENCE: Why is Chernobyl still considered off limits if most of the half-lives of these things are in the range of days to two years? I mean, it happened-- MICHAEL SHORT: Let's answer that with numbers.
So most of the half-lives were on the range of days to hours. But still, cesium-137, with a half-life of 30 years, released a third of an exabecquerel. That's one of the major sources of contamination still out there. In addition, if we scroll down a little more, there was quite a bit of plutonium inventory with a half-life of 24,000 years. So on Friday, we're going to have Jake Hecla come in and give his Chernobyl travelogue because one of our seniors has actually been to Chernobyl. And his boots were so contaminated with plutonium that he could never use them again. They've got to stay wrapped up in plastic. So some of these things last tens of thousands of years. And even though there weren't a lot of petabecquerels of plutonium released, they're alpha emitters. And they're extremely dangerous when ingested. So greens and things that uptake radionuclides from the soil like moss and mushrooms are totally off limits in a large range of this area. You will find the video online, if you look, of a mayor from a nearby town saying, oh, they're perfectly safe to eat. Look, I eat them right here. And I just say read the comments for what people have to say about that. Not too smart. Yeah. AUDIENCE: So what's the process now for taking care of [INAUDIBLE]? MICHAEL SHORT: So the sarcophagus around the reactor has got to be shored up to make sure that nothing else gets out. Because most of the reactor is still there. And let's say rainwater comes in and starts washing away more stuff into the ground or whatever. We don't want that to happen. Soil replacement and disposal as nuclear waste is still going on. Removal of any moss, lichen, mushrooms, or anything with a sort of radiation exposure has got to keep going. But the area that it covers is enormous. I don't know if we're ever going to get rid of all of it. The question is, how much do we have to get rid of to lower our risk of cancer in the area to an acceptable rate? 
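The 30-year half-life of cesium-137 makes the "why is it still off limits" question easy to quantify. A minimal sketch, assuming only the figures above (roughly a third of an exabecquerel of Cs-137 released, half-life 30 years):

```python
def remaining_activity(a0_bq, half_life_yr, elapsed_yr):
    """Radioactive decay: A(t) = A0 * 2^(-t / t_half)."""
    return a0_bq * 2 ** (-elapsed_yr / half_life_yr)

a0 = 1e18 / 3          # ~a third of an exabecquerel of Cs-137, released 1986
for years in (30, 60, 120, 300):
    a = remaining_activity(a0, 30, years)
    print(f"after {years:3d} yr: {a:.1e} Bq")
```

After 30 years, about when this lecture was given, half the cesium is still there; even after 300 years, roughly a thousandth of it remains. That, plus the 24,000-year plutonium, is why cleanup timescales are measured in generations.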
There will likely be parts of this that are inaccessible for thousands to tens of thousands of years unless we hopefully get smarter about how to contain and dispose of this kind of stuff. We're not there yet. So right now, the methods are kind of simple. Get rid of the soil. Fence off the area. Some folks have been returning. And they do get compensation and free medical visits because the background levels there are elevated but not that high. So folks have started to move back to some of these areas. But there's a lot that are still off limits. Any other questions? Yeah. AUDIENCE: It's way worse than the atomic bombs dropped on Hiroshima and Nagasaki because those are full-functioning cities at this point. MICHAEL SHORT: Yeah. The number of deaths from the atomic bombs way outweighed the number of deaths that will ever happen from Chernobyl. AUDIENCE: But why is the radiation from those bombs not-- MICHAEL SHORT: Oh, not that much of an issue? There wasn't that much material. There wasn't that much nuclear material in an atomic bomb. What did you guys get for the radius of the critical sphere of plutonium? AUDIENCE: [INAUDIBLE] centimeters. MICHAEL SHORT: Centimeters? Yeah. It doesn't take a lot. It takes 10, 20 kilos to make a weapon. Now, we're talking about tons or thousands of tons of material released. So an atomic weapon doesn't kill by radiation. It kills by pressure wave, the heat wave. The fallout is not as much of a concern. And we'll actually be looking at the data from Hiroshima and Nagasaki survivors to see who got what dose, what increased cancer risk did they get, and is the idea that every little bit of radiation is a bad thing actually true. The answer is you can't say yes or no. No one can say yes or no because we don't have good enough data. The error bars support either conclusion. So I'm not going to go on record and say a little bit of radiation is OK. They data is not out yet. Hopefully, it never will be. Any other questions? All right. 
I'll see you guys on Thursday.
MIT_2201_Introduction_to_Nuclear_Engineering_and_Ionizing_Radiation_Fall_2016 | 32_Chemical_and_Biological_Effects_of_Radiation_Smelling_Nuclear_Bullshit.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: So, today we're going to get into the most politically and emotionally fraught topic of this course: the chemical and biological effects of radiation. Now that you know the units of dose, background dose, we're going to talk about what ionizing radiation does in the body, to cells, to other things, and we're going to get into a lot of the feelings associated with it. And by the end of this lecture, or Thursday, I'm going to teach you guys how to smell bullshit. Because we're going to go through one of millions of internet articles about things that cause cancer, that don't cause cancer. In this case, it's going to be radiation from cell phones. So I'm going to try to reserve at least 10 minutes at the end of this class for us to go through a bunch of quote, unquote, studies and misinterpretations of those conclusions. And I was going to pick my favorite of the 44 studies, and looking through them all, my favorites are all of them. AUDIENCE: [LAUGHTER] MICHAEL SHORT: So we'll see how many we can get through. But let's get into the science first, so you can understand a bit about what goes on with ionizing radiation. Like radiation damage in materials, radiation damage in biological systems is an extremely multi-time-scale process.
Everything from the physical stage, or the ballistic stage, of radiation damage to biological tissues, acting on femtoseconds-- this is just the physical knocking about of atoms and the creation of free radicals, these ionized species, which in metals you wouldn't care about, but in biological organisms you do, because then they undergo chemical reactions-- from the initial movement and creation of other strange radiolytic species, to the diffusion and reaction of those things, which starts and finishes in about a microsecond, before most of these things are neutralized. And then, later on, the buildup of those oxidative byproducts of these chemical reactions undergoes the biological stages of radiation damage. All of the free radicals have reacted with biological molecules within a millisecond. So radiation goes in, and a millisecond later the damage is done. Then you start to affect, let's say, cell division. It takes, on average, minutes for a rapidly dividing cell to undergo a division. That's when the effects would first be manifest from a DNA mutation. But then it'd take things like weeks or years for these sorts of things to manifest in a health-related aspect. So, the division of one cancerous cell into two won't change the way your body functions, but the doubling in size of a tumor that blocks other tissue absolutely would. And it all starts in this sub-femtosecond regime. When most of you-- well, for this entire year, we've been approximating humans as water. We're going to continue to do so for the purposes of these biological effects. So, let's say you, a giant sack of water, get irradiated by a gamma ray. And that gamma ray undergoes Compton scattering-- and now you know how to tell what the energy of the Compton electron would be. We never talked about what happens with the molecule where it came from. That molecule remains ionized. And since you're not especially electrically conductive, it's not neutralized immediately.
And you can be left over with either a free radical or an electron in an excited state. And then what happens next is the whole basis of radiation damage to biological organisms. These free radicals can then encounter other ones, and, let's say, an H2O+ can very quickly find a neighboring water molecule, which it's almost touching, and form OH and H3O+. That H3O+ is better known as H+, and that OH is a kind of unstable molecule. And these excited electrons here can also become these H2O+'s, leading to this cascade of what we call radiolysis reactions. There's a few of them listed here, things like an OH plus an aqueous electron, which could come from anywhere, like Compton scattering, like any other biological process that frees an electron, can make another OH-. So you can locally change the pH inside the cell that you happen to be irradiating. Or, let's say any of these oxidative byproducts could encounter DNA, rip off or add an electron to one of the guanine, thymine, or other two or three bases in DNA or RNA-- then you've changed the genetic code of the cell. In the progression of these radiolysis byproducts, like I mentioned, whether you go by excitation or ionization, these six species tend to be the ending byproducts of a whole host of radiolysis reactions. And don't worry, you're never going to have to memorize all the radiolysis reactions, because the mechanism map is fairly complicated and there are multiple routes to creating each one. But the ones that are highlighted here in these squares are the ones that end up building up in your body, things like peroxide. Has anyone ever put peroxide on a wound before? What happens? Yell it out. AUDIENCE: It bubbles up. MICHAEL SHORT: Bubbles up. What happens when you form peroxide in your body from radiation? AUDIENCE: It bubbles up.
MICHAEL SHORT: Well, luckily it doesn't quite bubble up on the macro scale, but it is a vigorous oxidizer. 90% H2O2 is used as the oxidizing species in rocket fuel. You don't make 90% H2O2 from getting irradiated, but every molecule counts. Things like O2-- you're shifting the amount of oxygen in the cells. And then there's things like these superoxide radicals, or H2O-, H2O+, or all these other things that are available to rip off or add an electron to something else that normally wouldn't have it. And the list of these potential reactions, as well as their equilibrium constants and activation energies, is huge. Here's half of it. Notice a lot of these equilibrium constants shift really strongly one way or the other. So, just because these molecules are made doesn't mean that all of them end up staying and doing damage. But unless these rate constants are either 0 or infinity, there's going to be some dynamic equilibrium of these reactions. So, once in a while, some of these free radicals will escape the cloud of chemical change and charge and get to something else. Here's the other half of the equation set. And it's under debate just how many of these reactions there actually are. Like, how often would O2- radicals combine with water, which you can see is not quite set in the reaction, to form [? HO2 - NO2 NH+ ?] Kind of a strange little reaction right there. Actually, a lot of them are quite strange. You don't usually think of them happening, because these are very transient reactions, whose byproducts do build up. And that's the chemical basis for radiation damage to biological tissues. Now, once those chemical products form, they have to move or diffuse. So you can actually calculate or get diffusion coefficients for some of these oxidizing species, as well as compute an average radius that they'll move before undergoing a reaction. So this is part of the basis for why alpha radiation is a lot more damaging than gamma radiation.
Chances are, if you incorporate an alpha emitter into the cell, it does a whole bunch of damage. That damage consists of these oxidative chemical species that, if they're within that radius of neighboring atoms that happen to be in DNA, might do some damage. Whereas, isolated Compton scatters and photoelectric excitations from gamma radiation, not so much. Chances are you hit random water in the cell that isn't quite close to anything fragile, and not much happens. But you can also see this by looking at charged particle tracks. These things can actually be experimentally measured. By firing electrons into gel or film or something like that, you can actually see tracks of ionization and watch them as a function of time. In this case, it's a simulation of a charged particle track at different timescales. So, right here, this 10 to the minus 12 for the time in seconds tells you where these radiolysis products are. And the N number, here, tells you how many of those remain. So after a picosecond, you can pretty much just trace out the path that the electron took, which starts off right here. What do you guys notice about the density of the charged particle track as it moves from the source to the end? AUDIENCE: It's much more dense at the end. MICHAEL SHORT: It's much more dense at the end. And why do you think that is? AUDIENCE: Stopping power. MICHAEL SHORT: OK. More than just-- yeah. Stopping power, yes, but fill in the beginning and end of that sentence. Chris, do you have your hand up? AUDIENCE: [? It's all good. ?] So, it's a charged particle, so it drops off most of its energy where it has the least amount of energy, so it does the most damage [INAUDIBLE]. MICHAEL SHORT: That's right. So, you're actually visualizing the change in stopping power as a function of charged particle energy. It comes in, has a very high energy. And it might knock off a little radiation damage cascade by hitting another electron, which can have its own shower of ionization.
And then it moves while doing nothing, in this straight line, until it hits another one. And notice right at the end, that's where the densest amount of damage is done because that's where the stopping power is the highest. It's also where the energy is the lowest. So, this is where the worlds of chemistry and physics collide. You can actually visualize stopping power, like actually visually in gel or on film or on a computer by watching these charged particle tracks. And after 10 to the minus 12 seconds, all the ballistics are over. Then you end up with diffusion and reaction. So, it's going to be a balance between these charged particles moving away from each other and finding something else, or finding each other and recombining. And that's why, as you go up in timescale, the particle tracks get more and more diffuse and the number of these remaining free radicals goes down until you level out at about a microsecond, when all of the different particles are so spread out that there are none touching each other anymore. To refresh your memory a bit from a few seconds ago, take a look at some of the charge states of these oxidative byproducts. Some of them plus, some of them minus, some of them excited, all over the place. So they can react with each other, which is something you'd want to encourage so that they don't go and find something else, causing biological damage. There's a question on last year's OCW problem set, that I'm not giving you for this one, which is: calculate the radiation resistance you would get by getting cryogenically frozen. So here's a question that I don't think a lot of cryogenicists ask themselves: if you want to preserve a human for 10,000 years and wake them up later, how much radiation damage are you going to get? Ever think there's a cryogenicist that asks themselves that question? I don't actually know. But it's not a question I've ever heard before, which is why I made it a problem set question. Because I know the answer is not out there.
I looked for a while. Let's switch particles for a second and look at the charged particle tracks from a proton. What differences do you see between the proton and the electron charged particle track? So, proton, electron. Proton, electron. AUDIENCE: There's no curve. MICHAEL SHORT: There's what? AUDIENCE: There's no curve. It's straight. MICHAEL SHORT: It's straight. Why do you think it's straight? Why does anyone think it's straight? AUDIENCE: They're bigger. MICHAEL SHORT: They are bigger, more massive. So the same deflection, the same transfer of momentum, to an electron using our beloved hollow cylinder approximation thing, causes less of a change in direction for a proton as it does an electron. The forces are the same. They're both just a plus or minus 1 hitting a plus or minus 1 charge. But the mass is quite different on the proton, so it doesn't get deflected as much, which is why the charged particle tracks are so straight. Now, what are these things here? What are those offshoots? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: They're secondary charged particle tracks. So, let's say a proton hits an electron, that electron can have any amount of energy, probably going to be lower than the proton did. And it's going to cause its own little damage cascade right there. And, just like before, you can track the number of these charged particle tracks moving from 5000 to about 1000, between, let's say, 10 picoseconds and a little less than a microsecond. And once these charged particles have spread out or diffused away, chances are recombination has gone down quite a bit and they're going to go react with other things. And this is a perfect analogy to radiation damage in metal. So, radiation damage in biology is like radiation damage in material science. You have this initial cluster of damage, in materials it's usually vacancies or interstitials, in biology it's charged particles. But when they're in a dense cascade they can recombine with each other.
And the ones that miss each other go off to find either other defects in the material or other atoms in your cells. It's a very fitting analogy. Yeah? AUDIENCE: How come we don't see like a denser [INAUDIBLE] to the proton [INAUDIBLE] electrons? MICHAEL SHORT: Let's see. I don't know if we see the whole charged particle track here. You're right, it doesn't look like the density changes very much. You can't even really tell where the source is. We may not be looking at the whole thing. Here's another question. So, it's a 2 MeV proton. That scale bar is 0.1 microns. Let's do a quick simulation to verify this idea. Luckily we have the tools to do this, as soon as I clone my screen. Let's use SRIM and find out what is the range of 2 MeV protons in water. And if it's more than about a micron, which is what's shown-- well, let's say, that's 2 microns. If it's more than 2 microns, what's shown on the screen, it means we're not seeing the whole track. SRIM. Good, you can see it. So let's say, hydrogen at 2 MeV, going into something consisting of H and O in a ratio of 2:1, make sure its density is correct for room temperature water, and let's look at a range of 25 microns, because I kind of already know the answer. AUDIENCE: [LAUGHTER] MICHAEL SHORT: Much more than 25 microns. So, our initial assertion was correct. Let's actually find out what the range is. Let's put 40 microns. Whew, it's a little more than I thought. Protons in water, at just 2 MeV. Let's fly tons of them. Wait till we get about 1,000. Look at the range. Make it bigger so you can read it. 75 microns. 75.5 micron range. There you go. Let's go back to the big one. So, there you go. If this scale bar is 0.1 microns, you're looking at about 2 of the 75 microns of charged particle track. Interesting, no one picked up that question last year, but I'm glad you did. I'm glad we were able to show you where it comes from. So this will look quite different if you're looking at the end of the charged particle track. Cool.
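That 75.5-micron SRIM result can anchor a back-of-the-envelope scaling law. The Bragg-Kleeman rule says proton range in a given material goes roughly as energy to a power of about 1.8 (the exponent is an assumed textbook value, not from SRIM); pinning the curve to the simulated point gives rough ranges at other energies:

```python
def proton_range_water_um(E_MeV, R_ref_um=75.5, E_ref_MeV=2.0, p=1.8):
    """Crude Bragg-Kleeman scaling, R ~ E^p, anchored to the
    75.5 um @ 2 MeV SRIM result from the lecture."""
    return R_ref_um * (E_MeV / E_ref_MeV) ** p

for E in (0.5, 1.0, 2.0, 10.0):
    print(f"{E:5.1f} MeV proton in water: ~{proton_range_water_um(E):7.1f} um")
```

The exponent being bigger than 1 is the 1/E-ish stopping power at work: doubling the energy more than doubles the range.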
Good question. To look really, really close up, you see a lot more of this branching again. So whenever a proton strikes, let's say, another atom or an electron, you get your own little dense damage cascade. And look at that, not much until the very end when you get this cloud of damage popping off at the end. So, yet more examples of the physics that you've learned popping up in biological systems. The difference is it's water not metal, but otherwise everything's the same. And then we get to what's called G-values. I don't know why it's called G, but I'll tell you what they mean. It's the number of each species, per 100 eV, found later, at let's say, 0.28 microseconds, or typically 1 microsecond, for different particles of various energies. These are relative effectivenesses of these particles at different energies at leaving oxidative byproducts behind. So there's a few things that are wrapped up into these G-values. So, notice that, in this case, here's a G-value for electron energy. At different energies, you'll have different amounts of OH, H3O and such, per 100 eV of energy. So the unit of G-values here, it's like number of chemical species per 100 eV of energy. So it's an energy-normalized measure of the effectiveness of radiation making chemicals. Does that make sense to folks? If not, raise your hand and I'll try to re-explain. OK. AUDIENCE: Please repeat it. MICHAEL SHORT: Yep. So a G-value, it's got units in concentration per unit energy. And it's a measure of how many chemicals a given particle will make as a function of its energy. And these particles are the ones that survive the recombination and end up diffusing to other species. So, these G-values, it's kind of like, how many oxidative species are made that go off and damage other things? Let's look at some trends right here. For things like OH, for electrons, what sort of patterns do you notice in the data? And take a sec to parse some of these numbers. Just look at the top three rows. What pattern do you see?
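The unit bookkeeping just described can be made concrete. A minimal sketch converting a G-value into a concentration per unit dose, assuming G = 2.7 OH radicals per 100 eV, a typical-order figure for gamma radiolysis of water that is assumed here rather than read off the slide:

```python
# Convert a G-value (species per 100 eV absorbed) to a concentration per gray.
G_OH = 2.7                  # assumed: OH radicals per 100 eV (typical for gammas)
EV_PER_JOULE = 6.241509e18  # eV per joule
AVOGADRO = 6.02214e23       # molecules per mole

dose_Gy = 1.0                                        # 1 gray = 1 J/kg absorbed
molecules_per_kg = G_OH * (dose_Gy * EV_PER_JOULE) / 100.0
molar = molecules_per_kg / AVOGADRO                  # mol/kg; ~mol/L for water
print(f"~{molar * 1e6:.2f} micromolar OH produced per gray")
```

So a 1-gray dose makes on the order of a quarter of a micromole of OH per liter of water, which is why the chemistry, not the raw ionization count, dominates the biology.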
AUDIENCE: Starts high and then goes-- MICHAEL SHORT: Starts high, goes low, goes high again. Why do you think that is? Straight from the physics. At super low energies, 100 eV electron, you'll make, on average, 1 OH radical for every 100 eV of energy. As you increase in energy, you start making fewer and fewer per unit-- actually, that's not the one I want to look at. That's a different species. Let's see. No, that is. OK. That follows the pattern that we're looking for. AUDIENCE: Does the high energy include stuff that's created from causing secondary cascades? MICHAEL SHORT: Oh, yeah. This is just the total number from everything. Right? It's just the number of each chemical species left over after a microsecond. So what do you think could cause this initial increase and then decrease and then increase? AUDIENCE: Is it because of the cross-sections of different particles? MICHAEL SHORT: Part of it. The cross-sections that also go into the stopping power. That's part of the answer. So at really low energies, you're already at your stopping power peak. And that way, for the little bit of energy you have, chances are it's going to ionize different things. Then as you increase your energy, you have more and more of that range of the particle in the lower stopping power region. So, you'll have more of the-- let's see. You'll have more and more of that particle-- let me try and phrase this quite well. Let's go back to the charged particle tracks for electrons, and I'll get this-- yeah, here we go. So, when your electron comes in at a really, really low energy, you're in that region right there. Chances are you're going to make a lot of those oxidative byproducts. And then as you go a little higher in energy, you make fewer per unit distance-- or you make fewer per unit energy. You can think of that as the spread, right there. But then also, as you go way higher in energy, your ability to ionize increases.
So you've got that sort of 1 over E term in stopping power making things worse. And you've got that log of E term in stopping power making things better. And if we go back to the data right here, for those top three or four, it tends to follow that trend pretty well. Now what about things like H2O2? What sort of trend do you see there? AUDIENCE: The opposite. MICHAEL SHORT: The opposite. So, I'll give you a hint. H2O2 isn't directly made by radiolysis, it tends to occur by reaction of other radiolysis products. So it's like a secondary chemical, not a primary produced chemical. So, why do you think H2O2 follows the opposite trend? AUDIENCE: It comes from the-- not the decay, but like a reaction from one of the previous ones, that there's more of that first species there, that it hasn't reacted to form it yet. But once it is lowered, that means it's made more of the H2O2. MICHAEL SHORT: Sure. AUDIENCE: And then vice versa. MICHAEL SHORT: Yeah. So, to rephrase what Sarah said, in this energy range right here, you're producing this fairly dense cascade of oxidative byproducts. When those reactions occur, they tend to make things like H2O2, something that's not made directly from radiolysis, but indirectly from recombination of those chemicals. And then as you raise the energy more and more, to like 20 keV, you start making those primary products more spread out. They're not as close to each other. They don't recombine as much. They don't make as much H2O2. They'll tend, instead, to spread out a little more. So more will survive. More of these primary ones will survive, and not react to make as many of the secondary ones. So, how is that explanation fitting with you guys? Cool. So, it's a balance between intermediate energies. You make a whole lot of primary ones, which are so close that they react to make the secondary species much more easily.
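The tug-of-war between those two terms can be seen in a toy function with the same shape as the electronic stopping power, a 1/E factor times a log E factor. The constants A and B below are arbitrary and assumed; only the rise-then-fall behavior is the point:

```python
import math

def toy_stopping_power(E, A=1.0, B=100.0):
    """Toy Bethe-like shape: S(E) = (A / E) * ln(B * E).
    A and B are arbitrary illustration constants, not physical values."""
    return (A / E) * math.log(B * E)

# Scan energies in arbitrary units: ln(B*E) wins at low E (S rises),
# 1/E wins at high E (S falls), with a peak in between at E = e/B.
for E in (0.012, 0.03, 0.1, 1.0, 10.0, 100.0):
    print(f"E = {E:7.3f}   S(E) = {toy_stopping_power(E):7.3f}")
```

The peak of this toy curve is the analog of the stopping power peak the dense cascades come from.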
As you raise the energy of the particles going in, you make more isolated primaries that can't find each other, and they don't make as many secondaries per unit energy. Yeah? AUDIENCE: How come for like the 100 eV H2O2 it's less? Because since it's making a lot of the initial, or the primary, byproducts, wouldn't you expect it to also make a lot of the secondary because they're also close together? MICHAEL SHORT: You might, except at very low energies, our idea of stopping power isn't quite as complete. So, by what other processes can electrons lose energy at really low energies? You could have a deflection without an ionization, right? Just a simple-- let's say, you could have an excitation, you could have just Coulomb deflection, you can have neutralization. You can have all those really, really low energy things that go on, that don't end up producing as many ionizations. Because you need to produce an ionization or an excitation to kick off radiolysis. So then, when you get high enough in energy, and chances are you'll ionize rather than undergo one of these really low energy loss mechanisms, then you start making more of the primaries, but densely, which make more of the secondaries. Then as you go even higher in energy, you still make tons of primaries, but since they're spread out more, since the stopping power is lower, they don't find each other and they don't make as many secondaries. So, let's look at some other numbers and trends, different particles. First of all, for protons and for alpha particles, note here that the scales are in MeV. And yet the G-values for electrons in the keV range and for protons in the MeV range are pretty much the same, on the same order of magnitude. Anyone have any idea why? AUDIENCE: They're heavier. MICHAEL SHORT: They're heavier. And then what does that lead to in terms of a stopping power? AUDIENCE: They're easier to stop. MICHAEL SHORT: They're actually harder to stop.
If they're heavier, then the deflection of an electron doesn't stop them as much. And so that way, more of these proton and alpha radiolysis products are going to be more spread out. So you get the same number per 100 eV, in the MeV range, as you do for electrons at a much lower energy. But then alphas also have this interesting thing that they're doubly charged, so those Coulomb forces, remember it goes by Z squared, are four times as strong. So, let's see, how do they compare? Yeah. There aren't really enough data to draw those nice trends that you could see from electrons. But we do have some other interesting trends in the G-values as a function of temperature. So these right here are G-values for H and OH by gamma rays, which are two primary species. And here we've graphed them as a function of temperature. Why do you think the G-values, or the amount of radiolysis products that survive a microsecond, increase with temperature? What's this a competing force or a balance between? So once these products are made, what are the two things that they can do? Anyone? AUDIENCE: Recombine or diffuse. MICHAEL SHORT: Recombine or diffuse. Good. Which of these will increase much more strongly with temperature? AUDIENCE: Diffusion. MICHAEL SHORT: Diffusion. If they spread out more at higher temperature, then they'll separate from each other and not recombine as much. So a whole bunch will be made, no matter what, in a matter of femtoseconds. But at a higher temperature, more of them diffuse away from each other and survive the cascade, rather than recombining. And so that's why, when you look at any primary species, H2 or H or anything like that, you're going to see an increase in G-values with temperature. What do you guys think is going to happen to these secondary byproducts with temperature? AUDIENCE: Decrease with temperature. MICHAEL SHORT: Decrease. And why do you say so?
AUDIENCE: Well, if they're made from the primary products and the primary products are surviving more because they're separating, then the secondary ones are just going to be less. MICHAEL SHORT: Yeah. If the primary ones are surviving more, you're not going to make as many secondary ones. And that's just what we see. Number of free electrons left, or especially things like the amount of H2O2, it's all going to be in balance. And if more primaries survive, you don't make as many secondaries as a function of temperature. One, these heavy ones are slower to diffuse. But two, they're not made as much because the primaries escape each other's pull and go off to damage something else. In a reactor, this would be metals causing oxidation. In a body this would be you. And so let's get into the materials aspect of this to give you a more-- a less biologically damaging view of what radiolysis can really do. It's quite relevant to all reactors, including the Fukushima reactor. The idea there is that the reactor was flooded with seawater, which introduces chlorine, which greatly changes the balance of radiolytic byproducts. And this can actually be directly studied. There's an experiment just a few years ago-- two years ago, where they wanted to figure out, what is the influence of radiolysis on corrosion? If you're making all of these Hs and OH-s and H2O+s, does it change the corrosion rate of materials in the reactor? So they built a high-pressure cell, that they fill with high-pressure, high-temperature water. And they've got this little disk of metal with a thin membrane right there. It's thin enough that protons can pass through it and cause radiolysis to occur right in this little pocket where the water is. And so where the protons are, you get radiolysis. Where the protons aren't, you get regular old water corrosion. And the results are pretty astounding. You can see the irradiated zone in extra oxide thickness.
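The temperature trend a moment ago, diffusion beating recombination as temperature rises, is Arrhenius in nature and easy to sketch. The prefactor, activation energy, and 1-nanosecond window below are assumed illustration values, not measured ones; the point is only that the diffusion radius grows steeply with temperature:

```python
import math

KB_EV = 8.617e-5  # Boltzmann constant, eV/K

def diffusivity(T_K, D0=1e-7, Ea_eV=0.2):
    """Arrhenius diffusion coefficient; D0 and Ea are assumed illustration values."""
    return D0 * math.exp(-Ea_eV / (KB_EV * T_K))

for T_C in (25, 150, 300):
    T_K = T_C + 273.15
    r_nm = math.sqrt(6 * diffusivity(T_K) * 1e-9) * 1e9  # random-walk radius after 1 ns
    print(f"{T_C:3d} C: D = {diffusivity(T_K):.2e} m^2/s, 1-ns radius ~ {r_nm:.1f} nm")
```

More travel per lifetime at high temperature means more primaries escape the cascade, which is exactly the rising primary-species G-value trend on the slide.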
So you can see where the protons were because radiolysis sped up the corrosion rate as a single effect. Right nearby, not 100 microns away, was the same water, at the same temperature and pressure, just no protons and no radiolysis. To look at a cross-section, you can very clearly see the difference in oxide thickness way out in the unirradiated zone or in the irradiated zone. And you can tell right here how many protons there were, until right over here where there were none. So it's a very striking example of, well, this is what radiolysis does in reactors. And we actually do things in reactors to suppress radiolysis. We inject hydrogen gas. So there's a hydrogen gas overpressure injected. One of the main reasons is to suppress radiolysis. Because if I jump back to any of these reactions, a lot of them involve H2. And if you dump a whole bunch of H2 into the reactor, you push the reaction backwards in the other direction. From straight-up chemistry, if you add more of a product, you push the equilibrium back toward the reactants. That's why we do this in terms of injecting hydrogen into light water reactors. And if you look at the amount of hydrogen injected in a PWR, a pressurized water reactor, which comprises 2/3 of the reactors in the country, it's like 20 to 30 cubic centimeters per kilogram of dissolved hydrogen. That's quite a bit. And the whole idea there is to suppress radiolysis and suppress corrosion. So I find it to be pretty cool. So a knowledge of G-values can keep your reactor from corroding. Then let's get into the biological effects. In the end, for the long-term effects it's all about what happens to DNA. Because if a cell mutates, it can either kill the cell so that it can't replicate, or the mutation might make some sort of a change and change the cell's function. As you may imagine, a lot of this stuff is done in terms of LET, linear energy transfer. Again, another word for stopping power.
If you look at the density of these damage cascades as a function of stopping power, LET, you can see that for high-energy electrons, or beta particles, they just bounce around with a lot of distance between interactions, causing relatively little damage on the way. For Auger electrons, again electrons but at a much lower energy, they're at the end of their stopping power curve and they cause a lot more damage wherever they're emitted, because right away they're going to make a much denser damage cascade. Alpha particles just go slamming through. It's like rolling a tank through your cell, pretty much. Because there's going to be a ton of interactions from charged particle interactions, you won't really change the path of that alpha because an electron imparts very little momentum to an alpha particle. And if DNA happens to be in the way, it's going to get damaged. This is a lot of the reason why there is relative effectiveness of different types of radiation. We talked last week about these quality factors: gamma rays are 1, electrons tend to be pretty close to 1, alphas tend to be 20. Because the same energy alpha particle will impart a ton more damage locally than the same energy beta particle. So can you guys see visually where these quality factors come from? Cool. And there's two types of DNA damage, direct and indirect. Direct damage is what you might think: radiation comes in and ionizes something in the DNA, either causing, let's say, a bridge between two thymine bases, like a kink in the DNA, or destroying it or doing anything. But most of the damage is done indirectly because the volume of DNA in your cells is extremely low. Has anyone ever done the old high school bio experiment, where you extract DNA from onions? AUDIENCE: Yes. AUDIENCE: Strawberries. MICHAEL SHORT: Strawberries. Anything? So how did you do it? Anyone remember how this was done? AUDIENCE: Some chemicals and stuff.
AUDIENCE: You have to mix in good solution with a bunch of good stuff and [INAUDIBLE] MICHAEL SHORT: So you take, let's say, an onion, mix it in solution with a bunch of stuff, and you end up with this gigantic booger, which happens to be DNA. It's like a three-foot snot thing. But what was the volume of the DNA compared to the volume of the onion? AUDIENCE: Quite small. MICHAEL SHORT: Quite small. There's not a lot of DNA in cells. So the direct damage route, while still there, comprises very little of the damage done to tissue. Mostly it's indirect because surrounding all DNA is the rest of your cellular fluid, which consists mostly of water. And as we've seen all today, water undergoes radiolysis. Those radiolytic byproducts can diffuse, find their way to DNA, and cause the same sort of ionization that direct radiation would do. And since that volume is much larger, let's say the hollow cylinder of water surrounding your DNA, this is the most likely route to cellular damage. And-- actually, I want to skip ahead to something real quick. You can actually use that to your advantage because it can kill tumor cells. So tumors are rapidly dividing masses of cancer cells. If those cells are rapidly dividing, then DNA is being replicated much more readily. So you can inject something that will bind to DNA, like this little chemical right here, this Iodine-125, whatever, whatever, which mimics thymidine, something that would be found in your DNA, but absorbs radiation much better. So you can inject this iodine-containing organic molecule, which binds somehow to DNA. I'm not going to even guess how it works. But if you want this DNA to get preferentially damaged, well, the tumors are replicating faster, so they're going to incur more damage from the same amount of radiation. So the same process that causes cancer can be used to cure cancer, interestingly enough. And so, good, we do have about 10 or 12 minutes to talk pseudoscience.
So now that you know a little bit about how radiation can cause cancer and mutations and you know a lot of the physics behind how much energy you need to cause an ionization, let's start knocking off these questions one by one. So, this field, more than any, is fraught with garbage, absolute garbage science. I won't even say pseudoscience because that almost makes it sound half legit. Garbage, misinterpretations, lies, poorly done studies, misinterpretations of abstracts and conclusions. And today I'd like to focus on cell phones and do they cause cancer? Very hot topic. There's lots of people with predetermined agendas that want to say all electromagnetic radiation is bad and we should go back to an agrarian society where nothing happened. Well, I'll give you a hint, Cambodia tried that and it didn't turn out too well. People have interesting notions of what's real and what's not. So let's start looking at some of these. There's an article written by this fellow, Lloyd Burrell, around November, 2014. It looks like it was republished somewhere in 2016. Let's just start looking at the facts. So, what I want to start doing here is cultivating your nose to be able to smell bullshit because this is a lot of what you're going to be doing, in terms of public outreach. As nuclear scientists you will be called on to provide expert advice and say whether things are real or not, explain why, and do it in an empathetic way so as not to make people feel stupid. Because it's very easy for someone to read this and think, yeah, I should be afraid. Cell phones cause cancer. It's a natural reaction to feel. Let's take a look at some of these facts. Cell phones emit microwave radio-frequency radiation. True or false? AUDIENCE: True. MICHAEL SHORT: True. Yeah. These are microwave emitters, or RF emitters. What sort of energy is microwave radiation emitted at? Just give me an order of magnitude, MeV, eV, keV. AUDIENCE: MeV? MICHAEL SHORT: Milli-eV. Fractions of an eV.
It's far beyond the visible range in the lower energy spectrum. Can a milli-electron-volt photon cause an ionization directly? AUDIENCE: No. MICHAEL SHORT: No. Microwaves and RF are non-ionizing radiation. They can cook things by heating up water, but they do not cause ionizations the way that ionizing radiation does. This radiation has an ability to penetrate our bodies. True or false? AUDIENCE: Yeah, [INAUDIBLE] True. MICHAEL SHORT: True. It gets through us, right? Radio waves are going through us all the time. Our governments do virtually nothing to protect us from these dangers. AUDIENCE: Technically, but what dangers? MICHAEL SHORT: Technically, true. Yeah. So this is a classic example of fear mongering, taking a bunch of facts, putting them together to elicit an emotional response that is incorrect. And because the emotional part of the brain kicks in far faster than the logical part of the brain, that's how we're wired, it elicits a reaction with a predetermined conclusion. And yet, there is strong evidence, multiple peer-reviewed studies-- I'm not even going to read the rest of the sentence because I don't want to go on record saying it as if it were true. Let's, instead, look at the studies, because that is the stuff that we should trust. AUDIENCE: [INAUDIBLE] 44 studies. MICHAEL SHORT: 44 studies cited. And let's look at some of the reasons. Let's see, there's a little bit-- I have to make it a little smaller. Can you guys still read that at the back? Or actually, no, make it a little bigger and forget the sidebar. That's better. OK. I was going to pick a couple of these to show you and I started going through them and my favorite ones are all of them. Most of the studies are perfectly legitimate, some of them are not. Most of the interpretations by this Lloyd fellow are absolutely wrong, and either done ignorantly, which is somewhat forgivable-- it can be hard to parse these studies-- or intentionally. We don't know which one. Let's look here.
"Telecoms giant," et cetera, "commissioned an independent study--" 404, not found. Let's go to the next one. We can't conclude anything from that. The Interphone Study found that: "regular cell phone use significantly increased the risk of gliomas," some type of tumor, "by 40% with 1,640 hours or more of use." Let's look at the key figure, taken from this paper, and blow it up so you can see it. What do you guys notice about this figure? AUDIENCE: [INAUDIBLE] AUDIENCE: It's so [INAUDIBLE]. MICHAEL SHORT: Forget the low resolution. We can't knock that because it might be a copy. No error bars. And what does most of this cell phone use-- and the unit not shown here is, I think it's like hours of use? AUDIENCE: It's all about the same. It's basically all the same. MICHAEL SHORT: Yeah. AUDIENCE: [INAUDIBLE] by any chance [INAUDIBLE] AUDIENCE: The never is actually closest to the 1. MICHAEL SHORT: Except for this one. Blue line is odds ratio. A lot of these things are given in OR, or odds ratio. Let's say the fractional-- or let's say the multiplying factor for increased risk of finding cancer in the variable group compared to the control group. And control and variable are interesting topics I want to make sure people have. So we have the Interphone Study cited in many of these papers. Let's see. OK. Garbage, garbage, opinions, opinions. Let's go find the study. This is something I wish people did more, is go to the study itself. Yeah, the Interphone Study. AUDIENCE: Overall, no increase in risk. [LAUGHTER] MICHAEL SHORT: We'll make this bigger to make it more obvious. So many people-- this article's been cited almost 500 times. I don't know in what capacity because I haven't looked up every citation. But a lot of what this site and other sites do is cite the Interphone Study to say cell phones cause cancer. Read the conclusion. AUDIENCE: Rise of an era. Prevent [INAUDIBLE] interpretation. MICHAEL SHORT: Yes. So this study is not a bogus study. 
The study was done correctly, reporting ORs, these odds ratios, with 95% confidence intervals. If you just look at the numbers themselves, oh man, 1.15 odds ratio, 15% higher incidence of cancer, but with a confidence interval that includes values both below and above 1. So you cannot conclude with 95% confidence that there's a real increase. And the authors very honestly say, no conclusion can be drawn, requires further investigation. What does this Lloyd fellow say? AUDIENCE: Cancer. MICHAEL SHORT: Cancer. Yeah. An either accidental or deliberate misinterpretation of the data. OK, let's go to numbers 2 and 3. I don't need those anymore. Let's see, number 2. Oh, we did number 2. Number 3, again from the Interphone Study. We can discount that because we've now read the conclusion of the study and looked at a bit of the difference. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Number 4, "Harmful Association Between Cell Phone Risk and Tumors." AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Let's see. AUDIENCE: It says there's possible AUDIENCE: Possible. Studies providing a higher level of evidence are needed [INAUDIBLE]. MICHAEL SHORT: Again, honest authors. I applaud the authors for taking a controversial topic, doing a fair bit of data, with at least enough of a meta-analysis, I think the sample size is OK, and then saying, higher level of evidence is needed. What does the internet say? It takes the one sentence that they want to support their predetermined conclusion. Very dishonest, if you ask me. Number 5. Oh, this is fun. OK. What does number 5 say? AUDIENCE: Does this not just make you angry? MICHAEL SHORT: Huh? AUDIENCE: Does this not just make you angry? MICHAEL SHORT: Yes it does make me angry. This is why I'm showing it to you. It's infuriating, right?
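The by-eye significance check being done here can be written out explicitly. A minimal sketch computing an odds ratio and its 95% Wald confidence interval from a 2x2 table; all counts below are invented for illustration and chosen to land near the paper's OR of 1.15:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI for a 2x2 table:
    a = exposed cases,   b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts, invented for illustration only:
or_, lo, hi = odds_ratio_ci(60, 940, 52, 948)
significant = not (lo <= 1.0 <= hi)
print(f"OR = {or_:.2f}, 95% CI = ({lo:.2f}, {hi:.2f}), significant: {significant}")
```

An OR above 1 means nothing by itself; if the interval straddles 1, as it does here and in the Interphone data, you cannot claim an effect at that confidence level.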
But compare what the folks on the internet will say, the sentence that they want to say, with the actual study, which they do give you the link for: "a consistent pattern of increased risk associated with wireless phones." What does the study say? Take a sec to parse this. I'll make it a little bigger. When you see an odds ratio of, let's say, greater than 1. And see a confidence interval-- AUDIENCE: Oh, holy crap. AUDIENCE: Oh! [INAUDIBLE] MICHAEL SHORT: Yeah. Again, another odds ratio and another confidence interval. Another odds ratio, another confidence interval. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Interesting. The one interesting part is for what they call ipsilateral cumulative use, which means a tumor found on the same side of the head as the cell phone, there is actually a confidence interval that seems to be significant. So, I'm not going to trash this study. I'm going to say it's not quite conclusive. It doesn't go out and say cell phones cause cancer, despite this fellow coming out and saying cell phones cause cancer. OK, moving on to number 6, was a 404. Let's just confirm. Wasn't able to get it an hour ago. Oh, it's back. OK, let's see what it does. I don't even know what this one's going to do. AUDIENCE: [INAUDIBLE] AUDIENCE: Potential [INAUDIBLE] AUDIENCE: Possible association with [INAUDIBLE] AUDIENCE: What's heavy mobile phone use? MICHAEL SHORT: Heavy mobile phone use, yeah. Well, they'll define that somewhere in the article. So, some of these studies, it's like, OK, there are interesting viewpoints to be seen. They shouldn't be ignored just because we have this predetermined conclusion that cell phones don't cause cancer. It's important to go and actually look at the studies and decide for yourself. Let's get into the fun ones. Number 7. "A recent study on 790,000 middle-aged women found that, "women who used cell phones for ten or more years were two-and-a-half times more likely," et cetera, et cetera.
"Their risk increased with the number of years they used cell phones." Let's look at the study. OK. That's not the study, so we need to go find the study. And that's another news article about the study; we need to go find this study. Ah, finally. AUDIENCE: The study. MICHAEL SHORT: The study. AUDIENCE: The study. MICHAEL SHORT: Read the conclusion. AUDIENCE: What the-- I'm so bad. [LAUGHTER] AUDIENCE: I don't think the people writing these articles are actually, like, reading these-- MICHAEL SHORT: No, I don't think so either. AUDIENCE: They just look at the title and they're like, [INAUDIBLE] MICHAEL SHORT: So the best thing that you can conclude about these sorts of people is that they're not reading the studies and reporting on them. If they are reading them and not getting it right-- no, not everyone can parse the science. If they're reading them, understanding them, and cherry-picking the facts in order to support their conclusion, that to me should be criminal. We do live in a country where there's freedom of speech. You're free to say whatever you want, as long as it's not hate speech of various kinds. It doesn't have to be right. You also don't have to listen. So just because you have freedom to talk doesn't mean people have an obligation to listen. And this is the problem with a lot of this. So I think my-- yeah, my note for this study was just kind of the F word. It was, how do you get that conclusion from this internet article, which wrote an article about an article about an article about a study, when the conclusion says, with an excellent sample size, not associated? OK. We have like five or seven minutes left, so let's skip ahead. I had a fun one for number 12, cancer of the pituitary gland. Let me get rid of the other stuff. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Oh, does that look like a surprisingly familiar figure? AUDIENCE: Cool. MICHAEL SHORT: It's another article about the same study. Let's just confirm.
AUDIENCE: [INAUDIBLE] articles about-- MICHAEL SHORT: Oh, look at that. AUDIENCE: [INAUDIBLE] papers. MICHAEL SHORT: That right there was the article written about the study, where the other link was an article, written about the article, written about the study. OK. What else? Next one. Let's just keep going in number order. Israeli study about thyroid cancer. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: OK. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: This appears to be a blog, so let's search for the word "Israel." AUDIENCE: [INAUDIBLE] MICHAEL SHORT: OK, but first the news article. So take a sec to parse some of this. "The incidence of thyroid cancer has been increasing rapidly in many countries, including the US, Canada, and Israel." I mean, one thing to say-- let's say, case control research on this topic is warranted. Sure. No one's going to refute a claim that, hey, maybe we should study something properly, right? Let's go a little further down. Let's try to find the actual study. Where is this study? Interesting. The main point of the study is that thyroid cancer and cell phone usage are going up at the same time. AUDIENCE: Wow! MICHAEL SHORT: This is the point where I like to say correlation does not imply causation, and hammer that point home by going to one of my favorite blogs, Spurious Correlations. You can find any data set that correlates with any other data set. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Let's look at some examples. US spending on science, space, and technology correlates with a 99.79% correlation of suicides by hanging, strangulation, and suffocation. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Correlated, yes. Causal, I don't think so. [INTERPOSING VOICES] MICHAEL SHORT: Yeah. Divorce rate in Maine correlates with per capita consumption of margarine. AUDIENCE: [LAUGHTER] Michelle, [INAUDIBLE] margarine. MICHAEL SHORT: You can find a link between anything and anything else if you just search the data long enough without searching for a mechanism or a reason. 
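The correlation-is-not-causation point is easy to demonstrate numerically. A minimal sketch with invented numbers (not real subscription or incidence data): any two series that both trend upward will show a Pearson correlation near 1, mechanism or not.

```python
import math

# Two unrelated, made-up yearly series that both happen to trend upward.
# Illustrative values only -- not real subscription or incidence data.
phone_subscriptions = [10, 25, 45, 80, 130, 200, 280, 370]    # arbitrary units
thyroid_incidence   = [4.1, 4.6, 5.0, 5.9, 6.8, 8.0, 9.1, 10.5]

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(phone_subscriptions, thyroid_incidence)
print(f"r = {r:.3f}")  # very close to 1, yet nothing links the two series
```

This is exactly the Spurious Correlations game: shared upward trends alone will produce a near-perfect r.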
AUDIENCE: That's cool. Can we look at the age of Miss America below this? MICHAEL SHORT: Oh, OK. Age of Miss America correlates with murders by steam, hot vapors. [LAUGHTER] AUDIENCE: [LAUGHTER] MICHAEL SHORT: Clearly, we should ban the Miss America pageant or make them older. AUDIENCE: Yeah, [INAUDIBLE]. MICHAEL SHORT: Or the other way around, make them younger. Maybe this is why we have toddlers in tiaras, it's to stop murders by steam. Oh, my God. OK. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: So this is, again, the point where you have to ask yourself, what are the other confounding variables in this study? Why else could thyroid cancer be going up? Anyone? I can probably come up with like a hundred different possible reasons. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Any sort of other chemicals? Let's say, more industrial runoff, more urbanization, smog inhalation, some amount of, let's say, I don't know, iodine released from Chernobyl making its way through. Now, that would have had like an 8-day half-life. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yeah, that's also got to pretty much decay by now. Yeah, there could be any number of reasons. And just to say cell phones and thyroid cancer are correlated is like saying this. What else? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: This I think might actually have something to-- AUDIENCE: [LAUGHTER] MICHAEL SHORT: There might be a link here. Revenue generated by arcades correlates with computer science doctorates. Again, just a correlation. AUDIENCE: [INAUDIBLE] AUDIENCE: Sociology doctorates-- [LAUGHTER] MICHAEL SHORT: Ah, look at the amazing-- it's got all the same humps and everything. All right, I think I've made the point. AUDIENCE: Actually, I like the margarine and the divorce rate one. MICHAEL SHORT: Let's go on to some of the other studies, let's say, number 15: "Of 29 cases of neuroepithelial tumors, cell phone users accounted for 11 of them." 11 of the 29 people in the study that got this type of tumor used cell phones.
What's wrong here? AUDIENCE: Who doesn't use cell phones? People use cell phones. Everybody uses cell phones. They don't think about anything else that could have happened? MICHAEL SHORT: No, no. Here, I think the study is flawed. What is the worst part about this study? AUDIENCE: [INAUDIBLE] AUDIENCE: It's only 29 cases. AUDIENCE: It's 29 cases. MICHAEL SHORT: 29 cases, sample size. If you get 11 out of 29 and say half of the tumors we saw were attributed to cell phones, that is not a proper conclusion. AUDIENCE: How are you going to [INAUDIBLE] it to a cell phone [INAUDIBLE]? MICHAEL SHORT: Let's see, number 17. Ah, OK. Another Israeli study that talked about parotid gland cancers and salivary gland cancers. My note to this is read the last sentence. AUDIENCE: [LAUGHTER] [INAUDIBLE] AUDIENCE: Like, I'm sure there's other factors [INAUDIBLE] [INTERPOSING VOICES] AUDIENCE: They cause cancer. MICHAEL SHORT: The blog says, cause cancer. The data says, no causal association. So again, almost criminally ignorant. How many times did you have to miss the last sentence, the conclusion of the article, to pick the part that you want? AUDIENCE: But everything you read on the internet is true. You know, it's [? illegal. ?] MICHAEL SHORT: All I can say is everything that you read on the internet was written. That's the best I can say. Number 20, we don't even have to go to the study here. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Oh, boy. AUDIENCE: [INAUDIBLE] machine learning [INAUDIBLE].. MICHAEL SHORT: Let's check the study to make sure that the quote is actually correct, but before-- AUDIENCE: [INAUDIBLE] Oh, my God. MICHAEL SHORT: Four women. AUDIENCE: It's just the one. AUDIENCE: Study four women. Looks like it might [INAUDIBLE] MICHAEL SHORT: Yeah, by the prestigious publication, Hindawi, which sends me more emails than I read their articles. So let's look at the abstract. 
All four cases-- they are case studies-- have a striking similarity. How hard do you think it would be to find four women with a certain type of breast tumor? There's a lot of women in the world, right? AUDIENCE: Yes. MICHAEL SHORT: And breast cancer is one of the leading causes of cancer in women. It wouldn't be hard to cherry-pick four people to get the same conclusion you want. Oh, and there's another correlation: out of 108 billion humans that have ever lived and have been exposed to ionizing radiation, all of them died at some point. AUDIENCE: [LAUGHTER] At some point. MICHAEL SHORT: At some point, yeah. Every human that's ever lived has died. And every human that's ever lived has been exposed to ionizing radiation. AUDIENCE: [INAUDIBLE] AUDIENCE: It must be true. [INAUDIBLE] MICHAEL SHORT: Perfect correlation, no causation. Let's see, two more. I think we have time for two more. This is kind of fun. An eye cancer study. All right, let's just go-- "found elevated risk for exposure to radio frequency transmitting devices." AUDIENCE: Are these real studies? Don't the authors get mad that people are using their studies wrong? MICHAEL SHORT: I'm sure the authors do get mad, but what are you going to do about some person on the internet, right? You can send a nasty letter to the magazine, which might reject it as hate mail. OK, on the blog. AUDIENCE: [INAUDIBLE] very strong-- MICHAEL SHORT: What does it say? Elevated risk for exposure in the study. AUDIENCE: People only get excited by some crazy person. AUDIENCE: [INAUDIBLE] it's about. [INAUDIBLE] MICHAEL SHORT: I don't think I have to make my point anymore. We've gone through about half of them. I encourage the rest of you guys to go through the other half. And to people like this Lloyd Burrell, I say check your facts. What you're doing is criminally incompetent.
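The small-sample objection raised for the 11-of-29 study can be made quantitative. A rough sketch using the Wilson score interval (my choice of method, not anything from the study): with n = 29, the 95% confidence interval on the underlying proportion is enormous, and this still ignores the bigger problem that there is no comparison to the cell phone usage rate of the general population.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(11, 29)
print(f"11/29 = {11/29:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# The interval spans roughly a third of the whole 0-1 range --
# 29 cases simply cannot pin down the proportion.
```

With a few hundred cases the same fraction would give a far narrower interval; the width scales roughly as 1/sqrt(n).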
With the way that people are misleading the public to reach whatever foregone conclusions they have-- from their emotions, or their funding sources, or whatever the reason may be-- by misquoting facts, they're absolutely misleading people and spreading false science. Because, to me, the most exciting moments in science don't end with the words, "I told you so," but start with the words, "that's interesting." So just because the studies that you find don't support your predetermined conclusions doesn't mean you should reject them. It means that you might have to change your idea. So, on that note, I'd like to stop here. We'll come back on Thursday and go over the short- and long-term biological effects of radiation and look at some more garbage science. Yeah? AUDIENCE: How do you feel about those wireless chargers they have now? It's like an inductive charger, so it uses like a low-frequency, strongish magnetic field. MICHAEL SHORT: Mm-hmm. AUDIENCE: And people are like, oh, my God. That's so scary. MICHAEL SHORT: I would just say go to the studies. It's very easy to say, put a bunch of rats on a cell phone charger, turn it on, and see what happens. I mean, the data doesn't lie. The reason might be a little hard to figure out. Yeah. Yeah. So, I mean, another thing is, when people have a predetermined-- I know it's a little past 10:00, but no one's gotten up, so I'll keep ranting. So a lot of this neo-environmentalism going on has the predetermined conclusion that only sources of power that are light on the Earth, like solar and wind, that are renewable and such, are the way to go. And they immediately dismiss nuclear as not part of the environmental solution, despite it being part of the environmental solution-- a large source of power that's very efficient and doesn't emit any CO2. It might surprise them to know that manufacturing wind turbines is a major source of radioactivity. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Anyone want to guess where? AUDIENCE: Rare-earth magnets.
MICHAEL SHORT: Yes, thank you, rare-earth magnets. The major cause of wind turbine failure in the last decade has been the gearboxes breaking down. Because in order to extract power, you have to gear down those giant turbines by quite a bit. And those gears, 300 feet up in the air, tend to break down, and they're hard to maintain. How do you fix it? Make stronger magnets. Put in rare-earth magnets that electromagnetically harvest the energy instead of gearing it down, and then you don't have mechanical things grinding. What are rare-earth magnets made out of? AUDIENCE: Rare-earths. MICHAEL SHORT: Rare-earths. Lanthanides, which happen to be found with actinides-- thorium, whatever actinium exists, radium, uranium-- things with similar chemistry. What do you do when you extract the rare-earths that you need from the rare-earth ore? You ditch the remains, which are concentrated sources of these radioactive byproducts. Where do most radioactive-- I'm sorry, where do most rare-earth magnets come from? AUDIENCE: China. MICHAEL SHORT: China. How is China's record on environmental practices? AUDIENCE: Not [INAUDIBLE]. [INAUDIBLE] [INTERPOSING VOICES] MICHAEL SHORT: Spotty, at best. AUDIENCE: Questionable. MICHAEL SHORT: Yeah. So, again, one of those things where people say, oh, wind power has absolutely no effect on the environment. Check the radioactivity of making windmills. AUDIENCE: I want you to tell the Sierra Club. MICHAEL SHORT: I don't know if the Sierra Club would listen. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: I have heard murmurs or rumors of them coming around to the idea of nuclear power. There's an article that said they switched positions, then there was a counter-article that followed a day later, saying, no, that was a rogue actor. They don't reflect the views of the Sierra Club.
The problem is, with all these neo-environmentalists and cell-phones-cause-cancer people and food-irradiation-is-evil people, you'll find them cherry-picking data to support the conclusion that they already felt they wanted. And when confronted with overwhelming evidence to the contrary, they don't change their view. And that to me is the best thing about science. If you prove to me that I'm wrong, I will say, thank you, not [INAUDIBLE]. AUDIENCE: [LAUGHTER] MICHAEL SHORT: So, there you go. All right, I'll see you guys on Tuesday.
MIT_2201_Introduction_to_Nuclear_Engineering_and_Ionizing_Radiation_Fall_2016 | 25_Review_of_All_Nuclear_Interactions_and_Problem_Set_7_Help.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: Well, as promised, we're gonna cover no new material today. We've just hit the end of part two of the course, which I think you'll agree with me, was probably the most technically challenging part. Who would disagree, I wonder? I didn't think so. So I wanted a bit of review and help you guys out with problem set seven. Since I've got a couple of fun problems, I think they're fun because they're actually fairly realistic, like the last one, will it blend, the AP 1000 edition. Well, you'll actually analyze if given the actual specification sheet of an AP1000 reactor, which is a modern reactor that's being built now, can you determine it's k-effective using the the two energy group approximation. And just to show you guys that this isn't crazy, I've got the AP1000 spec sheet up here. So who got a chance to take a look at p-set seven? Good, that's a lot of hands. All right, I highly recommend everyone look at it ahead of time. Because it is the doozy, and it will be the last doozy of this course. So part three of the course is lighter. Because most of your other courses are going to go nuts after Thanksgiving, as far as I know, right? Yeah, this one's not. So I'm doing my best to equalize your total course load this semester by making this course crazy now and lighten up when final season comes. AUDIENCE: The real MVP. MICHAEL SHORT: So I've got the spec sheet for the AP1000 right here. 
That actually goes over a description of what the core is like, the materials that are used, how many fuel rods there are, how many assemblies there are, basically what the core is made of. And what I want to do is jump to the end, where they actually talk about the analytical techniques used in the core design. And what I want to show you is, for the nuclear design of the core to get axial power distributions-- look at that, two-group diffusion theory. The same stuff that we just learned right here. For axial power distributions and control rod worths, one-dimensional two-group diffusion theory-- these are problems that you can solve with the stuff that we've done over the past week and a half. They have a little more complexity in that they keep all the spatial variation in there, so it's probably done with computers; however, they use the same equations that you do. So Westinghouse is making the same assumptions that we made in this course. They have a lot more complexity in that-- you know, we just had macroscopic cross sections as a function of energy, but you may also think about them as a function of position and a function of temperature, since, as we started alluding to on-- it wasn't today-- on Wednesday-- no, Tuesday. What day is it now? It's Thursday. Thank you, so it was definitely on Tuesday. We started talking about how cross sections change with temperature. And so really, if we want to go crazy on that neutron transport equation, these cross sections will be functions of energy, temperature, and position. And so that's how this reactor would have actually been designed. But you're going to do a simpler approximation and take all of the information about the core in an AP1000, blend it-- so homogenize it-- figure out what the average atomic fractions of all the different things in it are, and calculate its k-effective, which I think is a pretty cool problem to do. I haven't seen it done in the courses here.
But I want to see how it turns out. And you might find it's surprisingly different from one, because we make a lot of simplifications that actually matter quite a bit. But to help you parse this spec sheet-- well, actually, why don't I show you the problem first. And we'll go through a few of the ways that I've simplified the problem to make it not so tedious. But I also want to make sure you understand what to do. So the simple statement is, calculate k-effective of the AP1000 using two-group diffusion theory, which means you've got a criticality condition from the two-group approximation. And I'd like to go over what that is right now. So let's write out-- if we had two groups-- so we have two energy group equations for gains and losses of neutrons in the fast and thermal group. What would it look like? What are the gain terms? AUDIENCE: Sigma [INAUDIBLE] fast. MICHAEL SHORT: Yep. There's going to be some average nu times sigma fission fast times flux fast plus sigma fission thermal times flux thermal. Any other sources of neutrons into the fast group? I don't think so. What about sinks? How do neutrons leave the fast group? AUDIENCE: Absorption. MICHAEL SHORT: Yep, by absorption. How else? AUDIENCE: Scattering. MICHAEL SHORT: Yeah, scattering from the fast to the thermal group. How else? AUDIENCE: Leakage. MICHAEL SHORT: Leakage. So there'll be some diffusion constant fast times some fast geometric buckling squared. And, yeah, I think that's it. No, that needs a phi fast as well. OK, cool. What about the thermal group? What are the sources of thermal neutrons? AUDIENCE: Scattering. MICHAEL SHORT: Yep, scattering from the fast group. So this same term right here, fast to thermal times phi fast. And what are the losses? AUDIENCE: Leakage and absorption. MICHAEL SHORT: Leakage and absorption-- they look pretty familiar. Sigma absorption thermal phi thermal plus D thermal Bg squared phi thermal. And that's a "t." OK.
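The balance equations just dictated, written out (f and t label the fast and thermal groups; the k-effective expression is the standard two-group result obtained by dividing the fission source by k and solving the pair):

```latex
% Two-group neutron balance: gains = losses in each group
\nu \Sigma_f^{f} \phi_f + \nu \Sigma_f^{t} \phi_t
  = \left( \Sigma_a^{f} + \Sigma_s^{f \to t} + D_f B_g^2 \right) \phi_f
\qquad \text{(fast)}

\Sigma_s^{f \to t} \phi_f
  = \left( \Sigma_a^{t} + D_t B_g^2 \right) \phi_t
\qquad \text{(thermal)}

% Divide the fission source by k, eliminate the flux ratio, and solve:
k_{\text{eff}} =
  \frac{\nu \Sigma_f^{f}
        + \nu \Sigma_f^{t} \, \dfrac{\Sigma_s^{f \to t}}{\Sigma_a^{t} + D_t B_g^2}}
       {\Sigma_a^{f} + \Sigma_s^{f \to t} + D_f B_g^2}
```

The thermal equation fixes the ratio of thermal to fast flux, which is why k-effective can be written without either flux appearing.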
The hard part in this problem is going to be doing these averages. This is the part that we haven't explicitly done on the board and I want to show you. Yeah? AUDIENCE: Are those Ts or Fs, the first term of the second half. MICHAEL SHORT: This one? AUDIENCE: The top equation. MICHAEL SHORT: The top equation. AUDIENCE: Yeah, right there. MICHAEL SHORT: Uh, that should be a-- I'll make the Fs really curly. Curly. And those are straight Ts. These are curly Fs. OK, great, thank you. So the hard part is going to be doing cross section averages. We've just kind of written them as, hey, they're average cross sections. And an average cross section would look something like the integral from a minimum to a maximum of the cross section as a function of energy times the flux over the same integral without the cross section. This is where I want to point out something that I want you to remember for the rest of this course and the rest of your life. You don't have to do things analytically if you don't want to, unless it's explicitly stated that you have to. So part of this piece, that is to drill in the idea that it's the future, we have computers. And you can do numerical integration with data. Remember on one of the second problem sets, I showed you guys the web plot digitizer? How you can extract information from a printed graph? Well, a lot of times, what you'll already have is that data. And you'll have to then integrate that numerically in something as simple as Excel or as complicated as Matlab or something worse-- whatever tool you choose to use. I'll be using Excel because it's kind of the lowest common denominator. And so to show you guys you don't need to know any fancy software to actually solve these problems. So let's go through getting some of this data right now. Let's say that the cutoff between fast and thermal-- so if we were doing a fast cross section-- the cutoff would be at 1 eV. 
And our max would be, let's say, 10 MeV, which would be the top of the fission birth spectrum or the chi spectrum. 1 eV, and that would be 10 MeV. So all that's left is you need tabulated values for the cross sections and the fluxes. And then you can perform this numerical integral. I have given you tabulated values for the fluxes, wherever that is. Yep. Use the attached AP1000 tabulated neutron flux profile, which opens, that's awesome. So I've given you the approximate neutron flux in neutrons per centimeter squared per second as a function of neutron energy. And so you can tell for low neutron energy, there aren't any ultra-cold neutrons in this problem, though there are in the problem right before. Because, remember, the "we'll see?" AUDIENCE: Yeah. MICHAEL SHORT: Now is when we'll see. But if we scale down, we get down to the thermal regions where it's in the eV levels. You start to see pretty significant neutron fluxes in the realm of 10 to the 14 neutrons per centimeter squared per second. So that value of 10 to the 14 for flux that we've been using in all our previous problem sets-- it's because that's what we actually get. This flux spectrum and the picture of it that we have here was taken from the MIT reactor because it's representative of a pressurized water reactor. The only difference here is this is the spectrum from the fast flux trap. You don't usually see that fast to thermal ratio in a thermal reactor. But it's the closest spectrum that I was easily able to get my hands on. And it's not that unrepresentative. And the reaction rates for things aren't going to be that different. Because most of the reaction rates, the cross sections down here are in the, like, thousands of barns level. And the cross sections here are in the one-barn level. It's pretty much any fast cross section for anything it's about a barn. That's a good rule of thumb. So it's not going to change total reaction rates that much. 
And that's why I'm not worried about taking the fast flux spectrum from the MIT reactor and pretending like it's the AP1000's. It's not horribly that far off. And it's at the right order of magnitude, which is important. So that data, I give you. Let's talk about how to get this data-- the macroscopic cross sections. So if you remember, a macroscopic cross section is a microscopic cross section times a number density. And that's for one single isotope. If you have a mix of isotopes, then your total averaged cross section for all different types of atoms is going to be a sum-- I'll make it very different-- over all your possible isotopes of the atom fraction of that isotope times the number density of that isotope times the microscopic cross section of that reaction for that isotope. Or, I'm sorry, the atom fraction is included in the number density. Let's just simplify this a little bit. The total number density of that isotope-- that'll be in atoms per cubic centimeter-- times the cross section for that particular isotope, which is in centimeters squared, which leads you to a 1 over centimeter macroscopic cross section. And so let's say we were summing up something like stainless steel, which happened to be iron 18 chrome 10 nickel. Not only would you have to then get the number densities of iron, chrome, and nickel, but you have to look at which isotopes there are. So if we wanted to get the macroscopic cross section for stainless steel, we'd have to split this into the stable isotopes of iron, the stable isotopes of chrome, and the stable isotopes of nickel and then say it's this number density times a cross section plus this number density times a cross section, and so on and so on and so on. The easy way to get those number densities, if you take the number density of your stainless steel times the atom fraction of that isotope, that should give you the number density of that isotope. Does this make sense to everybody? 
Does anyone not know how to get a number density of a material from its basic chemical properties? OK. So a number density is in atoms per cubic centimeter. And usually we would have something like its density, which would be grams per cubic centimeter. So if we take density in grams per centimeter cubed and multiply by Avogadro's number and divide by-- I'll just put a divide-by symbol here-- the molar mass-- Avogadro's number is given in atoms per mole, and molar mass units are given in-- what is it-- grams per mole, so dividing by it is going to be like moles per gram. The grams cancel, the moles cancel, and you get atoms per cubic centimeter. So to get a number density, you can take the density of the material-- you know, for stainless steel it's like 8 grams per cubic centimeter-- times Avogadro's number-- let's say 6 times 10 to the 23rd-- divided by the molar mass, or the average molar mass, of stainless steel. I'm guessing that's around 56 grams per mole. And then you'll get a number density. And typical solid number densities tend to range from like 10 to the 28 to 10 to the 29 atoms per meter cubed. So it's going to be around 10 to the 22 to 10 to the 23 atoms per centimeter cubed. So if you end up with something way outside those bounds, you've probably got some sort of unit or power error. So that'll help you check your math to make sure you get the number densities right. If you get the number densities right and you know the atom fractions, then you have the number density of each isotope in the number of atoms of that isotope per cubic centimeter. And then you multiply by your microscopic cross section and you get your macroscopic cross section. Yeah? AUDIENCE: Could you explain again what atom fraction refers to? MICHAEL SHORT: Yeah, atom fraction is a fraction between 0 and 1 of what proportion of the atoms in your material are that isotope.
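The density-to-number-density arithmetic just described is a one-liner to check. A minimal sketch using iron as a stand-in for stainless steel; the 1-barn microscopic cross section is just the fast-cross-section rule of thumb mentioned earlier, not tabulated data.

```python
AVOGADRO = 6.022e23     # atoms/mol
BARN = 1e-24            # cm^2

def number_density(rho_g_cm3, molar_mass_g_mol):
    """Atoms per cm^3 from density and molar mass: N = rho * N_A / M."""
    return rho_g_cm3 * AVOGADRO / molar_mass_g_mol

# Iron as a stand-in for stainless steel
N_fe = number_density(7.87, 55.85)    # ~8.5e22 atoms/cm^3, inside the sanity bounds
sigma_micro = 1.0 * BARN              # ~1 b rule of thumb for a fast cross section
sigma_macro = N_fe * sigma_micro      # macroscopic cross section, 1/cm
print(f"N = {N_fe:.2e} atoms/cm^3, Sigma = {sigma_macro:.3f} 1/cm")
```

Anything far outside 1e22 to 1e23 atoms per cubic centimeter for a solid is the unit-error red flag described above.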
So again, if your atom fractions are outside the bounds of 0 to 1, that's not physically significant. And if your number densities are really far from those bounds, then they're probably not right-- unless you're talking about a gas or a neutron star. But solid matter tends to have approximately those number densities. Mm-hm? AUDIENCE: Since we're putting this reactor in a blender [INAUDIBLE]. MICHAEL SHORT: Mm-hm. AUDIENCE: So when we [INAUDIBLE] the atom fractions to do these calculations, how would we determine the fractions? MICHAEL SHORT: Good question. You'll determine those fractions from the AP1000 spec sheet. So in the AP1000 spec sheet, it tells you things like the total weight of the fuel as uranium dioxide. And so you can go from total weight of the fuel to, let's say, molar or atom fraction of the fuel. It's nice-- they give you the weight of the fuel. They give you the weight of the clad. And let's see, which materials did we say you have to think about? It said you have to talk about four materials: the coolant or the moderator-- water; the fuel-- UO2; the cladding-- where you can assume pure zirconium; forget all the crazy zircaloys, because that's just busy work; and structural materials-- assume pure iron. And so on this spec sheet, luckily, they just tell you the mass of the fuel. They tell you the mass of the clad. I do not believe they give you the mass of the water, but they do give you the volume of the core. And so you can figure out, if you've got a core and you subtract off the volume of the fuel and the cladding and the structural materials, all you're left with is the volume of the water in the core. And that'll give you your total weight of the water. And once you have all the weights, then you can go to atom fractions. And then you've got all the information you need to get the macroscopic cross sections. Is that unclear to anybody? Cool. So we talked about how to get the N's.
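The weights-to-atom-fractions step just described can be sketched like this. The masses below are invented placeholders, not the AP1000 spec sheet values, and the molar masses are rounded; the idea is just moles of molecules first, then atoms of each element.

```python
AVOGADRO = 6.022e23  # atoms/mol

# Hypothetical component inventory (kg) -- placeholders, NOT the AP1000 numbers
masses_kg = {"UO2": 90000.0, "Zr": 20000.0, "H2O": 15000.0}
molar_mass = {"UO2": 270.0, "Zr": 91.2, "H2O": 18.0}   # g/mol, approximate
atoms_per_molecule = {
    "UO2": {"U": 1, "O": 2},
    "Zr":  {"Zr": 1},
    "H2O": {"H": 2, "O": 1},
}

# Total atoms of each element across all components
atoms = {}
for comp, m_kg in masses_kg.items():
    moles = m_kg * 1000.0 / molar_mass[comp]           # mol of molecules
    for elem, count in atoms_per_molecule[comp].items():
        atoms[elem] = atoms.get(elem, 0.0) + moles * count * AVOGADRO

total = sum(atoms.values())
fractions = {e: n / total for e, n in atoms.items()}   # atom fractions, sum to 1
for e, x in sorted(fractions.items()):
    print(f"{e}: {x:.3f}")
```

Dividing each element's atom count by the homogenized core volume instead of the total count gives the per-element number densities directly.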
Let's show you how to get the sigmas. So there was a comment that came in that said please teach us how to use these databases. We just kind of throw them around. Well, I want to teach you how to use this database. Let's say we're going to get the cross sections for oxygen in uranium dioxide. That's nice and easy because there's only one stable isotope of oxygen you have to consider. It's oxygen 16. I highly recommend using the Java version of JANIS because it is a lot easier to use, less clunky, and more intuitive. So if you don't have Java on your machine-- first of all, it runs on everything, like phones, tablets, Linux, Mac, Windows whatever. And second of all, it'll just make your life easier. So a little time investment now will make the p-set take less. So once you have Java, it should just open cleanly. And it may show up with nothing. It may show up with something, depending on what you last looked at. In this case, it's looking at whatever last library I looked at. So let's pretend we're starting over. So sometimes you may just see this database as NEA. That's all the databases that come with JANIS. If we expand this, make sure that you go to incident neutron data. Because these are cross sections for neutron reactions that we want to go for. And there are a lot of databases in here. And in the problem set, specifically say to use the most recent ENDF or evaluated nuclear data file, just to make sure that we're all using the same data set. You'll notice that there are discrepancies between the data. So different groups have measured things with different uncertainties and different values. And so these cross sections aren't necessarily fundamental constants of nature. They're measurements of those constants with whatever uncertainty and error, which are two separate things, that could be in there. So let's open up the most recent ENDF library and click on cross sections. 
And now whatever you see here, the green squares are the elements that have tabulated cross sections. Just to make sure that we only have to consider oxygen 16, let's go to the table of nuclides. You'll never stop using this table. It's like the most useful thing in this class. And check to make sure that there are no other stable isotopes of oxygen that we have to worry about. As long as the internet is working. Huh. Did the immigrate to Korea website crash too? Interesting. Well, we'll let that load for a bit. And let's start looking at oxygen 16. So if you double-click on the element of interest, it'll usually take a little while, because it's Java and it's loading cross sections. And then you can pick the nuclear reaction of choice from here. And unfortunately, I can't easily make this bigger. I will take a very quick detour and see if I can make these things a bit larger so you can see them. But if not, then whatever. No. No. Ah, oh, well. So let's say you wanted to get the elastic scattering cross section for oxygen. The first letter before the comma is going to be the incident particle. Notice that sometimes it says N and sometimes it says Z. N specifically means neutrons. Z means whatever incident particle you chose. So we know that here Z means neutrons coming in, and elastic-- that's elastic scattering. So we can then click on cross section. If you want to see what it looks like, you can check P for plot. And this will give you a logarithmic plot of the scattering cross section as a function of energy. You can see that it's pretty boring for oxygen. There aren't a lot of nucleons. There aren't a lot of different energy levels. There are not that many resonances or things going on. So I wouldn't be that upset if you just approximated the fast scattering cross section for oxygen as that value, whatever it is. But let's do this completely. Let's tabulate this. So if you click on T, you actually get a table of data. And you can choose how much data you export.
By default, you get, like, thousands upon thousands of entries, which is just going to make your life horrible. So what you can do is either pick the original values starting at 1 eV. And we know we only have to go up to 10 MeV. So you can pick your bounds. And all of a sudden, there are only a few thousand entries' worth of data. Or you can interpolate them. You can interpolate values either linearly or logarithmically-- I recommend logarithmic because this is such a large energy range-- and get maybe 5 values per decade. So between 1 and 10 MeV, you only need five numbers. That's still pretty intense. Oh, I didn't uncheck the original values. Ah, isn't that better? So now there are only 20 or 30 entries. It glosses over the resonances. It sure does. But are they that important? That's up to you guys to decide. You can try doing one of these calculations with the original values and with the interpolated values and see just how different they really are in two-group theory. And the hint is, not very much. So then you can actually export that data. So either you can just copy-paste it. So I just highlighted, copied, started Excel, and in it comes. And right there is the data that you can start to use to do your numerical integration. And I don't want to give away how to do the numerical integration, though I kind of did, symbolically. I'd like you guys to figure out how to do this numerical integration mathematically, given that you can get the data now. So is there any step here that's unclear to anyone? Yeah? AUDIENCE: So are we going to have to go through, like, every single material, like, all their different cross sections to, basically, sum up and average them? MICHAEL SHORT: Yes, you are. Sounds horrible, doesn't it? AUDIENCE: Yeah. MICHAEL SHORT: That's why I made some simplifications. So if you notice, we are simplifying the cladding as Zr. And we're simplifying stainless steel as pure iron. I don't want you guys to just do tons and tons of repetitive stuff.
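As a quick aside, the logarithmic interpolation step just described (a few values per decade instead of thousands of raw entries) can be sketched in a few lines of Python. The tabulated (energy, cross section) pairs below are invented placeholder numbers for illustration, not real ENDF data for oxygen 16.

```python
import numpy as np

# Hypothetical tabulated (energy [eV], cross section [barns]) pairs --
# placeholder numbers, NOT real ENDF values.
E_tab = np.array([1e0, 1e2, 1e4, 1e6, 1e7])
sigma_tab = np.array([4.0, 3.9, 3.8, 3.0, 1.5])

# Coarse logarithmic energy grid: 5 points per decade from 1 eV to 10 MeV.
decades = 7
E_coarse = np.logspace(0, 7, 5 * decades + 1)

# Interpolate in log-log space, then transform back to barns.
sigma_coarse = np.exp(np.interp(np.log(E_coarse), np.log(E_tab), np.log(sigma_tab)))

print(len(E_coarse), "coarse points instead of thousands")
```

Like the export dialog, this glosses right over the resonances; whether that matters for a two-group calculation is exactly the comparison suggested above.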
But I do want you to get roughly the right answer. AUDIENCE: For uranium, do you want us to [INAUDIBLE] U-238 versus 235? MICHAEL SHORT: I want you to answer that question. So how would you consider the isotopes of uranium in this question? AUDIENCE: Shouldn't it be enriched uranium? MICHAEL SHORT: That's right. It's not the natural values, it's the enriched values. So it will say in the spec sheet-- AUDIENCE: I don't want to look up enriched uranium on Google. MICHAEL SHORT: You don't want to be on a list? You don't have to look up enriched uranium on Google. You can look it up on JANIS. So if you go to JANIS, right, are you looking for what the enrichment level is? AUDIENCE: Yeah. MICHAEL SHORT: Oh, that's on the AP1000 spec sheet. Where'd it go? If we search for enrichment, fuel enrichment, first cycle weight percent. You can either average these or just pretend it's five. Because that's a pretty typical enrichment level, 5% atomic fraction U-235. Or if you want to get really technical, you can average these noting how many fuel assemblies are in each of the regions. But I don't really care if you do that. AUDIENCE: What would be the other [INAUDIBLE]?? MICHAEL SHORT: U-238. Yeah, so there are only two isotopes of uranium, one of oxygen, however many of iron and zirconium-- hint, there's not that many. And then there is H2O. And there's only one hydrogen and one oxygen. So it's not really that much busy work. It's just enough for you to get to the right answer. So I wouldn't even call it busy work because there's a point to it. So in that way, you can determine all of these cross sections. We give you these fluxes. These nu values you can also get from JANIS. They're not labeled as nu, but they are labeled as neutron multiplication factor. And they're usually way down here. And we probably need to find a fissile isotope for that. So let's ditch oxygen for now, and go up to uranium. OK. So let's do U-235. It'll pull up the data.
And then near the bottom, nu-bar total, neutron production. Check it out. For pretty much all energies, until you get to the fast region, it's the value we talked about-- 2.44. And then in the fast region, you suddenly can get more neutrons from fission. Why do you guys think that is? Well, what sort of additional things happen when your incident particle energy increases? Let's think back to binding energy and all the stuff at the beginning of the course. There are more different kinds of fission products that can be made. Because you're increasing the Q value of this reaction by increasing the initial kinetic energy. So there are more fission products that can be made. And some of them, let's say, give off more neutrons than others. So you'll be able to get your nu-bar total from this one near the bottom. And everything else will be near the top. They'll either be a fission cross section, an absorption cross section, a scattering cross section. How do you get your diffusion coefficients? They're not tabulated, but they're close. AUDIENCE: Yeah, isn't it based off the numbers based off the different cross sections we have [INAUDIBLE]?? MICHAEL SHORT: Yeah. Exactly. So let's render this nicely. The diffusion constant is 1 over 3 times the quantity total cross section minus mu-0 times the scattering cross section-- average, average, average. And this average cosine mu-0 is about 2 over 3 times the atomic mass, or 2/(3A). So you can get these diffusion coefficients from tabulated data. The total cross section is found at the top, better known as-- well, it just says total. So you can get total cross sections here. The only one that might be a little tricky for you to find from notation is absorption. You're not going to see something labeled absorption in this database. However, you will see this gamma reaction. Or how else do you get it? If you take the total minus fission minus scattering, you're left with absorption-- if you don't count N2N reactions or really esoteric high-energy things.
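As a quick aside, the diffusion-coefficient relation just quoted, D = 1/(3(Sigma_total - mu0_bar * Sigma_scatter)) with mu0_bar of about 2/(3A), can be sketched in Python. The macroscopic cross sections fed in below are made-up placeholder numbers, not values pulled from JANIS.

```python
def diffusion_coefficient(Sigma_total, Sigma_scatter, A):
    """D = 1 / (3 * (Sigma_total - mu0_bar * Sigma_scatter)) [cm],
    where the average scattering cosine mu0_bar is about 2 / (3A)."""
    mu0_bar = 2.0 / (3.0 * A)
    return 1.0 / (3.0 * (Sigma_total - mu0_bar * Sigma_scatter))

# Placeholder macroscopic cross sections [1/cm] for a hydrogen-like scatterer:
D = diffusion_coefficient(Sigma_total=3.45, Sigma_scatter=3.45, A=1)
print(D)
```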
So if you can't find it, that's OK. Because you can calculate it. Because we know sigma absorption is sigma total minus sigma scattering minus sigma fission minus others that we don't care about. And since you're getting these anyway, tabulating sigma absorptions should be trivial. It's just an Excel subtraction. Yeah? AUDIENCE: I guess U-235 doesn't have it. But last night I was looking at Pu-239. And it has N in an inelastic. MICHAEL SHORT: Yeah. AUDIENCE: Would that be considered absorption? MICHAEL SHORT: Inelastic scattering is not considered absorption. AUDIENCE: OK. MICHAEL SHORT: So inelastic scattering means one neutron goes in, one neutron comes out, but at a very different energy level. So I guess we would also say minus sigma inelastic. And inelastic does happen for just about every other element. But the nice thing is, those don't turn on until about 1 MeV. So it's not going to matter much. But you can quantify how much it matters, and just check. So if you can make a justified assumption to say here's one of the calcs with and without inelastic scattering, and if they don't differ by much, then forget it, as long as you show me your math. So let's say we've got nus. We have cross sections. I give you fluxes. We have Ds. You can calculate buckling from the geometry of the reactor, which is given in the AP1000 spec sheet. The only other trick will be what's sigma scattering from fast to thermal? So you'll have to figure out what is the cross section, not just of scattering total, but the cross section or the probability that a fast neutron enters the thermal group. And I don't want to give that away either, but it's not terribly mathematical. Yeah? AUDIENCE: So if you only have one isotope for whatever you're analyzing, you don't have to go through the whole overall cross section. Can't you just do the number density times the number of cross sections? Yeah, you don't have to. MICHAEL SHORT: Hm, not quite. 
So even if you only have one isotope for each material, if you have more than one material, you've got to average those cross sections. Because this criticality criterion is for the entire reactor and all the stuff in it. That's why it's like in a blender. So even if iron has one isotope and zirconium has one isotope and uranium has one isotope, which wouldn't really be a reactor, then you'd still have to take atom fractions of those to get the total criticality condition. Speaking of which, I forgot to write where the k is. And since everything else here will be tabulated, you can solve for k-effective. So k-effective is the only unknown variable in this whole equation. Yeah? AUDIENCE: But I mean, like, for one of the other questions, it's, like, oh, if you had a perfectly spherical ball of plutonium-239, I think it is, in that situation, could you just use the cross sections from JANIS where you don't have to account for any kind of other isotope [INAUDIBLE]? MICHAEL SHORT: That's right. So, yeah, let's go to the rest of the problem set, since you mentioned it. For one of the other questions, North Korean nuclear weapons. Why is it that just putting together a supercritical mass of plutonium does not constitute an effective bomb? We're lucky for that, too, that making nuclear weapons is a lot harder than just getting nuclear material. And this is a lot of the reason why theirs have been 500 ton yields or kiloton yields-- duds. It's really, really, really, really hard. And I want you to think about what would you need to do to turn a supercritical mass of nuclear material into an effective weapon. And why is it that it's so difficult to do? I'm actually really glad it's so difficult to do. It's one of the reasons that a lot of folks don't have them. But, yeah, if you only have one isotope like you do in this problem, you don't have to do atom fractions. You just take number density times microscopic cross section from JANIS and you get the macro cross sections.
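As a quick aside, the "number density times microscopic cross section" step, and the blender-style averaging for a mixture, can be sketched like this. The densities, molar masses, cross sections, and mixing fractions below are rough placeholder numbers for illustration only; the real microscopic values come from JANIS.

```python
AVOGADRO = 6.022e23  # atoms per mole

def macro_xs(density_g_cc, molar_mass, sigma_barns):
    """Sigma = N * sigma: number density [atoms/cm^3] times a microscopic
    cross section (1 barn = 1e-24 cm^2), giving units of 1/cm."""
    N = density_g_cc * AVOGADRO / molar_mass
    return N * sigma_barns * 1e-24

# Single-isotope case, like the spherical-ball problem (placeholder numbers):
Sigma_f = macro_xs(density_g_cc=19.6, molar_mass=239.0, sigma_barns=1.8)

# Homogenized mixture: weight each material's macroscopic cross section by
# the fraction of the "blender" it occupies (fractions are placeholders too).
Sigma_mix = 0.3 * macro_xs(10.97, 270.0, 8.0) + 0.7 * macro_xs(1.0, 18.0, 0.66)
print(Sigma_f, Sigma_mix)
```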
That's why it's one of the skill building problems where it shouldn't be that hard. So you'll have a criticality condition. You'll be able to get cross sections and number densities. And you'll be able to tell what that radius of the sphere is given the form of buckling for a sphere, which should be in your reading. Yeah? AUDIENCE: Where do we get our data for power manipulation? MICHAEL SHORT: That's being sent to you today. AUDIENCE: OK. MICHAEL SHORT: So you guys all did power manipulations from the reactor. I wanted to get them for Monday. The guys were busy doing an in-core experiment install. They've promised me the data today. So you guys will be able to take a look at it, and using the transient stuff that we talked about Tuesday, explain why it doesn't look linear or intuitive. Yeah. So I'll be giving each of you that data today. We talked about nuclear weapons. This top one I want you to do on your own, because it's more repetitions of the intuitive criticality examples that we did on Tuesday. The last one we haven't talked about is the ultra-cold nuclear reactor, or the "we'll see" problem. So here I want you to actually get a criticality condition for the case where you can have ultra-cold neutrons. Where your moderator is, let's say, liquid hydrogen, really, really cold, way below the thermal level, and you have to split your reactor into three energy groups-- fast, thermal, and ultra-cold. And the trick to this here is somebody asked, can you ever have up-scattering? Why yes. If you have an ultra-cold moderator but you've got hot fuel, you can actually scatter up in energy, where the surrounding atoms could be hotter than some of the neutrons hitting them and will impart energy to those neutrons and send them up the energy spectrum. Yeah? AUDIENCE: Yeah, about that, it says all fission neutrons are born fast and all delayed neutrons are born thermal. Can there be no up-scattering from ultra-cold?
MICHAEL SHORT: Well how do you get ultra cold neutrons? AUDIENCE: They would scatter down. MICHAEL SHORT: Mm-hm. But some of them might scatter up. AUDIENCE: OK, got it. MICHAEL SHORT: Yeah, they can scatter down by hitting something cold and scatter back up by getting knocked by a hot atom. AUDIENCE: OK. MICHAEL SHORT: Yeah. So it's all going to be in the formulation of the equations. For this problem, like 80% or 85% of the credit is, did you formulate the three-group equations correctly. So I want you to think about what are the actual sources and sinks in each case, what are the fractions of prompt and delayed neutrons, where did they go, and what terms matter where. So it's doing this, but for the case that we've given you here. Solving it is a lot of algebra, and therefore not a lot of credit. So is that clear to everybody? I figured this p-set was worth explaining and not just saying, have fun. You know, I'll see you on Piazza Sunday night. AUDIENCE: I'm there. MICHAEL SHORT: Yeah, that's why I would check it. Yeah. Cool. I've also got problem set four for everybody here. I want to mention a couple of quick things. Please, if you hand write your p-sets, which is fine, please make sure to scan them legibly and to write legibly. We can't give partial credit for things we can't read. And so this will be a lesson to some of you guys depending on who got what grade. There are times when you may have written stuff for partial credit, but we just honestly couldn't make it out. So please do make sure that your submissions are legible. And for things like these, these are handwritten problem sets, but they were scanned with either a scanner which does the correct contrast or writing them on one note, which apparently works pretty well, or apps like CamScanner. There's an app on your phone you can get that scans pieces of paper and automatically contrast enhances them. It knows the paper should be white and the writing should be black. And it also does it in color. 
So if you do color graphs, it recognizes that there are multiple colors and will take care of that for you. It only takes an additional minute, but it can give you double the points on a problem set because we can read stuff. So I'll give back p-set four now. We're working on five and six. The big delay in grading there was I went to Russia. And Russia doesn't have as much internet as we do, at least not where I was. And it was busy. So working on those solutions now. Yeah? AUDIENCE: For problem one, it says throwing quarters into the reactor actually happened. MICHAEL SHORT: Yep. AUDIENCE: When did that happen? MICHAEL SHORT: Oh, OK, yeah, good question. So problem 1, throwing quarters directly into the core like a wishing well. It used to be that back in the day, they would take folks on reactor tours to look down into the core. Because when the lid is off, you can see it. You can see the Cherenkov radiation, the nice blue light. You can actually see the fuel elements because the water is sufficient shielding for you. And the distance is sufficient shielding to keep you away from the gammas, not to mention the reactor is usually off when the lid is off. So it's not that hot. The problem is you can't watch what everyone's doing all the time. And somebody dropped a quarter into the reactor. And they were like, oh, it's a wishing well. Well, it took something like six dive robots to go into the core and fish it out. Because each of those robots lasts 10 minutes before the intense radiation fries it. And if you didn't find the quarter, you got to take him out, put down another one. And these are, like, radiation-hard, you know, narrow, whatever, diving robots. This story was relayed to me through someone that relayed it to them through whatever. So it's been through the telephone chain. But I do know that's one of the big reasons you can't look down in the core anymore. It's because folks abuse the privileges of tourists. Yeah. 
AUDIENCE: JANIS 4 just doesn't work for me. Like, I have Java installed. It's, like, up to date and stuff. But it just doesn't work. MICHAEL SHORT: Interesting. AUDIENCE: What do you think I should do? MICHAEL SHORT: Then you could use the web version. AUDIENCE: OK. MICHAEL SHORT: Which is just for the browser and no plugins required. Yeah. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Cool. Any other questions about the p-set that we didn't cover together? Those looking a little more doable? AUDIENCE: [EXHALES]
MIT 22.01 Introduction to Nuclear Engineering and Ionizing Radiation, Fall 2016. Lecture 21: Neutron Transport. The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: Hey guys, hope you enjoyed the brief break from the heavy technical stuff. Because we're going to get right back into it and develop the neutron transport equation today, the one that you see on everybody's t-shirts here in the department. So I think multiple years folks have used this equation on the back of t-shirts just to be like, we're awesome. And we do difficult math. Well, this is what you're going to start to do. In fact, it's big enough and hard enough that we're going to spend all day today developing it, like actually writing out the terms of the equation and understanding what it actually means. Then, on Thursday and Friday, we're going to reduce it down to a much simpler equation, something that you can actually solve and do some simple reactor calculations with. We started off the whole idea of the neutron transport equation as a way to track some population of neutrons. Let's see, I'm going to have our variable list up here. What I'll probably do is on Thursday and Friday I'll just have it back up on the screens so that we don't have to write it twice. But there's going to be a lot of variables in this equation. I'm going to do my best, again, to make the difference between V and nu very obvious, and anything else like that. But the goal is to track some population of neutrons, at some position, at some energy, traveling in some direction omega, as a function of time. And the 3D representation of what we're looking at here is, let's say we had some small volume element, which we'll call our dV.
That's got normal vectors sticking out of it. We'll call those n-hats in all directions. And inside there, let's say if this is our energy scale, we're tracking the population of neutrons that occupies some small energy group dE, and is also traveling in some small direction that we designate as d-omega. So that's the goal of this whole equation: to track the number of neutrons at any given position. So let's call this distance, or the vector r. In this little volume, traveling in some direction omega, with some infinitesimally small energy group dE. That's going to be the goal of the whole thing. And what we'll do is write this to say the change in the population of neutrons at a given distance, energy, angle, and time, over time is just going to be a sum of gain and loss terms. And what I think we'll take all day today to do is to figure out what are the actual physical things that neutrons can do in and out of this volume, and how do we turn those into math, something that we can abstract and solve? There's a couple other terms that we're going to put up here. We'll say that the flux of neutrons, which is usually the variable that we actually track, is just the velocity times the neutron population. And also let's define some angularly independent terms. Because in the end we've been talking about what's the probability of some neutron or particle interacting with some electron going out at some angle. But as we're interested in how many neutrons are there in the reactor, we usually don't care in which direction they're traveling. So the first simplification that we will do is get rid of any sort of angular dependence, getting rid of two of the seven variables that we're dealing with here. So all these variables right here will be dependent on angle. And all these variables right here will be angularly independent. So there'll be some corresponding capital N, or number of neutrons as a function of r, E, and t. We'll call this Flux. We'll call this Number.
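The flux definition just written down, phi = v times n, can be made concrete with a quick check that a 0.025 eV thermal neutron moves at about 2200 m/s. The physical constants below are standard; the neutron density n is a made-up example value.

```python
import math

EV_TO_J = 1.602e-19      # joules per electron-volt
M_NEUTRON = 1.675e-27    # neutron mass [kg]

def neutron_speed(E_eV):
    """Classical speed from kinetic energy: v = sqrt(2E/m), in m/s."""
    return math.sqrt(2.0 * E_eV * EV_TO_J / M_NEUTRON)

v = neutron_speed(0.025)   # thermal energy -> roughly 2200 m/s
n = 1e8                    # example neutron density [neutrons/cm^3]
phi = (v * 100.0) * n      # flux = v * n, with v converted to cm/s -> n/(cm^2*s)
print(v, phi)
```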
There's going to be a number of cross sections that we need to worry about. So we'll refer to little sigma as a function of energy, as our micro-cross-section, and big sigma of E as a macroscopic cross-section. Then you want to remember the relation between these two. AUDIENCE: Solid angle? MICHAEL SHORT: The solid angle, not quite. That's, let's see. There's a difference between-- and so what these physically mean is little sigma means the probability of interaction with one particle. And this is just the total probability of interaction with all the particles that may be there. So yeah, Chris? AUDIENCE: Number density? MICHAEL SHORT: There's number density. Already we have another variable conflict. How do we want to resolve this? Let's see. We'll have to change the symbol somehow. Let's make it cursive. Don't know what else to do. I don't want to give it a letter other than n since we're talking about neutrons, so that cursive n right here is going to be number density. And in the end, we're worried about some sort of reaction rate, which is always going to equal some flux, or let's just stick with some angularly dependent flux, at r, E, omega, t, times some cross-section as a function of energy. And it's these reaction rates that are the rates of gains and losses of neutrons out of this volume, out of this little angle, out of this energy group, and out of that space, or into that volume, energy group, and space. So let's see, other terms that we'll want to define include nu, like last time. We'll call this neutron multiplication. In other words, this is the number of neutrons made on average during each fission event. And we give it energy dependence because, as we saw in the JANIS libraries on Friday, I think it was. What's today, Tuesday? I don't even know anymore. I think as we saw on Friday, that depends on energy for the higher energy levels.
And there's also going to be some chi spectrum, or some neutron birth spectrum, which tells you the average energy at which neutrons are born from fission. So regardless of what energy goes in to cause the fission, there's some probability distribution of a neutron being born at a certain energy. And it looks something like this, where that's about 1 MeV. That's about 10 MeV. And that average right there is around 2 MeV. And so it's important to note that neutrons are born at different energies. Because we want to track every single possible dE throughout this control volume, which we'll also call a reactor. Let's see, what other terms will we need to know? The different types of cross sections, or the different interactions that neutrons can have with matter. What are some of the ones that we had talked about? What can neutrons do when they run into stuff? AUDIENCE: Scatter. MICHAEL SHORT: They can scatter. So there's going to be some scattering cross-section. And when they scatter, the important part here is they're going to change in energy. What else can they do? Yeah? AUDIENCE: Absorbed. MICHAEL SHORT: They can be absorbed. So we'll have some sigma absorption. What are some of the various things that can happen when a neutron is absorbed? AUDIENCE: Fission. MICHAEL SHORT: Yeah, so one of them is fission. What are some of the other ones? AUDIENCE: Capture. MICHAEL SHORT: Yep, capture. What were some of the ones that we talked about during the Chadwick paper? AUDIENCE: Neutron [INAUDIBLE]. MICHAEL SHORT: Yep, so there can be some-- we'll call it n,in, which means one neutron goes in, i neutrons come out, so 1 to i neutrons, sure. Anything else? Encompassed in absorption? Well when we refer to scatter here, what type of scattering are we talking about? AUDIENCE: Compton's scatter? MICHAEL SHORT: Compton's for photons. It's OK. Was it elastic or inelastic scattering? AUDIENCE: Elastic. MICHAEL SHORT: Elastic scattering.
So another thing you could call an absorption event, depending on what bin you put things in, is inelastic scattering, which is that kind of-- we call it scattering because one neutron goes in, one neutron comes out. But in reality, you have a compound nucleus forming and a neutron emitted from a different energy level. So it doesn't follow the simple ballistic and kinematic laws of elastic scattering. What else can neutrons do? Now we're getting into the real esoteric stuff. But I want to see if you guys have any idea. Did you know that neutrons can decay? A lone neutron is actually not a stable particle. If you look up on the KAERI table of nuclides, it's got a half life of 12 minutes. So if you happen to be able to have neutrons in a bottle or something, which we actually can do. There's centers for ultra-cold neutrons and atoms. There's one at North Carolina State where they actually cool down neutrons to cryogenic temperatures to the point where they can actually confine them. They only live on average 12 minutes. And then there are also what we call neutron-neutron interactions. There is a finite, non-zero but very small probability that neutrons can hit other neutrons. But the mean-free path for these is on the order of 10 to the 8th centimeters. So this is not something we have to consider. But it's interesting to know that yes, neutrons can run into other neutrons. And these sorts of things have been measured. We won't have to worry about this. We won't have to worry about neutron decay. But it's interesting to note that a lone neutron is not a stable particle. It will spontaneously undergo beta decay, into a proton and an electron. Pretty neat, huh?
Because any sort of interaction of that neutron is going to cause removal from this group of energy, position, angle, whatever. Whether it's absorption, or fission, or elastic scattering, or inelastic scattering, any sort of event-- except for forward scattering, which means nothing happens-- is going to result in this neutron either leaving the volume. So it might scatter out of our little volume. Or it might change direction, scatter out of our d-omega. Or it will lose some energy, or gain some energy, in some cases, leaving our little dE, which is what we're trying to track. Because we're actually tracking what's the population of neutrons in this little dE, in this direction, in this position, at this time. And supposedly if we know this term fully, we can solve for all the neutrons everywhere, anywhere in the reactor with full information. So what we'll spend the rest of today doing is figuring out what are all the possible gain and loss terms. So let's start just putting them out physically, or in words. And then we'll put them to math. So what are some of the ways in which neutrons can enter our little group of volume, angle, and energy? How are neutrons created? Yeah, Luke? AUDIENCE: From fission, or a neutron emission. MICHAEL SHORT: From fission, yeah, so that's one big source. So we'll call this the gains. This is the losses. And you said a neutron source. Can you be more specific? AUDIENCE: A neutron emission, like [INAUDIBLE]. MICHAEL SHORT: OK, so we'll say n,in reactions, right? OK, cool. How else can we gain neutrons? AUDIENCE: Fusion? MICHAEL SHORT: Fusion? OK. That is true. Although fusion reactors don't really operate on the principle of neutron criticality, or neutron balance. So this discussion for now is going to be limited to fission reactors. But yeah, good point. Fusion does make neutrons. What else? Yeah? AUDIENCE: They could enter from one of the adjacent volumes? MICHAEL SHORT: Yeah, they could come from somewhere else, right?
Let's just call that an external source. In the books and in your reading, you'll just see them treat this external source as some variable s of r, E, omega, t. So you'll just see this treated as s, a source, with no further explanation. It's like, oh, math says that there could be external sources. But I want to tell you where they really come from. Most reactors nowadays don't just start up when you throw a bunch of uranium into a pool and pull out the control rods. You actually have to stick in-- if this is your little reactor right here-- you actually have to stick in a little piece of californium-- I think the isotope is 252-- as what we call a kickstarter source. So californium is made mostly in the HFIR, or the High Flux Isotope Reactor, at the Oak Ridge National Lab in Tennessee, where they have a really, really high power reactor. It's 85 megawatts. It's about that big around and this tall. It's really, really small. For reference, that's about the size of the MIT reactor, except our reactor's 6 megawatts. Theirs is 85 megawatts. And it's designed to have an incredibly high flux, to go by successive neutron capture reactions to build up californium 252, which is spontaneously giving off neutrons like crazy. And this right here, that's your external source. And this helps get reactors going. Because you can either very slowly wait for the fission reaction to build up in a controlled manner. Or you can give it a kick in the pants and get it going. This HFIR reactor is pretty cool. Like I said, it's 85 megawatts. And it's about as dense as it can get. The fuel is actually made by explosively bonding sheets of uranium in a certain sort of semi-cylindrical configuration. And it produces so much decay heat in so little space that if it were to lose cooling, the reactor would melt in 8 seconds. You usually have days or so before that happens in a conventional reactor because the power density just isn't that high.
So you can actually see down to the tank that contains HFIR if you go for a tour at Oak Ridge National Lab. And it's way down below this gigantic, not quite Olympic but getting there, sized pool of water, just to make sure that there is adequate cooling for this thing. It's intense. But that's just a notice that these external sources, these are real things that we use in power reactors to get them going. What are some other ways that one could make neutrons, or that neutrons could enter into our energy group? And the silence is expected because this is usually the hardest part of developing this equation. And I want to introduce it. Yeah, Luke? AUDIENCE: [INAUDIBLE] scattering, too. MICHAEL SHORT: That's exactly it. They can scatter in. So when we develop this neutron transport equation, we're not just tracking the neutrons in this little energy group dE, direction d-omega, and volume dV. You actually have to know what's the population of neutrons in every single group. Because you might have a neutron at a higher energy level that undergoes scattering from some different energy, E-prime, into our energy group.
And there's going to be some sort of probability function where a neutron starts off at a different energy, E-prime, and in a different direction, omega-prime. And it enters into our group energy E and direction omega. Right now we'll leave it as a highly general function. What we're going to find later is there's just some sort of simple line to it. If you guys remember, if some neutron starts off at some energy, let's see, there's some probability of it entering into some other energy group. If you notice, if you remember from last time, the neutron, when it undergoes any sort of scattering reaction, can end up with any energy between its original energy, for the case of theta equals 0, and the value alpha times E, for the case of theta equals pi, where alpha is (A minus 1) over (A plus 1), all squared, and A is the atomic mass. You guys remember this from back in the Q equation days, when we were finding out what's the probability that a neutron coming in with energy E ends up at any energy E-prime? Actually I'll just write this as the scattering kernel. What it ends up looking like, in most cases, is just a flat line. There's an equal probability of the neutron ending up anywhere between energy E and energy alpha-E. It's actually a pretty simple function. It's just a constant value here and 0 everywhere else. What that means is that if, let's say, a neutron hits a uranium atom, there is no way in hell that it can transfer all of its energy to a uranium atom because of conservation of energy and momentum, like we've been harping on for kind of this whole class. What's the only time that this alpha-E could actually extend all the way to 0? What case would that be? AUDIENCE: [INAUDIBLE]. AUDIENCE: You're hitting another neutron. MICHAEL SHORT: You're hitting another neutron, which, as we said, is a very rare event. That is true. Or what else could you be hitting? AUDIENCE: A proton? MICHAEL SHORT: A proton, hydrogen. That's right.
So it can only be, let's say you can only have the probability of the neutron ending up with any energy for the case of hydrogen. Incidentally, this is why we fill light water reactors with hydrogen. The goal is to get the neutrons as slow as possible as quick as possible. Interesting sentence to say there, right? We want the neutrons to be as low energy as possible as rapidly as possible. And the best way to do that is to fill the reactor with hydrogen because then any collision could, in theory, get the neutron down to zero energy. Without water, or something with the same mass as a neutron, like another neutron, there is no way that that neutron can slow down by very much. So even though we're going to keep it as this generalized function, note that in reality it's this pretty simple function. It changes a little bit, as there can be a forward scattering bias for some neutron reactions. But we are not going to deal with that this year. You will deal with that next year in 22.05. So I've been saying a lot, oh, well, we're not going to go into this topic because you're going to see it in 22.02, which is quantum. Now I switched gears to say, we're not going to go into the way this function changes because you'll see it next year in 22.05, which is neutron physics. But for now I want you to be prepared for 22.05. So we'll put on in-scattering as one of our gains. There's a last one I want to make you aware of. We very briefly touched upon it. But I wouldn't be surprised if no one remembers because it was for like 10 seconds. It's what's called photo fission. What this means is you have some reaction that would, in comes a gamma, and out goes fission. This actually does start to happen around 3 or 4 MeV, for isotopes like uranium 235. And in our reactor, whatever shape we decide it is, there are tons of gamma rays flying about in all directions at very high energy. Does anyone remember where they come from? Anyone remember the fission timeline that we drew on Friday? 
So what we said there was right away, let's say fission happens. And almost instantly, you get your fission product one and fission product two. And they move around for a little while. And then some of them will emit some neutrons. And then some of them will start to emit gamma rays, betas, and whatever else they're going to do until they finally lose all their kinetic energy and stop in the surrounding fuel, creating the heat that actually powers the turbine and make steam to make electricity. And so it's from these gammas, as well as any of the gammas from the decay products of the fission products that lead to a huge flux of gamma rays firing out from all sides in the reactor. That's one of the main things that you actually have to shield in a nuclear reactor. Since we talked about all sorts of different shielding, and all sorts of ways that you have to shield things, you know from seeing the MIT reactor-- which you all did-- that there's like six feet of lead and concrete shielding around the reactor. It's not there to shield the alphas and the betas, because those don't really make it out of the water. It's not there to shield the soft X-rays that betas make from bremsstrahlung. It's also not there to shield the neutrons because the neutrons don't really get out. They bounce around, or get absorbed in the water, or the fuel, the reflector. It's there to shield the high-energy gamma rays. Because the only thing that stops high energy gamma rays is lots of mass in between the source and you. So we know there's tons of gammas all about. So let's say there's also going to be some gamma ray flux. There'll be some gamma ray energy. And there'll be some cross-section for photo fission as a function of the incoming gamma ray energy spectrum. Now I'm adding terms to the ones that you'll see in the reading because drawing them out in math is actually fairly instructive. They all follow the same pattern. 
So instead of just showing you one of each and saying memorize, we'll develop a whole lot of these. And you'll find out that they all actually look almost the same. Can anyone else think of any possible gains of neutrons? Where else could they come from? Yeah? AUDIENCE: Neutron birth spectrum, is that? MICHAEL SHORT: So the neutron birth spectrum is included in fission. So our nu is in there. Our chi of E is in there. And that's a nu of E. That's all accounted for in the fission term. And we'll see how we put that together in math. And if no one else has any ideas, that's good. Because neither do I. Now what about the loss terms? There aren't too many of these, as long as you lump them correctly. So what sort of ways could neutrons be lost from our energy group? Yep? AUDIENCE: Scatter out. MICHAEL SHORT: Scatter out, yep. They can undergo any kind of scattering reaction. And they will probably change direction and energy. What else? Well, we've got a list up on the board right there, right? Capture, fission, because in order to undergo a fission you actually have to lose a neutron, and so on, and so on, and so on. What I want to do to simplify things is this. It's a lot simpler just to track the total cross-section, the probability of any interaction at all whatsoever, because any interaction will cause the neutron to either change energy and angle, or disappear, even if it makes some other ones. So we can simplify this to just the total cross-section term. And there's only one other way that neutrons can leave our energy, angle, and volume group. What would that be? So any reaction takes care of energy and angle. What about volume? How do neutrons leave the control volume? It's simpler than it may sound. They just go. They just move. The neutrons are always moving, right? We'll call that leakage.
Because every neutron's got a speed, like we showed up here, where the flux of neutrons, the number of neutrons moving through some surface per second, is just their velocity times the number that are there. For there to be a neutron flux there has to be a velocity, which means the neutrons are moving. So the neutrons, even without undergoing any reaction, could just move out of our control volume. And then they're gone. And that's all there is for gain and loss terms. So let's see if we can do this all on one board. I want to start putting this table right here into math that we'll be able to abstract, simplify, and then solve, but not today, not solve today. So if we want to track the change in the number of neutrons as a function of time, let's start writing down the gain terms. So how do we describe the number of neutrons produced from fission? What sort of terms do we have to include? And Jared started kicking us off, so what would you say? AUDIENCE: Neutron birth? MICHAEL SHORT: Yep, so the neutron birth spectrum, there's going to be some probability that a neutron is born in our energy group E. Because we're tracking how many neutrons are in our little dE energy group. What else matters in terms of fission? AUDIENCE: Number of fissions? MICHAEL SHORT: Yep, number of fissions. So if we want to write number of fissions, we have to write that as a reaction rate. So let's take those two terms right there. So we'll have sigma fission. In this case, we're going to write E-prime times flux of r E-prime, omega-prime, t. Why did I write E and omega prime here? Just from a physical reason. Yeah? AUDIENCE: So you're going to be coming from another energy group. MICHAEL SHORT: Precisely. That's right. So the neutrons are going to be produced from some other energy group. For example, the fission birth spectrum right here starts out-- where did it go? I knew I drew it somewhere-- at one MeV. But most of the neutrons that cause fission to happen are way down below 1 eV. 
So it's different energy neutrons that cause neutrons to be born in our energy group. That's why we're using E-prime and not E. It's some other energy group. And so we also have to account for all possible other energy groups. So if we want to write this, right, we'll say this could be as low as 0 eV, to our maximum energy. And there's going to be some d-omega-prime, dE-prime, dV. We'll also have to account for all possible angles and integrate over our entire volume. It's going to look ugly quick, but it's all going to be understandable. So what this says is that we have to account for the reaction rate of fission from all other energy neutrons inside our volume from other energies and other angles, and account for every other possible energy. Because they can all make fission happen. What else is missing in terms of describing the number of neutrons made from fission? AUDIENCE: Neutron multiplication. MICHAEL SHORT: Yep, there's the number of neutrons made per fission. So we have to put in our neutron multiplication factor. And in this case, normalize-- I think someone had mentioned solid angle-- we normalize over all possible angles with a 1-over-4-pi in there. And this right here is the fission term. So this tells us the number of neutrons gained in terms of a reaction rate, times the number of neutrons for each of those reactions, times the probability that they just happened to be born in the energy group that we're tracking. So is there any term here that's unclear to folks? Yeah? AUDIENCE: So what's the lower bound on the first integral? MICHAEL SHORT: On the first integral? That's 0 electron volts. AUDIENCE: Oh, OK. MICHAEL SHORT: Because supposedly you could have a neutron at 0 eV, which has a very high cross-section. So it should probably induce fission. In reality, there might be some actual minimum temperature. But there is a non-zero probability that you could have a neutron at rest. It's just not very large. AUDIENCE: And the top bound?
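Putting the pieces just listed together, the fission gain term can be typeset as follows (my own rendering of the board work; symbols as defined in the lecture):

```latex
\left(\frac{\partial n}{\partial t}\right)_{\text{fission}}
= \int_V \int_{4\pi} \int_{0}^{E_{\max}}
  \frac{\chi(E)}{4\pi}\,\nu(E')\,\Sigma_f(E')\,
  \phi\!\left(\vec{r}, E', \hat{\Omega}', t\right)
  \, dE'\, d\hat{\Omega}'\, dV
```

That is: a reaction rate $\Sigma_f \phi$ at every other energy and angle, weighted by the number of neutrons made per fission $\nu$, times the probability $\chi(E)/4\pi$ that each new neutron lands in our energy group and direction.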
MICHAEL SHORT: The top bound is E max, whatever your maximum neutron energy is. This is usually around 10 MeV, for most fission reactors. That E max is going to be this point right here, the highest energy at which neutrons can be born by any process. And so this term right here is going to serve as a template for all the other gain and loss terms. So I think this is the hardest one that we had to develop from the beginning. Now let's develop the term for, let's just go with external sources, pretty easy. There's going to be some source making neutrons. It's something that you would just impose. Like say, all right, I have a californium source giving off this many neutrons. Well then you know how many neutrons it's giving off. And that one's done. That's easy. So we've done fission. We've done external. Now that we've done fission let's tackle photo fission. So what would the photo fission cross-section look like? It's going to look awfully similar. So what sort of things do you need to know if it's a fission reaction? Well, what do we have up here? Just start reading things off. I heard a murmur. What was that? AUDIENCE: [INAUDIBLE] flux. MICHAEL SHORT: Yeah, so you're going to have to have some flux. In this case, we want to know what's the flux of gamma rays because photo fission starts off with a gamma, then ends up with a fission. And it's also going to be in our volume. It's going to matter what the energy of those gammas is. They'll all be traveling in some direction at some time. What else do we need? AUDIENCE: [INAUDIBLE] the 4 pi [INAUDIBLE]. MICHAEL SHORT: Yeah, if we're going to be going over all angles, you need the 4 pi. What else? Do we have a reaction rate yet? AUDIENCE: No. MICHAEL SHORT: No, well what's missing? AUDIENCE: The cross-section. MICHAEL SHORT: That's right. We need a cross-section. And in this case, instead of just fission, or neutron fission, we'll put in the gamma fission cross-section.
And so now we have a reaction rate for a single reaction. We've got to integrate over all of our gamma ray energies, over all angles, over our volume. What else is missing besides our d-omega, dE gamma, d-Volume. It should look awfully similar because the terms are basically exactly the same, with just different cross sections and energies in there. So what's missing between the photo fission and the neutron fission one? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Sure, there might be some different birth spectrum for gammas. And there might be some different multiplication factor for gammas between neutron fission and photo fission. But these terms should look exactly the same because in every case you're looking at some reaction rate between either the neutrons and fission or the gamma and fission. And you need to know at what energy they're born, how many are made, all the angles, and integrate overall the variables that we care about. And this is part of why I'm adding these extra terms because they end up looking all exactly the same. It's the integral of a reaction rate times some stuff. That's all that every single one of these terms is going to be. So we've got photo fission. Now let's tackle in-scattering. So how do we represent scattering? In the same way that we represented fission, what do we start with inside the integral? AUDIENCE: Reaction rate? MICHAEL SHORT: Reaction rate, yes. So we're going to have some scattering cross-section. And if it's in the scattering, it means it's coming from a different energy, hence the prime. We'll have our flux. And here's where we're going to bring in our scattering kernel. Because there's some probability that the neutron scatters in from a different group. And then we'll have our d-omega, dE-prime, dV. Is this complete yet? We've now accounted for one other energy, E-prime. Now how do we account for all possible other energies scattering into our energy group? AUDIENCE: Integrals. 
MICHAEL SHORT: Yep, again, same integrals. We integrate over all possible energies, over all possible angles, and over our volume. Hopefully these terms are looking very similar. In every case it's a volume, angle, energy integral of a reaction rate. And all that's saying is there's some rate that these reactions are occurring, which is either a gain rate or a loss rate. We integrate over whatever volume, energy, and angle we're tracking. And that's all there is to it. So we've got in-scattering. Now let's tackle the n,in reactions. There's only going to be one little difference. But I want you guys to tell me what sort of things are going to be the same as all the other terms that we have here. So what do we start off with inside the integral? AUDIENCE: Reaction rate. MICHAEL SHORT: A reaction rate. And how do we write that? AUDIENCE: [INAUDIBLE]. MICHAEL SHORT: Yep, a cross-section, there's going to be some cross-section, for, let's call it n,in reaction as a function of energy, times a flux. There'll be our normal integrated over all angles, all energies, and our volume. So we have a reaction rate. What do we have to then integrate that reaction right over to get all the neutrons in? AUDIENCE: [INAUDIBLE] same things. MICHAEL SHORT: Same things as everything else, exactly. Integrate over all possible energies, integrate over all possible angles, integrate over our volume, looks quite similar. The only thing we haven't dealt with is this i term right here. Because there can be, there are actually n,2n reactions, n,3n, n,4n, and so on. So there are probabilities that, let's say you, in goes 1 neutron, out comes 3 neutrons. But no fission actually happened. You just blast a few of them out. So I think all we'd really have to do is sum over i equals 1. Oh, I'm sorry, i equals 2, because a neutron going in and then the same neutron going out, that's just scattering. And what would be the maximum? Probably 4. 
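The common pattern being emphasized here-- every term is a multiplier times an integral of a reaction rate-- can be sketched generically as (my own notation, summarizing the board):

```latex
\text{(gain or loss)}_x = (\text{multiplier}) \times
  \int_V \int_{4\pi} \int_{E'} \Sigma_x(E')\,
  \phi\!\left(\vec{r}, E', \hat{\Omega}', t\right)
  \, dE'\, d\hat{\Omega}'\, dV
```

For example, in-scattering uses $\Sigma_s$ together with the scattering kernel $\Sigma_s(E' \to E,\, \hat{\Omega}' \to \hat{\Omega})$ as its "cross-section," fission uses $\Sigma_f$ with multiplier $\nu \chi(E)/4\pi$, and the (n,in) reactions use $\Sigma_{n,in}$ with multiplier $i$, summed over $i = 2$ up to about 4.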
Because the probability of an increasing i or more neutrons coming out gets lower and lower as you go. In fact, these reactions don't even turn on until-- the n,2n reaction turns on and around like 1 MeV. This one turns on and around like 5 MeV. This one turns on at like 12 MeV. I was just looking up these cross-sections before class. So if you have a reaction that doesn't happen beyond your highest neutron energy, you probably don't need to worry about it. But the reason I had us write all these extra equations-- and I think that the t-shirt for this department needs some updating to include these extra terms-- is because they're all the same term. It is in every case, it's an integral over all of our stuff of a reaction rate, d-stuff, times a multiplier. Every single term in this equation follows the exact same pattern. So what I hope, and I would expect out of you guys, is that if I were to give you this table of possible reactions, you would be able to recreate this neutron transport equation using this template to know that every single reaction is just multiplier, times integral of stuff of a reaction rate, d-stuff, where the reaction rate is just a cross-section times a flux. That's all there is to it. Not bad when you see that everything follows the same pattern, right? That's the basis behind most of the hideous equations that you see in all of physics everywhere, is if there are additive or subtractive terms, they'd better be in the same units. And so they're going to follow some sort of a similar template. Not too scary when you think of it that way. So let's now come up with the loss terms. I should have planned these boards better. Keep these ones here so we keep a template. And we'll have a minus, well, how do we write the anything reaction using this template? How many neutrons undergo a reaction when one neutron undergoes a reaction? Yeah, 1. So our multiplier is 1. We don't have to worry about it. We have our integral of stuff. 
So we'll have to integrate over all possible volumes, angles, and energy. And what's on the inside? AUDIENCE: [INAUDIBLE] total cross-section. MICHAEL SHORT: Yep, total cross-section as a function of energy times the flux, d-stuff to save time. So don't worry, even though the boards are laid out funny, on the pictures of the blackboard that we'll put on the Stellar site, I'll Photoshop these and arrange them so that they're all in sequence. And you can see everything. And then there's the last term to account for. That's the leakage term. This one is a little different. It's the only one that's a little different. And in this case, we're going to say that our little volume element also has a surface to it. And if the neutrons leave the surface, then they leave the volume. So in this case, we'll have a surface integral of our neutron flux, say our neutron flux dS. Because there's no reaction happening when neutrons just move, right. They just go. And so, well, we'd also have to multiply by our normal vector. Because every flux is going to have a certain number of neutrons moving in a certain direction. Let's say we were tracking the flow of neutrons through this surface right here. And if we had a flux going in exactly this direction, through this surface, and this is the normal vector, in this case, flux, which is a vector dotted with the normal vector, is just the flux. Which is to say that if the flux and the normal vector are aligned in the same way, then every neutron going through the surface is tracked as going through the surface. To take the opposite example, what about the situation where you have a surface here and you have a mono-directional flux of neutrons in this direction. And that is your surface normal. What does the flux dotted with the surface normal vector equal? 0, it's just a dot product between the direction that your neutrons are moving and the normal vector saying, does it go out of the surface at all? 
So for these two limiting cases, in this case, the flux is just, let's say, what is it, the number of neutrons leaving the surface is the flux. In this case, no neutrons leave the surface because they're not actually going through the surface. It's a good time to mention, again, that these units of flux are in neutrons per centimeter squared per second, which is to say the number of particles traveling through this area in centimeters squared every second. I know we've gone over it before, but I want you to keep these units in mind. Because now they actually have a little more physical significance. And that's why we have this flux times normal vector dS. That describes the number of neutrons that get through the surface. The last thing we'll do, because everything else is a volume integral, we want this to be a volume integral because we're going to simplify this in terms of getting rid of all the volume stuff. We're going to use what's called the divergence theorem. I hear some snickering. Because I remember this is probably something where you were told in 18.01 or 18.02, this exists. Use it in a few problems. Moving on. That sound about right? This is when you actually use it. So the divergence theorem says that the integral of some variable F dS, over the surface of some volume element, is the same as the volume integral of-- how does this go-- del dot F dV. And this is going to be quite important because one, it gives us a volume integral. So this is, it will be a volume integral of our del dot flux dV, so now everything's in the same units. And if we were to say forget about our little volume element. Let's just assume an infinite reactor. Every single volume integral in turn just instantly disappears. Because we wrote these equations to be identical for any dV anywhere inside this reactor. If the reactor is then infinite, then all of those volume terms disappear. And that's the first simplification that we'll make in the next class.
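A small numerical sanity check of the divergence theorem (my own example, with a made-up field F = (x, y, z) on the unit cube, for which div F = 3 and so both sides should equal 3):

```python
import numpy as np

# Check: surface integral of F . n dS over the cube boundary should equal
# the volume integral of (div F) dV.  For F = (x, y, z), div F = 3.
N = 50
u = (np.arange(N) + 0.5) / N          # midpoint quadrature points on [0, 1]
U, V = np.meshgrid(u, u)
dA = (1.0 / N) ** 2                   # area of each face cell

def F(x, y, z):
    return np.stack([x, y, z])

surface = 0.0
for axis in range(3):                 # sum F . n dS over all six faces
    for side, sign in ((1.0, +1.0), (0.0, -1.0)):
        coords = [U, V]
        coords.insert(axis, np.full_like(U, side))
        # only the component of F along the outward normal contributes
        surface += sign * F(*coords)[axis].sum() * dA

volume = 3.0 * 1.0                    # integral of div F = 3 over unit volume

print(surface, volume)                # both should be 3.0
```

The same bookkeeping is what lets the leakage term, written as a surface integral of flux dotted with the normal, be rewritten as a volume integral of del dot flux, so every term in the transport equation lives under the same volume integral.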
But right here on these five boards, we've developed the neutron transport equation, which is the absolute, most general, highest escalated form of how do you track neutrons through any volume, any direction, any energy, at any time. And we'll spend Thursday and Friday simplifying this to something that we can solve. The other reason that we use this divergence theorem is because we're going to make an approximation. This crazy looking thing right here, we will make an approximation called the diffusion approximation, where we assume that neutrons are like a gas that just diffuse away from each other. And that's going to make solving this really, really easy. It's going to go from some second-order integro-differential equation to just an equation that you can solve with algebra. Yep? AUDIENCE: Do you need the dE d-omega for the last term? MICHAEL SHORT: Probably, yeah, over all E, over all omega. And that flux is going to be of r, E, omega, t. Absolutely. Just to make sure, everything is in the same units, every term has a fairly similar template. The only difference is leakage; there's no reaction here. Every single other term constitutes a reaction. And they all follow this template. So I will stop here because it is five of. See if anyone has any quick questions on what you've got here. I'll make sure to get all of this on the board images so you guys can take a look at it. And I'll project it up on the screen so that we can make some simplifications based on what we see here on Thursday. Yeah? AUDIENCE: What is the neutron birth spectrum? MICHAEL SHORT: The neutron birth spectrum says that if you have any old fission event, what's the probability of those neutrons being born at different energies? What this says is that they're born between 1 and 10 MeV, with a peak at around 2 MeV. But if you want to track the number of neutrons in every energy group, you need to know where they begin. Good question.
So if any of the terms here are unclear what they physically mean, because that's what I'm most interested in you guys knowing, please do ask either on Piazza, on email, on Thursday. Yeah? AUDIENCE: What's the difference between the big N and the little n again? MICHAEL SHORT: The big N and the little n, which one? The cursive one, or this one? AUDIENCE: The little n up top there and then the non-cursive one. MICHAEL SHORT: OK, the little n and the non-cursive one. The little n is the number of neutrons in a volume, at a certain energy, going in a direction, at a certain time. Big N right here is just number density, number of atoms per centimeter cubed. Cursive n is the number of neutrons at an energy, in a volume. We don't care where they're going. And the reason I write these terms up here is we are going to switch from lowercase to capital, or angularly-dependent to angularly-independent, by making a simple approximation to say, we don't care what direction they're going. We just care if they're there. But in real complex neutron physics problems, like the ones solved at the computational reactor physics group, you need to know all the angles. And you need to know the probability or the cross-section that a neutron coming in at this angle leaves at that angle and imparts a certain energy. Because they're all different. For the purposes of this class, I just want you to know that they exist. And the first thing we will do is simplify them away. But this way, you'll be fully prepared for 22.05 and a lifetime of reactor physics, if you so choose. Who here has done a UROP in the computational reactor physics group? Just one, OK. I recommend more. They tend to be the biggest group in the department. They've got like 20 grad students and probably more UROPs than that. So try it out. It's what makes us us, us nukes, right, is neutrons and tracking them to ridiculous proportions. OK, definitely want to let you guys go; it's one of. So I'll see you all on Thursday.
MIT_2201_Introduction_to_Nuclear_Engineering_and_Ionizing_Radiation_Fall_2016 | 19_Uses_of_Photon_and_Ion_Nuclear_Interactions_Characterization_Techniques.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: Anyway, today is going to be a lot lighter than the past few days, which have been heavy on theory and new stuff. And I want to focus today on what can you do with the photon and ion interactions with matter. So we're going to go through a whole bunch of different analytical and materials characterization techniques that use the stuff that we've been learning and see what you can actually do. And I'll be drawing from examples from the open literature, from textbooks, and from my own work. So stuff I was doing here on my PhD thesis is actually a direct result of what do we do here in 22.01. So a quick review just to get it all on the board of what we've been looking at. So I don't hit anyone on the way in. We talked about different photon interactions, which include the photoelectric effect. Let's say this will be the energy of the scattered whatever, and this will be its cross section. We talked about Compton scattering. We talked about pair production. For the photoelectric effect, the energy of the photoelectron comes off like the energy of the gamma ray minus some very small difference, the binding energy of the electron. Let's just call it Eb. And this effect starts when you hit what's called the work function. I'm just going to put this all up there, so when we explain the analytical techniques, we can point to different bits of this and explain why we use these different things. The cross-section, I made sure to keep this handy, so I don't want to lose it. Strongly proportional with z. 
So the cross-section comes out of another line. What was it proportional to? Oh yeah, this is nuts. It's like z to the fifth over energy to the 7/2, which says that for higher z materials, the photoelectron yield is much, much stronger, and it's way more likely at way lower energy. So you can imagine if you wanted to use this in an analytical technique, and you want to study which photoelectrons come from which elements, you might think to use a low energy photon to excite them, not a high energy photon, because like we had done a couple of times before, if we draw our energy versus major cross-section range, we had a graph that looks something like this, where this was the photoelectric effect. This was Compton scattering. This is pair production. And so by knowing what energy-- oh, I'm sorry. That's supposed to be z. And this would give you the dominant process at each combination of energy and z. So if you know what energy photons you've got and what you're looking for, well, there you go. Let's see. What was the energy of the Compton electron? Remember the wavelength formula. It was like alpha 1 minus cosine theta over-- let's see. Another 1 minus cosine theta. In came the gamma ray energy. What was the part that came beforehand? That's why I have this here because I don't want to write anything wrong. It's good to have it all up there at once. 1. Yeah. That's all I was missing. Cool. And the cross-section for Compton scattering scaled something like z over energy, something pretty simple, not nearly as strong as pair production or photoelectric effect, so you can think Compton scattering happens much more dominantly at low z, or the other two don't really happen that much at low z, whichever way you want to think of it. And for pair production, you get a whole mess of stuff. You get positrons coming out. You get a bunch of 511 keV gamma rays and all sorts of other things you can detect. And the cross-section, this one's got the funny scaling term.
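The Compton electron energy formula being recalled here, T = E-gamma times alpha(1 - cos theta) over 1 + alpha(1 - cos theta), with alpha = E-gamma / (m_e c^2), is easy to evaluate numerically (the function name is my own; 662 keV is the familiar Cs-137 gamma line):

```python
import math

M_E_C2_KEV = 511.0   # electron rest energy in keV

def compton_electron_T(E_gamma_keV, theta):
    """Kinetic energy of the Compton electron for a photon of energy
    E_gamma scattering through angle theta:
    T = E * a(1 - cos th) / (1 + a(1 - cos th)),  a = E / (m_e c^2)."""
    a = E_gamma_keV / M_E_C2_KEV
    x = a * (1.0 - math.cos(theta))
    return E_gamma_keV * x / (1.0 + x)

# Forward scatter (theta = 0) transfers nothing; backscatter (theta = pi)
# gives the maximum electron energy -- the "Compton edge."
edge = compton_electron_T(662.0, math.pi)
print(round(edge, 1))   # ~477.7 keV, the well-known Cs-137 Compton edge
```

This is the number you'd look for as the sharp drop-off in a gamma spectrometer's continuum below the 662 keV photopeak.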
This one, yeah. It's like z squared log. Energy over mec squared, so some z squared kind of dependence. So let's keep those up for now. Let's get the electron ones in. AUDIENCE: [INAUDIBLE] mez squared? MICHAEL SHORT: Was it z squared? Let me check. No, that's a c. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yeah. Yeah, just make sure that's clearly a c squared. So now let's call it charged particle, or just more generally ion electron interactions. Since these are more fresh in our head, what are the three ways in which charged particles can interact with matter that we talked about? Just rattle off any one of them. AUDIENCE: Bremsstrahlung? MICHAEL SHORT: Yeah, Bremsstrahlung or radiative. What else? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Is what? AUDIENCE: Ionization. MICHAEL SHORT: Ionization. Which we'll call inelastic collisions. And? AUDIENCE: Rutherford scattering. MICHAEL SHORT: Yep, Rutherford scattering. Which are kind of elastic or hard sphere collisions. And if we had to make kind of a table of when do we care about which effect, let's say this was an ion or electron, scattering off of either electrons or nuclei, in either elastic or inelastic ways. First of all, when do we actually care about elastic scattering off of electrons, which would be hard sphere collisions off of electrons? To help get you going, in an elastic collision, the maximum energy transfer can be this formula gamma times the incoming energy, where gamma is 4 times the incoming mass times the mass of whatever you're hitting over n plus big m squared. Let's say if one of these masses was mass of an electron. What is gamma approximately equal for most cases? Well, let's say this was like electrons scattering off of protons or vice versa. How much energy could an electron transfer to a proton in an elastic collision? Basically zero. The only time which this actually matters is if it's an electron hitting another electron, in which case you can have pretty significant energy transfer. 
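The gamma = 4mM/(m + M)^2 factor written above is also quick to evaluate (illustrative snippet of my own; masses in units of the electron mass):

```python
def gamma_factor(m, M):
    """Maximum fraction of kinetic energy transferable in a single
    elastic collision between masses m and M: 4 m M / (m + M)^2."""
    return 4.0 * m * M / (m + M) ** 2

M_P = 1836.15   # proton mass in electron masses

print(gamma_factor(1.0, 1.0))   # 1.0: equal masses can hand over everything
print(gamma_factor(1.0, M_P))   # ~0.0022: an electron barely dents a proton
```

Which is the quantitative version of the point made in the lecture: elastic collisions off electrons only matter for other electrons (or particles of comparable mass), since anything much heavier or lighter can exchange almost no energy per collision.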
So I'd say for elastic collisions off of electrons, you only care about those for other electrons. And I'm going to put in low energy electrons. Why do we only care about them for low energy electrons? Or in other words, what are the other methods of stopping power or interaction-- yeah, Chris. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Exactly. Yep. We already saw that with Bremsstrahlung the radiated power scales with something like z squared over m squared. So with a really small mass and a really high z and also a higher energy, you end up radiating most of that power away as Bremsstrahlung. And there's not much of a chance of elastic collision. So we only care about low energy electrons when it comes to elastic collisions with electrons. For inelastic collisions with electrons, well, that's the hollow cylinder derivation that we had done from before, where you have some particle with a mass m and a charge little ze, getting slightly deflected by feeling the pull-- depending on what charge it is, it could be towards or away-- of that electron at some impact parameter b. So we care about this pretty much all the time. Electrons and ions or stripped bare nuclei actually matter in this case. For elastic collisions off of nuclei, this is what Rutherford scattering is. It's simple hard sphere collisions, so this matters pretty much all the time. What about inelastic collisions with nuclei? What does an inelastic collision actually mean with a nucleus? So fusion could be one of them, but let's go more generally. We have some nuclear reaction, where it's the old thing that I keep drawing all the time of some little nucleus striking a large nucleus. In an inelastic collision, this is the case we haven't considered yet, but I want to show you what actually happens. In an inelastic collision, these two nuclei join together to form what's called a compound nucleus, or CN, at which point it breaks apart in some other way.
So there might be some different small particle and some different large particle coming off. But in an inelastic collision, it's almost like the incoming particle is absorbed and something else is re-emitted. It could be that same particle at a different energy, or it could be a different particle altogether. So yeah, I'd say fusion is an example. It's kicked off by an inelastic collision, because you've got to have some sort of absorption event of the small nucleus by the big nucleus. And then, maybe if it fuses and just stays that way, it releases a ton of its binding energy, well, that's pretty cool. So these actually do matter, but not for all energies in all cases. So let's go back to the JANIS database of cross-sections to see when inelastic scattering actually matters. Bring us back to normal size. And we'll look at some of the cross-sections to see when do we actually care about inelastic scattering? So we haven't selected a database yet. Let's say we're firing protons at things. And pick a database that actually has some elements listed. Not a lot. But iron, that works. So we can look at the difference between the elastic scattering cross-section and the anything cross-section. So the red curve here-- can I make it thicker easily? Probably. Yeah, I can make it thicker pretty easily. Easier to see. Plots. Wait. That's not what I wanted. I'm not going to mess around with this anymore. Do you guys see the two lines? OK, so this is the elastic scattering cross-section. Kind of funny to see it negative. But then there's the anything cross-section, which picks up at around 3 MeV or so. And it usually takes somewhere between 1 and 10 MeV for inelastic scattering to quote unquote turn on, and that's because you have to be able to excite the nucleus to some next energy level. So sending in a proton at like 0.01 MeV is not going to excite any of the internal particles to a higher energy level.
So if you want to see some pretty interesting cases, let's go to incident neutron data, where we have a ton of this data. And I'll show you some examples. We've got lots more data for neutrons. So now we can look at some of these cross-sections. Like this (n, n') one. Let's take a look at what that looks like. That means a neutron comes in. Different neutron comes out. Notice that the scale only starts at 862 keV. So let's make it something else. Oh my. Look at that. Nothing going on until you reach almost 1 MeV, which means, hey, inelastic scattering doesn't really turn on until then. So I would say that this can matter, but for higher energy collisions. So yeah, it matters pretty much all the time. But higher energy collisions. And there's actually-- yeah. AUDIENCE: What does it say in the top left box? MICHAEL SHORT: Only for low energy electrons. That's the sort of compound reason that I and Chris said: one is that you can't transfer much mass in an elastic collision, or I'm sorry, much energy in an elastic collision unless the masses are close enough to each other. And two, at higher energies, the electron radiates Bremsstrahlung much, much, much faster. As we saw at around 10 MeV, Bremsstrahlung and inelastic scattering give about equal contributions to the stopping power for high z materials like lead. So once you're down in, let's say, the keV range, yeah, electron elastic collisions might matter. So we talked about those three. Now I think we can launch into the analytical techniques. So for the rest of the lecture today, it's all going to be what can you do with the stuff that we've been learning since the first exam. I know it hasn't been long since, but we've actually learned a ton. And I want to show you what's actually possible. And this is not going to be with slides. It's all live from websites that I'd love for you guys to be able to follow along with or check out at home.
So I'm going to show you an awesome resource through the MIT libraries and how to get there. If you go to vera.mit.edu, there's a great tool called the ASM Handbook. You can see I've been there before. There's the ASM handbooks online, and this is kind of the everything-to-know reference for materials science, metallurgy, and analytical techniques, absolutely everything from corrosion to fractography, to characterization, to structure of materials, to where you can find every single alloy, to binary phase diagrams of how things mix, and we're going to head to one of these handbooks. Number nine or 10, materials characterization, because with the stuff that's on this board, you can understand how most materials characterization techniques work. And I want to show you a few of them. One of which-- no, two of which, we're going to demo at next Friday's recitation. So I think I told you guys in the syllabus and probably in person that we're going to try out some scanning electron microscopy and some energy dispersive X-ray or EDX analysis. So with the X-ray transition stuff you've learned, you actually know how to elementally analyze different materials. And with a scanning electron microscope, you can get some idea about how electrons can make images much better than optical images. So let's head to electron optical methods, scanning electron microscopy, and show you what one of these things actually looks like. Let's take a look at an SEM, or scanning electron microscope. So up at the top, there is a device called the electron gun. For now, just imagine it's a source of electrons, but in a few minutes, we'll actually explain how it works using the principle of thermionic emission, which we talked about last Friday. You've got some electronic lenses, some focusing coils, that cause this beam to get focused further and further. So let's say you had this electron filament giving off electrons in all directions.
When you see boxes with x's like this on an electron optics diagram, it usually means this is like a focusing coil of some sort. So that will cause the electrons to get bent and focused. There'll be another set of coils that focuses them further and some scanning coils that actually raster or xy scan this beam across the surface of a material. And so in this way, what you're actually doing is putting the electron beam at one part of your material and then, with another detector-- let's call it a secondary electron detector-- looking at the electrons produced from collisions with those other electrons that then get detected here, and the number of electrons produced at a point gives you the brightness of the image. That's kind of as simple as it is despite how complicated this diagram looks. There's an electron source. There's coils that scan it back and forth. Like has anyone ever seen the old cathode ray tube, CRT televisions? There's going to come a day when that answer is no. And I'm kind of worried for that, because that's the day I'll officially become old. But for now, everyone's seen a CRT, and the way that actually works is there's an electron gun that fires and scans left to right and up to down, or rasters, and produces that electron image. In an SEM, you use an electron gun, kind of similar, and then collect the electrons generated in the specimen, what's called secondary electrons. And the number that you see gives you the brightness of the image. The cool thing is this actually allows you to look at both secondary electron contrast and topology of a sample. So let's say this was your secondary electron detector. And you had an electron beam scanning across your sample to some of those peaks and valleys. And I'll probably draw one right here for a good reason. Let's say the electrons hit right here, and you send out a wave of secondary electrons. The material partly determines how many electrons come off, but also, so does the geometry.
There will usually be a little cage with some sort of a positive voltage on it to attract those secondary electrons. And some of them will curve into the detector and become part of your signal, but some of them won't. Meanwhile, if you have this beam right here producing secondary electrons, pretty much all of them go slamming into your detector. And that's what actually allows the electron microscope to get topology. That's why images in the SEM look fairly 3D. So I want to show you a few examples from my own boredom when I was doing a lot of science. There we go. I have a whole gallery of SEM images when I was supposed to be doing something better. Oh no. 404. My website's broken. Oh yeah. This is also what you do when you're bored, right? Make your own 404 page. My SEM galleries are dead. Well, that's OK. I have other images ready to show you guys. So this is a neat-- this is a paper that I published out of my PhD work that shows the real difference between optical and electron microscopy. Part of it is the limit of your resolution depends on the wavelength or de Broglie wavelength of the thing you're using to make the image. So an optical microscope, in this case, you can't get better resolution than about half a micron, because even the blue wavelengths of light are getting down into about the 450 nanometer regime. And it's very difficult without interference techniques or other fancy things to beat that diffraction limit, to beat the sort of wavelength limit of optical microscopy. So this is a 500x optical microscope image, and you can see these little fingers-- in this case, it's liquid lead bismuth penetrating into a stainless steel that we were doing corrosion experiments on. And that's as good as the image can get in an optical microscope. Switch down to an SEM, and then all of a sudden the picture becomes much, much, much more clear. You can start to see things-- the best SEM we have in our lab has an ultimate resolution of about 1 nanometer. 
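The wavelength argument in this passage (resolution is bounded by the wavelength of whatever makes the image) is easy to check numerically. This sketch uses the non-relativistic de Broglie formula lambda = h/p with p from KE = eV, which is only an approximation at typical SEM voltages, but it shows the roughly five-orders-of-magnitude gap versus blue light; the 10 kV beam voltage is an illustrative value, not the lecture's instrument setting.

```python
import math

H = 6.626e-34      # Planck constant, J*s
M_E = 9.109e-31    # electron mass, kg
Q_E = 1.602e-19    # electron charge, C

def de_broglie_nm(voltage):
    """Non-relativistic de Broglie wavelength (nm) of an electron
    accelerated through `voltage` volts."""
    p = math.sqrt(2.0 * M_E * Q_E * voltage)   # momentum from KE = eV
    return H / p * 1e9

# A 10 kV SEM beam versus blue light at ~450 nm:
print(de_broglie_nm(10e3))   # ~0.012 nm, far below the ~0.5 micron optical limit
```

So the SEM's practical resolution (about 1 to 20 nm here) is limited by the electron optics and the interaction volume, not by the beam's wavelength, which is smaller still.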
Now, resolution is kind of a funny thing. It's neat to tell you what that means. It doesn't mean that if you have a pattern of lines that are exactly one nanometer thick, that you will see them as lines 1 nanometer thick. It means that if you then plot, let's say, your signal or your brightness versus x, you'll have some barely distinguishable and fuzzy lines, just enough for you to say those are two optically distinct features. So what you'll actually see in a 1 nanometer microscope is maybe something like this. That's technically resolved at the level of 1 nanometer. So the best you can do for crisp objects in this thing is about 20 nanometers. Not bad. It's like something that's a few thousand atoms on a side. Pretty cool. And so what you can see in here is liquid lead bismuth penetrating into this stainless steel, and you notice a few different things. This image was taken in backscatter electron mode. Back scattering is-- we've talked about this before. When you have a scattering event where theta equals pi, we call that backscatter. Let's kind of split this into regular and backscatter. For a backscattering, the cross-section for this is proportional to z squared, another one of those extremely z dependent cross-sections, which means that the larger the z, the higher the atomic number the more backscatter contrast you get, so if you want to figure out where the little lead whiskers are penetrating into the stainless steel, since lead has a z of like 82, and iron has a z of like 26, it shows up like night and day. Do you have a question, Julia? OK. Yeah, so this is something we'll actually be able to do. So for the two folks I asked to bring in samples, if you want to bring in something with very different elements in it, we should be able to see it in backscatter contrast very, very clearly. And in the image of the SEM, I'll go back to that-- which one of these pages is it? Notice here that there is a backscatter detector. 
So it will detect which of those electrons scatter back at almost 180 degrees. And that's at about z squared proportionality, super useful tool, because if you want to see, for example, where the circuit board traces are, and you want to look at aluminum versus oxygen contrast, that'll help you really well. If you want to see where is lead penetrating into stainless steel, it shows up clear as day, which is pretty fun. The other thing the electrons will do when they enter into a material is excite lots of things. So anything from X-rays to Auger electrons. So now I'd like to bring up Auger electron spectroscopy. Electron or X-ray spectroscopic methods. Auger electron spectroscopy, it's not just a thing to trip you up on the exam or a little minutia from radioactive decay. It's actually incredibly useful, because of where the Auger electrons are generated and what they tell you about the material. So as a quick refresher, normally you could have, let's say, a photon come in and eject a photoelectron, and as another electron comes to fill that hole, either an X-ray will be emitted or an Auger electron will be emitted. And it's those Auger electrons-- they're outer binding energy electrons. They have very low binding energy, which means-- let's see. I keep running out of room. You know, I'm not going to draw it. I'm going to show you, because I know there is a diagram of what I want to show you here. If you want to see where the Auger electrons are actually produced in the material-- here we go. Since they're such low energy, the only Auger electrons that actually get out would be in this outer few monolayers. In fact, there's some Auger electron energies that can only get out one or two atomic monolayers from a material. So it's one of the best surface analysis techniques that we have. You can both use Auger electrons to make an electron image, like any other SEM. And you can collect them and measure their energy to figure out which elements they came from.
And this kind of teardrop shape is a-- one, it's a great synthesis of all the information you need to know in the SEM that we'll see on Friday. And two, it's why people screw up SEM analysis a lot. A lot of the X-ray excitation happens down here. Why do you think that the X-ray excitation would happen near the end of the path of the electron beam, from what you know about stopping power? Or, if I asked you to draw a graph of, let's say, energy versus stopping power for ionization, what would it look like? Yeah. AUDIENCE: It comes up like a peak at low energy. MICHAEL SHORT: Yeah. AUDIENCE: And then drops back down. MICHAEL SHORT: Yeah. AUDIENCE: And as the energy goes out, it sort of flattens out. MICHAEL SHORT: Yep, sort of flattens out, and then eventually starts picking up again but not very much. So as the energy of whatever particle you're sending in gets lower, its stopping power increases, and you have a much higher chance of this ionization happening, especially in the case of electrons. They usually come in at between 10 and 40 kV. And so near the end of their range is where they produce a lot of the X-rays. Now there's a lot of other nuances to say, well, which X-rays were produced here and what elements are they from. Let's say you had the same material here or here. Fewer X-rays will get out of the bottom region than they will from the top region. So if you happen to be analyzing something that has a gradient in composition or a change in composition from the top to the bottom, you might be like, oh, well, I have a few nanometers of oxygen on silicon. Why aren't I seeing any oxygen X-rays? Because you're probably generating them down here. That's one of those things to note. So sometimes you'll see an elemental map of things that shows X-rays of a certain element coming from somewhere, and you can't see it at all in the image. That's because they might be underneath what you can see in the image.
It's kind of tricky like that. I'll show you some examples of what those maps look like, also from this paper. So from the electron image, we sort of concluded, all right, lead is probably penetrating into the stainless steel. How do we know for sure? You can make EDX or elemental dispersive-- I'm sorry, energy dispersive X-ray maps by focusing the electron beam at one point, collecting all the different X-rays, and then moving from one point to another to see when do you get characteristic X-rays from each of those elements. So you can actually prove to say, yes, those little fingers are indeed bismuth and lead, and you can see that, in this case, where the lead and bismuth is, the iron is not. But the curious thing we found is that in this whole band right here, most of the chromium disappeared. So it turns out that the corrosion mechanism was chromium dissolution. And we would not have been able to know that without this EDX mapping, and without understanding how the EDX maps are made from the electrons interacting with matter and producing characteristic X-rays, we wouldn't have been able to prove this. Yet another example where the basic stuff you're learning in 22.01 is the theoretical underpinning of the techniques that we use all the time in materials science, which I thought was pretty cool. I've got more examples of that that are even more striking, because I let it collect for a little longer. You can actually see right here that where the bismuth is, the iron isn't, but the iron's not dissolving. The chromium is. It's just the lead and bismuth are kind of sucking the chromium out of the metal right there, and that's what's making the stainless steel less stainless. It's pretty neat. Then on to EDX analysis. What sort of information are we looking at in every one of these pixels? I have a couple of other example X-ray spectra. So now we're in a position to understand why one of these X-ray spectra looks the way it does.
In this case, we're firing electrons at a material. Let's see. Where is our material we're firing at? Right here. So we're firing in electrons. And in some cases, let's say we had an iron atom. That electron can eject another electron. And then one of those other electrons will fall down in that shell, giving off a characteristic X-ray. In this case, since it's from the third to the second shell, that would be what we call an L X-ray, or a something-to-level-two transition. And every element has got its characteristic X-ray transitions, like we saw on the NIST X-ray transition database. And since we know what all of those are, we know where to expect them. So we know where we expect to see chromium's X-rays and iron's X-rays. Gold's kind of an interesting one. There's two things about doing analysis with gold. A lot of times you have to coat your materials in gold to boost their secondary electron contrast. But also gold, I think it's its L line or M line, I forget which one, is the same as argon's K line. And we have an expression in the electron microscopy world: the probability of finding argon in your sample decreases with experience. Takes a second to parse that. Chances are, if you're looking at a solid material, you don't have argon in it. But there are extra lines that overlap with each other, like the L line for gold and the K line for argon are at pretty much the same energy, certainly similar enough that it's within the resolution, or like full width at half maximum, of these two peaks. So remember how we were analyzing the uncertainty of our banana spectra with the FWHM or full width at half maximum? Same thing here, and you can really see that the energy resolution of this detector is not the best. So if you see a peak, it might be due to two or more peaks crowding in right there. And with a lot of correction factors that I won't get into, you can then use this information to integrate the area under these peaks and get elemental analysis.
You can say how much chromium or how much iron and silicon is in one of these samples. What's this stuff here on the bottom? Anyone tell me? That continuum of observed X-ray energies? AUDIENCE: Compton. MICHAEL SHORT: Compton scattering is a photon effect, so that would be-- if this were a photon analysis spectrum, then you would see something like this but of a different shape. You'd have that Compton bowl with an edge. But this is, in this case, electrons interacting with material. What do you think is causing that broad background? Well, what are the different ways in which electrons can interact with matter? You're seeing the ionizations here. We're not really seeing Rutherford scattering. What's left? AUDIENCE: Bremsstrahlung? MICHAEL SHORT: Bremsstrahlung. Yep, that's exactly it. So the observed Bremsstrahlung spectrum follows this sort of characteristic peak-early-and-then-tail-off curve. What's the actual Bremsstrahlung spectrum that we're not sensing? What would it look like? Always running out of room. If this is what we're actually observing, let's say we have a few peaks, that would be intensity, and that would be energy, what's really going on physically that we're not seeing? Yeah. AUDIENCE: Isn't it sort of like almost like exponential decay. So it starts out with very high intensity and goes down. MICHAEL SHORT: That's right. You actually should get more low energy Bremsstrahlung. One of the reasons you don't is that the lower the energy those X-rays come out at, the more they get self-absorbed in the material, in the few gas molecules in the SEM, and in the window of the detector. So just because this is what you see doesn't mean this is what's actually going on in your material. If we think back then to where the electrons and X-rays are generated, the X-rays that are generated down here, the lower energy ones, are going to be shielded more.
And this kind of messes up your elemental analysis, because for X-rays produced here, proportionally more of the low energy ones will get out than for the ones produced here. So as you change your-- as you change your electron beam energy, you might see your elemental composition appear to change when you know it's really not. And that's because where the X-rays are being generated changes, and proportionately, more of the low energy ones get self shielded by your material. So you actually have to correct for that and input your beam energy into the EDX analyzer so it knows how to correct for this. But with the understanding I've been giving you guys in this class, you can understand, like, well, why can you get screwed up? Why do we have to have all these correction factors? I think it's pretty neat. Then let's get on to some of the other methods, like X-ray photoelectron spectroscopy or XPS. This is something I hinted at a little bit earlier that actually uses the photoelectric effect, because it's a photoelectron spectroscopy method. This one's incredibly useful because not only does it tell you what elements are there, but in what binding state they are, because photoelectron spectrometers can be incredibly precise. The energy equation should look pretty familiar to you: the photoelectron energy you get is the gamma ray energy that comes in minus the binding energy of that electron and the work function. And so you can very, very simply figure out, for a given element and a given electron shell, what photoelectron energies to expect. So you can collect them. So I'll show you another example from this paper, where we started to do that. We wanted to answer the question, what are the oxides forming on the stainless steel when lead corrodes it? And just telling you which elements are there and in what proportion doesn't give the answer, because what if there's multiple phases of each oxide?
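The XPS energy balance described here (photoelectron energy equals photon energy minus binding energy minus work function) is simple enough to sketch numerically. The Al K-alpha line at 1486.6 eV matches the aluminum source mentioned in the lecture; the Fe 2p binding energy and the work function below are illustrative values of my own, not taken from the lecture's data.

```python
AL_K_ALPHA = 1486.6   # eV, Al K-alpha X-ray line (a common XPS source)

def photoelectron_energy(photon_ev, binding_ev, work_function_ev=4.5):
    """Kinetic energy of an ejected photoelectron:
    KE = E_photon - E_binding - work function (phi here is illustrative)."""
    return photon_ev - binding_ev - work_function_ev

# Illustrative binding energy for an Fe 2p-shell electron (~707 eV).
# A 0.1 eV shift in the measured KE maps one-to-one onto a 0.1 eV shift
# in binding energy, which is how XPS separates oxidation states.
print(photoelectron_energy(AL_K_ALPHA, 707.0))
```

Since the photon energy is fixed and monochromatic, every small chemical shift in binding energy shows up directly as a shift in the measured photoelectron peak, which is the point of the 0.1 eV precision scans described next.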
Like, for example, iron can take forms like FeO, Fe2O3, Fe3O4, and this FeO can actually have a range of stoichiometries. So how do you know? You don't know. There could be like scores of phases of this iron oxide. The question is how you know which ones are there. The photoelectrons will tell you. So what you can do first is fire monochromatic X-rays, so single energy X-rays, in this case from aluminum, at your material and see which photoelectrons of which energy come off. And you can tell which atomic shell they're from and which elements they should be to a very high precision. In this case, this is done to 100 milli-eV, or 0.1 eV, precision. So we can tell not only what elements are there, but what shells they came from. Then you can get even crazier. You can scan very slowly over one of these peaks with 0.001 eV precision and start to see something pretty cool. If you look at the carbon 1s electrons, you can see that there are actually three of them only a couple eV apart, and this corresponds to different binding states of molecules with that carbon. For an even more subtle example, one that ended up being incredibly important for us, here's one of the chromium 2p shell peaks. You can actually see there's three of them superimposed to give that funny looking peak shape right there. What that actually tells us is that there's chromium in three different binding states in that oxide. And the ones we figured out must be there, we saw the ones corresponding to Cr2O3, FeCr2O4, and I forget which other one, but we have them tabulated. There we go. Oh wow. Fe 2.4 Cr 0.64, known crystallographic phases of these oxides. So you can look at the peaks, found to have resolution of like a 100th of an electron volt, compared to reference values taken on pure compounds and materials, to figure out what actual oxides you have.
That can help tell you things like how protective are they, how fast are they going to grow, and are they going to be a problem if you want to use this new stainless steel that we developed in a lead bismuth reactor. The biggest problem with lead bismuth reactors is lead corrodes like everything. And so the whole point of my graduate studies was to design an alloy that doesn't corrode in lead and make an alloy composite out of it. But you can't prove that it works unless you not only know how fast it corrodes, but how it corrodes, which oxides form, and in what order. That's the last part I haven't told you about yet-- what order. So I want to switch to another technique called secondary ion mass spectroscopy or SIMS. In this case, you start off with firing ions at a material, which will then eject or sputter away secondary ions. In this case, this process of sputtering-- let's say this is your material here. You send in something like oxygen ions, which might be like O2 minus with a mass of 32, and then you blast off or sputter away a few atoms at a time from that surface, and they'll come off with various masses and charges. And in this case, the sputtering could be due to Rutherford scattering, because you might directly ballistically slam an ion out of the surface. Then every one of these ions has a different mass and a different charge. And by sending it through a mass spectrometer, something that separates these materials by their mass to charge ratio, because the higher the charge, the more deflected an ion will be. But the higher the mass, the less deflected it will be. That should sound really familiar. In our idea here of how these ionization collisions happen, if you remember, the higher the charge, the stronger the Coulomb force-- that q1, q2 over r squared. I think there was a constant in there. So the higher the charge, the higher those q's, and the stronger the Coulomb forces. But the larger the masses, the less momentum it can impart.
And so the deflection will be weaker. Exact same thing's happening here. And you can separate out atoms not only by their charge and their mass, but specifically by their isotope. So this is one of those ways that you can figure out and make an isotopic map of a material in three dimensions. You can scan your ion beam across the material and collect the ions at every point. And as you sputter, you slowly wear away layers of this material. And so you can actually reconstruct a 3D map with almost nanometer precision of every single isotope that was at every location, which is quite cool. As you can see, these mass-to-charge ratios can depend on which isotope of silicon you have, what sort of cluster, what molecule, what charge you have. And you can do some pretty crazy analysis of things to even figure out what sort of compounds exist on surfaces, because sometimes you sputter off whole molecules. They're going to have their own mass to charge ratio. And that's what we did for this lead bismuth work. Lots of XPS spectra to jump through. We wanted to find out which oxides were forming and in what order. Which one's the best one I want to show? Think it's this one. So in this case, we used ion sputtering to sputter away surface layers to a depth of a few hundred nanometers, and we were actually able to show that the chromium oxide, right here, was on the outside of the sample, followed by silicon oxide, followed by iron metal. So in this way, we were able to figure out, using XPS, the nature of the oxides and, using SIMS, the order of the oxides, so not only how fast were they growing to nanometer precision, but in what order did they form. And that helped us figure out this sort of synergistic chromium and silicon oxidation mechanism that helps really protect the layers of the stainless steel and explain why it's corrosion resistant in lead bismuth, all using principles from 22.01. So it's about two of five.
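The mass-to-charge separation described here can be sketched with the textbook magnetic-sector relation r = mv/(qB). The magnetic-sector geometry, field strength, and ion energy below are my own illustrative assumptions (real SIMS instruments vary), but the scaling is the point: at fixed energy and charge, the bend radius grows as the square root of the mass, which is what separates isotopes.

```python
import math

AMU = 1.66054e-27   # kg per atomic mass unit
Q_E = 1.602e-19     # elementary charge, C

def bend_radius(mass_amu, charge_e, kinetic_ev, b_tesla=0.1):
    """Radius of curvature r = m v / (q B) for an ion in a magnetic
    sector; the field and energy values here are illustrative."""
    m = mass_amu * AMU
    v = math.sqrt(2.0 * kinetic_ev * Q_E / m)   # speed from kinetic energy
    return m * v / (charge_e * Q_E * b_tesla)

# Silicon-28 versus silicon-30 at the same energy and charge: the heavier
# isotope bends on a larger radius, by a factor of sqrt(30/28).
r28 = bend_radius(28, 1, 1000.0)
r30 = bend_radius(30, 1, 1000.0)
print(r30 / r28)   # ~1.035
```

A few percent difference in radius is plenty for the spectrometer to resolve, which is why SIMS can map individual isotopes rather than just elements.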
So I wanted to stop and see if you guys have any questions on these analytical techniques, knowing that we're actually going to go do a couple of these next Friday. Has anyone used any of these before? Yeah, which ones have you used? AUDIENCE: SEM and XPS, XPS [INAUDIBLE] MICHAEL SHORT: SEM and XPS? OK, cool. Yeah, we've got all these instruments I think except for SIMS here at MIT. Yeah. Yeah. AUDIENCE: Sorry, what is that second equation on the energy for Compton scattering? MICHAEL SHORT: This would be the energy of the Compton electron that comes out when a photon scatters off of it. So the photon will end up losing some energy, and the Compton electron will pick up that energy. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Sorry? AUDIENCE: What's the denominator of that? MICHAEL SHORT: It's a 1 plus alpha times 1 minus cosine theta, where alpha-- I'll mention what alpha is. It's a ratio of the photon energy to the electron rest mass energy. This is kind of a nice-- on these two boards right here, it's kind of a nice summary of the stuff we've been doing over the last three weeks or so, and then all the stuff I showed you today is what you can do with it.
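The Compton electron energy discussed in this exchange, T = E_gamma alpha(1 - cos theta) / (1 + alpha(1 - cos theta)) with alpha the ratio of photon energy to electron rest mass energy, can be checked with a short sketch. The Cs-137 662 keV example is mine, not the lecture's.

```python
import math

M_E_C2 = 0.511   # electron rest mass energy, MeV

def compton_electron_energy(e_gamma, theta):
    """Energy (MeV) given to the Compton electron when a photon of
    energy e_gamma (MeV) scatters through angle theta (radians)."""
    alpha = e_gamma / M_E_C2                 # photon energy / rest mass energy
    x = alpha * (1.0 - math.cos(theta))
    return e_gamma * x / (1.0 + x)

# Backscatter (theta = pi) gives the maximum transfer -- the Compton edge.
print(compton_electron_energy(0.662, math.pi))   # ~0.478 MeV for a Cs-137 photon
```

At theta = 0 the electron gets nothing, and the transfer climbs monotonically to the Compton edge at theta = pi, which is the edge feature seen in photon spectra.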
MIT 22.01 Introduction to Nuclear Engineering and Ionizing Radiation, Fall 2016
13: Practical Radiation Counting Experiments

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: OK. I think things have been getting pretty derivy lately, so I wanted to shift gears to something a little bit more practical. So I started alluding to this hypothetical radiation source I might have right here, and things like if you have a source of known activity, which we calculated yesterday, and you have a detector of unknown efficiency, how do you know what the efficiency is? How do you know what, let's say, your dose distance relationship is? And how do you calculate all this stuff? So let's take the general situation that we're starting to work out. Let's say we have a Geiger counter right here. That's our GM tube. And we have a point source that's emitting things in all directions. Let's go with the stuff from yesterday. Let's say it's a cobalt 60 source. It's now 0.52 microcuries. The question is, how many counts do you expect in this detector when it's a certain distance away? So I've actually laser-cut out a little Geiger counter jig from a previous class. And you guys can all do this too. Who here has been to the IDC before? A couple. The International Design Center-- so they've got a laser cutter that you can sign up to use, which is where I did this. And it's set to just take a Geiger counter and put your sources at some fixed distance away so you can discover the dose distance relationship with things. Speaking of, does anybody know what the relationship is between dose and distance or measured activity and distance? Yeah, Luke. AUDIENCE: [INAUDIBLE] r cubed. MICHAEL SHORT: Close.
It's, let's say, the measured activity would be proportional to 1 over r squared. Who knows where this comes from? I'll move the source a bit away to lessen the beeping. Yeah. AUDIENCE: Well, the flux of particles coming out is just [INAUDIBLE] over the surface area of [INAUDIBLE] and the [INAUDIBLE] is 4 pi r squared. MICHAEL SHORT: Yeah, exactly. If you were to draw a hypothetical sphere around the source right here, then you've got, let's say, a detector that's roughly rectangular with a fixed area. Let's say it's got a half length L and a half width W. Then the area-- I'm sorry, let's just say length L, width W-- would be just L times W. And actually, what Chris mentioned as the solid angle subtended by this detector right here-- in other words, at a certain distance r away, how much of this sphere-- how much does the area of this sphere-- does this detector take up? In other words, how many of these gamma rays are going to go in a different direction than the detector, versus how many we'll actually enter the detector? And a simple formula for the solid angle is just the surface area of whatever you've got over r squared. It's a pretty good approximation to the solid angle of something for very long distances, and it's probably the one that you'll see in the reading. But I wanted to show you the actual formula, in this case, for a rectangle-- solid angle comparison. Good, that's up there. So let's say on the x-axis, right here, this would be distance from the source to the detector in meters. And I've said that we've got some sort of a detector that is 2.5 by 10 meters in size. That's an enormous detector. Let's actually switch it to the units right here. So this is roughly 10 centimeters long. So let's change our length to 0.1. And what do you think the width of this Geiger counter is in meters? AUDIENCE: A centimeter MICHAEL SHORT: A centimeter. 0.01. We're going to have to change our axes so we can actually see the graph. 
So instead of looking all the way out to 15 meters away, let's look one meter away, maybe less. This whole thing is probably 50 centimeters. And we'll take a look there. And what we notice is that except for extremely short distances, this approximate formula for the solid angle-- or in other words, if I were to draw a sphere around the source that's the radius of the distance between the source and the detector, how much of that sphere's area does the detector take up? This approximate formula-- the blue curve-- is a pretty good approximation of the red curve until you get really, really close to 5 centimeters away, or about this distance right here. Does anyone know why this formula would break down? What happens as r goes to 0? What happens to our solid angle or our approximation for our solid angle? AUDIENCE: Goes to infinity. MICHAEL SHORT: It goes to infinity, right? Can a detector actually take up infinite area on, well, anything? Never mind that unit sphere. Not quite. If you were to take this detector and bring the radius down to 0 so that the source and the detector, not counting the thickness of the plastic, were right up against each other, if that solid angle went to, well, infinity, then the count should go to infinity, and it does not compute. Does anyone know how many-- first of all, who here has heard of solid angle before? So a little more than half of you. That's getting clicky. I'm going to turn that off. Solid angle is kind of the analog to regular old angle, except in 3D. So instead of looking at things in radians, this has the unit of what's called steradians-- steradians-- with a full sphere taking up 4 pi steradians. Interestingly enough, 4 pi is also the surface area of a unit sphere with a radius of 1. So that's where this comes from. If something were to completely cover a unit sphere-- like, if you were to, let's say, encase a light source in tin foil completely, and say, how much of that solid angle does the tin foil encase?
It would be 4 pi steradians, regardless of the size of the sphere or how much tin foil you had to use. So this pretty simple formula isn't the best approximation for it. And I'm not going to go through the derivation, because like I said, today is going to be of a more practical nature. There is a more complex and rigorous formula for the solid angle of something, let's say, in this case, a rectangle of length L and width W, from a certain distance r, or, in this case, on our graph, x away from the sphere. And you can actually see that red curve right there. Once you get to a few centimeters away, it's pretty close. Anyone want to guess what the maximum value of the red curve is? If I take this source and slam it right up next to the detector, how much of the sphere is the detector subtending? AUDIENCE: 2 pi MICHAEL SHORT: 2 pi-- half the sphere. Because let's say this whole side of the source is completely obscured by the detector and this whole side is free to move. And if you look really closely, yep, at 0, the correct formula does give you 2 pi steradians. Which is to say that half the gamma rays leaving the source would enter the detector. I didn't say anything about getting counted yet. That's where the detector efficiency comes in. And that's something we're going to be measuring today, which is why I have my big bag of burnt bananas. These are the ashes of roughly 50 pounds of bananas charred to a crisp at about 250 Fahrenheit for 12 hours in most of the dorms and a couple of the frat houses. So last year, I had the students, everyone, take home about 50 pounds of bananas or 50 bananas-- I forget which one. It was a lot. And we did some distributed labor. So everybody peeled the bananas, put them in the oven, baked them, separated off the tin foil, baked off as much water and sugar as possible to concentrate the potassium 40 in the banana. So there's a reason I've been using potassium 40 as a lot of examples in this class, because you're full of it.
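The two solid-angle formulas being compared on the graph can be sketched in a few lines of Python. The lecture never writes out the rigorous rectangle formula, so the standard on-axis textbook form is assumed here; the detector dimensions are the 10 cm by 1 cm Geiger tube from the discussion.

```python
import math

def omega_approx(length, width, r):
    # Simple far-field approximation: detector area over r squared
    return (length * width) / r**2

def omega_exact(length, width, r):
    # On-axis solid angle of a rectangle viewed from a distance r above
    # its center (standard textbook form, assumed here -- the lecture
    # only plots it). Saturates at 2*pi as r goes to 0.
    a = length / (2 * r)
    b = width / (2 * r)
    return 4 * math.atan(a * b / math.sqrt(1 + a**2 + b**2))

L, W = 0.10, 0.01  # ~10 cm x 1 cm Geiger tube, in meters

# Far away (50 cm), the approximation is within about 1% of exact
print(omega_approx(L, W, 0.5), omega_exact(L, W, 0.5))

# Up close, the approximation blows up while the exact form stays bounded
print(omega_approx(L, W, 0.001), omega_exact(L, W, 0.001))
```

Pushing r toward zero in `omega_exact` gives 2 pi steradians, half the sphere, exactly as described for the source slammed up against the detector.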
That's pretty much the short answer of it. If you eat bananas-- which, I think most of you guys do-- you're intaking a fair bit of radioactive potassium, which is a positron emitter, and also it does electron capture and all that fun stuff. So today, what we're going to be doing is calculating the activity of one banana. But that's kind of a very difficult thing to do. So anyone know how radioactive one banana actually is in any units at all? Whatever it is, it's very, very, very, very little. One banana contains a minuscule but measurable amount of radioactivity. And one of the ways to boost your confidence on any sort of radiation measurement is to boost your signal strength or to boost your counting time. And because I don't want to count for the next seven years, we've concentrated the ashes of 50 pounds of bananas in here to boost your signal strength, which is going to boost your count rate, which is the intro I want to give to statistics certainty and counting. So let's take one of the homework problems as a motivating example. You guys, did anyone notice the extra credit problem on the homework? Let's start talking about how we'd go about that. That should motivate the rest of the day. So I'll pull up that problem set, number 4-- which, by the way, is due Thursday, not Tuesday, because we have no class on Tuesday. That was a surprise to me, but whatever. I'll still be here. We don't get holidays-- just you guys. So bonus question-- go do this. So we all know that smoking is a major source of radioactivity. And if you think about it, it's not just the smoke that contains those radiation particles, it's got to be the cigarettes, cigars, and other smokables themselves. And so I was thinking, there's no better concentrated source of smoking radioactivity than a smoke shop. There's one out at [INAUDIBLE] at the end of the T. There's probably some closer to campus. But I know there's a whole bunch that are T accessible. 
And so I was thinking it'd be neat for us to find out, how radioactive is it to work in a smoke shop? Because there's all these radon decay-- oh, yeah? You actually know. AUDIENCE: You know you have to be 21 to go into a smoke shop? MICHAEL SHORT: Are you serious? But you have to be 18 to smoke. AUDIENCE: Yeah. It's a Cambridge, Boston law. MICHAEL SHORT: Interesting. We may have to leave the city for this one. [LAUGHTER] What about Somerville? I think-- AUDIENCE: It's still-- you're not allowed to go into there either. It's all of Massachusetts now. MICHAEL SHORT: Wow. AUDIENCE: So [INAUDIBLE] [INTERPOSING VOICES] AUDIENCE: [INAUDIBLE] you can buy them. It's still legislated town-to-town. Most of the Boston area is 21. But once you leave Boston-- MICHAEL SHORT: It varies. AUDIENCE: Yeah. MICHAEL SHORT: Yeah. I don't think it is where I'm-- from Swampscott, I don't think it's 21. But that's kind of up on the commuter rail. You don't want to go to Swampscott. At any rate, I would think that, OK, it's probably a fairly radioactive place to work. But the question is, how long would you actually have to bring a detector in and count in order to be sure that there's any sort of measurable difference? And so, without deriving all of this stuff about binomial, Poisson, and normal statistics, I'll say, that's in the reading for today. I want to show you some practical uses and applications of this stuff. Let's say you were to measure some count rate in some experiment. And we'll put this in units of counts per minute, which would be the number of counts divided by the counting time. That's about as simple as it gets. From Poisson statistics, you can say that the standard deviation of that count rate is actually just the square root of the count rate divided by time. And that's kind of the simple thing right here.
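That square-root-of-rate-over-time relation is easy to check numerically. Here's a minimal sketch, using the 25 counts per minute and 67 minute background numbers the lecture works with:

```python
import math

def rate_and_sigma(counts, t):
    # Poisson statistics: sigma of a measured count rate is
    # sqrt(N)/t, which equals sqrt(rate / t)
    rate = counts / t
    return rate, math.sqrt(rate / t)

# ~25 cpm background counted for 67 minutes
rate, sigma = rate_and_sigma(25 * 67, 67.0)
print(rate, sigma)        # 25 cpm +/- ~0.61 cpm (1 sigma)
print(2 * sigma / rate)   # 2-sigma relative error, just under 5%
```

Counting longer at the same rate shrinks sigma, which is exactly why the 67 minute background count gets the 2-sigma error under the 5% target.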
But usually, in these sorts of experiments, if you want to know how much more radioactive one place is than another, you have to take a background count. So if I wanted to know how much activity that source was giving off, there is lots of background radiation that we'll be going over in about a month. I would have to sit here for quite a while and wait for the slow clicks of whatever background radiation is in the room-- there we go-- to get enough of a count rate going on. As you can imagine, the slower the count rate, the less certain you can be that the number that you're measuring is actually accurate. So the idea here is that this standard deviation is a measure of confidence that your value is actually right. So the two things that you could do to decrease this standard deviation-- you could increase your counting time. Why is there a C on top? That doesn't look right. It actually is OK. Yeah. Yeah, there we go. So by counting for longer you can decrease your standard deviation. This is going to take forever. It actually takes about 67 minutes, because we've already done this calculation, to get a 95% confidence on 5% uncertainty for this sort of background count. I mean, how many counts do we have so far, like, 12? 14? Yeah, not very many. Then you've got to be able to subtract that count rate from whatever your source actually is. And the way that you actually measure this is pretty straightforward. The way that you do error subtraction is not as straightforward. So let's say we're going to separate these two experiments into a background experiment, which we're actually going to do in an hour. When we want to count these banana ashes, we're going to have to count radiation coming from the detector itself, which will account for cosmic rays, contamination in the detector, whatever else might have been spilled in there from previous samples.
And we're also going to take some sort of gross count rate, which will be our background plus the net count rate of our actual source. And that's what we're going for. So the net count rate is pretty easy. It's just the gross count rate minus the background-- let's keep the symbols the same-- count rate. Does anyone know how to quantify the uncertainty of this net count rate? Do you just add the two? Well, in this case, we have to account for the fact that radiation emission from anything is a truly random process. So it's actually random. There is no correlation between when one particle leaves and the next particle's going to leave. And because it's a truly random process, these errors in the background rate and the gross rate could add together or could subtract from each other. In other words, one might be a little higher than it should be, one might be a little lower than it should be. If you just add together the two standard deviations, you actually always get an overestimate of the true error, because you're not accounting for the fact that these two experiments may have partially canceling errors. So in this case, that would be your worst case scenario, which is not your most likely scenario. What you actually want is to do what's called adding uncertainty in quadrature, where you actually take the square root of the sum of the squares of those errors. It kind of looks like the magnitude of a vector, doesn't it? It kind of looks exactly like the magnitude of a vector. So in this way, you're accounting for the fact that more error in each experiment does increase the error on whatever net experiment you're doing, but not linearly. Because sometimes you have partially canceling errors.
And with enough statistics, if you count for long enough or you count enough counts, then these things, on average, are going to add in quadrature, which will come out to-- and I want to make sure we don't have any typos, so I'll just keep the notes with me-- so you'd need the background count over the background time squared, plus those. There we go. And so, now, I'd like to pose a question to you, the same one that's here in the problem set-- how long do you have to count in the smoke shop to be 95% sure? So let's say your count rate's 5% uncertain. And we're going to spend the rest of today's class taking apart that statement and getting at what it should be. So again, what we want to say is, how do you know that we're 95% confident of our count rate plus or minus 5% error? That's the main question for today. Does anyone know how we'd start? Anyone get to the reading today? I see some smiles. OK. We'll start from scratch, then. All right, so who here has heard of a normal distribution before? A lot of you guys. Great. The idea here is that with enough counting statistics, this rare-event binomial distribution approaches a normal distribution, where you can say if you measure a certain count rate-- let's say this would be your mean count rate-- to limits of plus or minus 1 sigma or one standard deviation, 1 sigma gives you about 68% confidence in your result. Yeah, I spelled it right. The reason for that is that if you go plus or minus 1 sigma away from your true average right here, you've filled in 68% of the area under this normal distribution. Similarly, if you go plus 2 sigma or minus 2 sigma, it's around 95% confident. 3 sigma is getting towards 99 point-- what was the number, again-- I think it's 6. Maybe it's more like 98.5%. And then so on, and so on, and so on. There's actually societies called 6 sigma societies.
And the way that they get their name is we're so confident of things we can predict them to 6 sigma, which is some 99 point a large number of nines percentage of the area under a normal distribution. So if I ask you, how long do you have to count to be 95% confident in your result, you have to give an answer that will relate to two times this standard deviation. And now we know the formula for standard deviation of this net counting experiment. So we can formulate our equation thusly-- let's say in order to be 95% confident, in other words, 2 sigma, that our counting rate is within 5% of the actual value, in other words, plus or minus 5% error, we put our error percentage here, and our true net count rate there. So this part right here tells us the 95% confidence. This part right here is our 5% error. And that part right there is our count rate. So then we can substitute in our expression for sigma-- our uncertainty in quadrature-- and find out things like, well, it depends on what the information we're given is. Let's say before you go to the smoke shop, you take your Geiger counter, and for an extremely long time you count the background counts somewhere. So let's say in this problem the known quantities-- we know our background count rate, because you can do that at your leisure at home. And when I did this, it came out to about 25 counts per minute. Also known is the background counting time. And when I did this, to get within 95% confidence of 5% error, I had to do this for 67 minutes. And now, all that's left is we want to relate our net count rate and our gross counting time, or our gross count rate and our gross counting time, because it's the same thing. So this is actually how you decide how long you have to sit in the smoke shop to count in order to satisfy what we asked for-- 95% confidence that your count rate is within 5% error. So let's start substituting this out. That's not mine, so we can get rid of that.
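The sigma-to-confidence numbers being quoted come straight from the area under a normal distribution, which the standard error function gives directly (3 sigma, for the record, works out to about 99.73%). A quick check:

```python
import math

# Fraction of a normal distribution within +/- k standard deviations:
# P(|x - mu| < k*sigma) = erf(k / sqrt(2))
for k in (1, 2, 3, 6):
    conf = math.erf(k / math.sqrt(2))
    print(f"{k} sigma -> {100 * conf:.5f}% confidence")
```

The k = 6 row is the "6 sigma" level: 99 point a long run of nines percent, as described.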
So we'll take that expression and substitute in everything we can. So 0.05 C n equals 2 sigma. And there's our sigma expression, which I'll rewrite right here. So we have C b over t b squared plus C g over t g squared. What's next? How do we relate t g and C g? Well, let's start with the easy stuff, right? What can we cancel, or square, or whatever? Just somebody yell it out. AUDIENCE: Do we have numbers for these counts? MICHAEL SHORT: Yep. So we have numbers for C b and t b, but not C g and t g. We have not yet answered the question when you go into the smoke shop and talk to the owner, and he says, fine, you're going to sit here with the radiation detector. How long do you have to be here, looking all weird? You want to have an answer. And so if you get some initial estimate of C g, you can tell him this is my approximate t g, at which point he or she will say yes or no, depending on how they're feeling. So why don't we just start, divide by 2, right? Divide by 2. 0.025. We can square both sides. And there's a C n there. Square both sides, and we end up with 0.000625 C n squared equals C b over t b squared plus C g over t g squared. There's lots of ways to go about it. I want to make sure I do the efficient one. Oh, I'm sorry, those aren't squared. Because our standard deviations had the square root in them. There we go. That's more like it. What's next? We've got too many variables. Yeah? AUDIENCE: I think there's still a square value [INAUDIBLE] MICHAEL SHORT: Isn't there still a what? AUDIENCE: Isn't there a square value still under the [INAUDIBLE]? MICHAEL SHORT: Because, in this case, the standard deviation is the square root of the count rate over the time. So the standard deviation squared is just count rate over time. Was there an earlier expression we have to correct? Yep. [LAUGHTER] That's where it came from. That's right. That's not. Because that's right. There we go. Good. Good, tracing out that. OK.
Now that everything is corrected here, what's next? We've got too many variables. Yeah? AUDIENCE: [INAUDIBLE] the standard deviation have units of [INAUDIBLE]? MICHAEL SHORT: Not quite, because there's a count rate in here. So the units of standard deviation, if this is square root of count rate over time, which is the same as number of counts times time over time, right? AUDIENCE: OK. MICHAEL SHORT: Yeah. Because again, a count rate is a number over-- where'd it go. AUDIENCE: Number over time squared. MICHAEL SHORT: Yeah. Number over time squared. That doesn't sound right though. Let's see. Hold on a sec. Although the standard deviation has got to have the same units as the count rate itself, because they're additive, right? Because they usually express some count rate plus or minus either sigma or 2 sigma, so they've got to have the same units. So standard deviations are expressed in counts per minute if your counts are expressed in counts per minute. OK, cool. So we've got too many variables, but it's easy to get rid of one of them, either C n or C g. Do you have a question? AUDIENCE: No, I was just going to say [INAUDIBLE]. MICHAEL SHORT: Great. So you were going to say the same thing that I was going to do. Cool. So we'll take out our C n, and we'll stick in a C g minus C b. And we're trying to isolate t g as a function of C g or vice versa. There's a lot of C g's and not a lot of t g's, so let's just keep the t g on its own. So we'll have 0.000625 C g minus C b squared. Then I'm going to subtract C b over t b from both sides. Minus C b over t b equals C g over t g. And do I have to go through the rest of the math with you guys? I think, at this point, we've got it pretty much solved. We divide everything by C g, flip it over, and you end up with-- actually, I've already written out the expression, which I want to show you guys here. Back to smoke shop counting time. So I want to show you some of the implications of this expression.
That number right there is just a more exact version of the 2 sigma factor-- instead of 0.05, we had something much, much closer to the exact 95% value. So what I want us to look at is this graph right here. We've got a nice relation now between the count rate in counts per minute-- that is, the gross count rate-- and the required counting time to get to that 5% uncertainty. Well, there's a couple of interesting bits about this equation. What are some of the features you notice? Yeah. AUDIENCE: The count rate is extremely low for [INAUDIBLE]. MICHAEL SHORT: Yes. If the count rate is extremely low, it's going to take an infinite amount of time. You're absolutely right on some level. So if we have that expression right there-- so let me just actually get it all the way out so we can see. Because I want to show you some of the math-related implications for this. So if we had our counting time-- what do we have-- C g over 0.025 times C g minus C b, squared, minus C b over t b, at what point is this equation undefined? Yeah, Sean. AUDIENCE: [INAUDIBLE] question [INAUDIBLE], using the second one after the [INAUDIBLE]. MICHAEL SHORT: That's right. So like Sean said, for the condition where 0.025 times C g minus C b-- let's just call it 0.025 C net-- squared equals C b over t b, this equation is actually undefined. Which means that if your C b and t b-- let's say if the uncertainty from your background counting rate experiment is such that you can never get the total uncertainty down to let's say 5% error with 95% confidence, you can't actually run that experiment. Because these uncertainties are added in quadrature, if you're trying to reduce sigma down to a value below that already, how can you do that? You can't have a negative standard deviation, right?
So what this actually means is that when you're designing this experiment, even if you count for 67 minutes at 25 counts per minute, like we're counting now out in the air, that might not be enough to discern the activity of the smoke shop, or the source, or whatever you happen to be looking at to 95% confidence within 5% error. And so let's actually look at that on the graph. If we keep on scrolling up just by adding stuff to the y-axis, eventually we see that it gets all straight. And right here, at about 49 counts a minute, suspiciously close to the background counts, you'll never actually be able to get within this confidence and error interval. So there's always some trade-offs you can make in your experiment. Let's see-- there it is. So sometimes, do you necessarily have to be 95% confident of your result? Depends on what you're doing. Or do you necessarily have to get within 5% error? That's probably the one you can start to sacrifice first. So usually, you want to be confident of whatever result you're saying and be confident that you're giving acceptable bounds. So you can remain at 95% confidence, which means-- where did part go-- which means keep your 2 sigma, but you can then increase your allowable percent error. So if you can't get within 5% error-- and I believe the homework doesn't actually say that for a reason-- yeah, we don't tell you what error to choose. But we do say try to get a 95% confidence. So then the question is, for a reasonable counting time, to what error can you get within 95% confidence? The more error you allow, the shorter time you have to count for. And I want to show you graphically how some of that stuff interplays with each other. Let's say you were to increase your counting time, which we can do here with a slider. So for the same background counting rate, if you increase the counting time, what happens to the uncertainty on your background experiment? Does it go up, down, or nothing? AUDIENCE: It goes down.
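The expression on the board can be wrapped up in a few lines to reproduce these numbers. The 25 cpm, 67 minute background is from the lecture; the error-fraction handling is my reading of the derivation, so treat it as a sketch:

```python
import math

def gross_counting_time(c_g, c_b=25.0, t_b=67.0, err=0.05):
    # Time (minutes) to measure gross rate c_g (cpm) so the net rate
    # is known to +/- err at 95% confidence (2 sigma):
    #   t_g = c_g / ((err/2 * (c_g - c_b))**2 - c_b/t_b)
    denom = (err / 2 * (c_g - c_b))**2 - c_b / t_b
    if denom <= 0:
        # Background uncertainty alone already exceeds the error
        # budget -- no counting time is long enough
        return math.inf
    return c_g / denom

print(gross_counting_time(100))             # ~32 min at 5% error
print(gross_counting_time(100, err=0.10))   # ~7.3 min at 10% error
print(gross_counting_time(49))              # below ~49 cpm: impossible
```

Note that the 10% case lands at about seven minutes and 18 seconds, and the threshold where the denominator goes negative sits right around 49 counts per minute, matching the graph.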
MICHAEL SHORT: It's going to go down. Yeah. Count for longer-- the uncertainty goes down. I'm going to have to change the bounds here to something more reasonable. So we were at 67 minutes. And now, notice, as you increase your counting time, even though you haven't changed the counting rate, it then takes less time to distinguish whatever your source is. So if you count for less time in the background, you have to count for more time in the experiment until it just kind of explodes. Count for more time in the background, you have to count for less time in the experiment in order to get to the uncertainty and confidence you want to get to. So if you doubled your background count time from 67 minutes to 134, then you can measure count rates as low as 42 counts per minute gross. So when you start going into the smoke shop, you can, let's say, count for a few minutes and get some very crude estimate of the counting rate and then decide how long you have to let your background accumulate so you can distinguish the activity in the smoke shop to within some confidence and some error. Yes. AUDIENCE: So is the background, in the case of the smoke shop, just the area right outside of it? Instead of the inside? MICHAEL SHORT: It's definitely location dependent. So we will get into background counts and sources of background radiation in about a month. But to give you a quick flash-forward, it depends on your elevation to say how much of the atmosphere is protecting you from cosmic rays. It definitely depends on location. So in New Hampshire, the background count's quite a bit higher, because there's a lot of granite deposits, and granite can be upwards of 52 parts per million radium. Conway granite in particular, named after Conway, New Hampshire, is pretty rich in radium ore. Oh, is that where you're from? AUDIENCE: No. My last name is Conway. MICHAEL SHORT: Oh, there you go. OK. [LAUGHTER] Yeah. It's also neat. You can use background counts as a radiation altimeter.
One of my graduate students actually built a Geiger counter interface to an Arduino, where you could actually tell what height you were flying at by the amount of background radiation increase. So certainly it's going to depend where you are, right? But you want to make sure that you're in an area, to answer Sean's question, representative of where the smoke shop is. So you can't go into the reactor, and drop this in the core, and say, I'm doing a background count. That's not a valid experiment. So yeah, you'd want to be, I don't know, same block. That would be pretty good. And then go in there and see, can you measure any sort of increase, get a crude estimate of your C g-- your gross count rate. Use this formula right here to estimate how much time you'd have to wait. So for example, let's shrink our y-axis down a little and be more optimistic than we probably should. Let's say you go in there and you get a count rate of 100 counts per minute. That would-- that would surprise me. You'd only have to count for an extra 28 minutes to nail that net count rate with 95% confidence to 5% error. Let's say now, what happens if we increase the allowable percent error? So let's say 10% error would be acceptable. We just take that number and double it. Then, all of a sudden, you don't have to count for nearly as long. So again at 5% error, which means a 0.025 here, at 100 counts per minute, you'd have to count for about 30 minutes. If you're willing to accept 10% error, it goes down to seven minutes and 18 seconds. So do you guys see the general interplay between confidence, percent error, counting time, and counting rate? Who here has built an NSE Geiger counter before? Awesome. So this is definitely a try-it-at-home kids kind of thing. If you want to find out if something is radioactive, this is what you can actually use to answer the question, is it discernibly radioactive to within some limit of error or limit of confidence?
That's what we're going to be doing here with a much, much, much more sensitive detector. So the only thing missing from our complete picture of going from the activity of a source, which we've shown you how to count, to dealing with the solid angle, which is just a simple formula, to dealing with statistics and uncertainty, is now the efficiency of this detector. Out of the number of radiation quanta or whatever that enter the detector, how many interact, and how many leave out the other side? That's what we're going to be spending most of the next month on when we do ion, photon, electron, and neutron interactions with matter. So we'll find out-- what's the probability per unit length that each one undergoes an interaction, what kind of interactions do they undergo, and then we'll complete this actual picture. So you can take a source of, let's say, unknown activity, put it a known distance away from a known detector with a known efficiency, and back out what the activity of that source is with accuracy. That's what you're going to start doing on this homework as well for the banana lab. The only thing you don't know is the activity of this bag of bananas. But we're going to give you all the information, like the efficiency of the detector and the geometry of the detector, and you're going to be able to measure the number of potassium 40 counts that the detector picks up. So by taking-- let's see where we have some space left. We had a little bit here. So by taking that number of counts and dividing by, let's say, the efficiency of the detector, where that efficiency is going to range from 0 to 1, probably much closer to 0, and also dividing by, let's say, your solid angle over 4 pi to account for how many of the emitted potassium 40 gamma rays actually get into the detector and dividing by 2 gamma rays per disintegration-- I think that's what we had last time. Or was that cobalt 60? Yeah. We've been using cobalt 60 as an example.
So remember, we had two gamma rays emitted per cobalt 60 disintegration on average. Then you can get to the actual activity of the source. Once you know the activity of this bag of bananas, you can then divide by either the mass of one banana, or the number of bananas, or whatever to get the final answer. That's what we're going to spend the rest of today doing. So since it's getting on five of five, do you guys have any questions about what we covered today or what we're about to go do? AUDIENCE: You said that for solid angle you wouldn't do this. MICHAEL SHORT: Yep. AUDIENCE: So for solid angle, it's [INAUDIBLE] to the surface area over y squared. And in this situation, does solid angle over 4 pi mean that you can only have a maximum of half of the sphere? MICHAEL SHORT: Not necessarily. Let's say you were to encase your detector in an infinite medium of radiation material. Then you could subtend 4 pi. So the idea here is that if you captured every single gamma ray, your solid angle would be 4 pi. So if your solid angle is 4 pi, then that would equal-ish the area over r squared of your thing. But this is actually not that good of an approximation when you put a source very, very close up to a detector. So there are actual formulas for solid angle, where for the real formula for a solid angle, you actually end up having to do a surface integral of the sine, which accounts for the fact that the object that you have might be, let's say, tilted towards or away from the detector, times some differential d phi d theta of this unit sphere. So you'll have to integrate to say how many of these little d phi d thetas are actually subtended by your detector. And the value of that actual surface integral gives you the real solid angle. That's the super simple one if you just know the area of something and you know that you're kind of far away. But again, whenever possible, use the exact formula. So any other questions? Yeah, Sean.
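Putting the whole chain together for the banana lab, the back-out described above looks roughly like this. All the numbers below are hypothetical placeholders, not the lab's actual detector efficiency or geometry:

```python
import math

def source_activity_bq(counts, t_seconds, efficiency, omega_sr, gammas_per_decay):
    # Back out source activity: measured count rate, divided by the
    # detector's intrinsic efficiency, by the fraction of the sphere
    # it subtends (omega / 4 pi), and by the gammas emitted per
    # disintegration
    count_rate = counts / t_seconds
    return count_rate / (efficiency * (omega_sr / (4 * math.pi)) * gammas_per_decay)

# Hypothetical: 1200 counts in 600 s, 10%-efficient detector subtending
# 0.5 sr, and a cobalt-60-like 2 gammas per disintegration
print(source_activity_bq(1200, 600.0, 0.10, 0.5, 2))   # ~251 Bq
```

For the banana answer, you would then divide the resulting activity by the mass of one banana or by the number of bananas, exactly as described.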
AUDIENCE: You said that that expression is a true statement [INAUDIBLE] per second, right? MICHAEL SHORT: The two gammas per cobalt 60? This one? AUDIENCE: Yeah. MICHAEL SHORT: That accounts for the fact that if you remember the decay diagram for cobalt 60, how does that decay? By beta emission. It goes to one energy level, and it tends to go down by two gamma decays to nickel 60. So each time it gives off a gamma ray to one level and a gamma ray to another level. So in this case, one becquerel of cobalt 60 would give off two gamma rays per second. So if you're measuring a number of counts, and each count, one gamma ray was responsible, you have to then divide by the number of gamma rays per disintegration on average in order to get the actual activity of that source. Because remember, activity is measured in disintegrations, not in number of gamma rays emitted. That's the difference here. Dose-- you'd actually care about how many gamma rays you absorb. But activity is how many atoms are disintegrating per second. Yeah. AUDIENCE: What units of cobalt 60 [INAUDIBLE]?? MICHAEL SHORT: The units of cobalt 60? AUDIENCE: It's just two gamma-- MICHAEL SHORT: Oh, this would be, like, atoms of cobalt 60. And those gamma rays would be gammas per atom. So in this case, it's like two gamma rays per atom of cobalt 60 disintegrating, or better yet, per disintegration. So you've got to know what material you're looking at in order to know how many gamma or how many betas or more that you're going to get per disintegration. Who here has heard of this uncertainty in quadrature before? There's a couple folks. OK. Yeah. The idea here is that, again, if you just add the errors up, you're probably overestimating the error and selling yourself short. Cool. In that case, if there's no questions, let's go do this. So follow me to the counting lab. MICHAEL AMES: OK. So this is my counting lab. These are three high-purity germanium detectors. 
Have you explained high-purity germanium detectors? MICHAEL SHORT: No, we haven't. MICHAEL AMES: OK. Have you explained any detectors? MICHAEL SHORT: Just the Geiger counter we were playing around with today. MICHAEL AMES: OK. Well, here. Down in here there's a little high-purity germanium crystal with a couple thousand volts across it. When a gamma ray goes into it, it makes some electron hole pairs. Nod when I say electron hole pairs. OK, good. And basically, you get more electron hole pairs the more energy of the gamma you have. So you collect the current from that, and you get a little pulse of current, and the height of the pulse tells you how many electron hole pairs you had, and then you back that out to what the energy of your gamma was. That works fine if you collect all of the gamma energy. You don't always quite do that. Anyway, so that's how-- You all can scooch up. There's not a whole lot to see in there. MICHAEL SHORT: It's worth a look. If you've never seen it. MICHAEL AMES: It's worth a look. You can't really see the crystal. There's just an aluminum cylinder in there. The black part is just a carbon fiber window, because you don't want to cut off the low energy gammas. So it's got a really thin carbon fiber window on it. 
MICHAEL SHORT: We've done x-rays. MICHAEL AMES: Awesome. So it's a really, really nice 75 keV x-ray that interferes with trying to count things around 75 keV, because you're getting all these x-rays coming out of lead. So you line it with copper, which makes a lower energy x-ray and filters out the lead x-rays. So anyway, this is-- I've got two germanium detectors. That one's also germanium, but it's a well detector. So it's got a little one-centimeter hole in it so you can stick a sample right in the germanium. They're hooked up through a little electronic box and go into the computer over there that does all the peak height analysis. Oh, yeah, liquid nitrogen [INAUDIBLE]. Thanks for pointing. Yeah, you cool the electronics and everything down so it cuts out the thermal noise. Because you're looking for really tiny little signals here, so you cool everything down. And that way, it's not too noisy. These guys are OK warming up. It doesn't destroy the detector. The old detectors you had to keep cold all the time. And if they warmed up, then they were just paperweights. So this is just the counting lab. I've got an actual sample counting in here right now. We'll take a look at the spectrum in a minute. Your bananas are going to go here. And let's see if we can smash it down. Yeah. Because it would be nice if I can close the lid. Oops. MICHAEL SHORT: Well, almost. MICHAEL AMES: Almost. Well, smash this down. Here, one of you guys do this. Here, you. Smash that down until it fits in there. Although, don't break the bag. Oh! OK, we'll get another bag. AUDIENCE: Oh, did I break it? MICHAEL AMES: It's OK. It's just banana ash. We'll find another bag. It's OK. You know, I'm all about making mistakes. AUDIENCE: [INAUDIBLE] MICHAEL AMES: Yeah, yeah, yeah, just be a little more gentle. We'll throw some duct tape on it, and it'll be fine. So you're looking for potassium 40 in your bananas, correct? Where else do you think we got potassium 40? 
Or do you think there's any other potassium 40 in the room? AUDIENCE: In us. MICHAEL AMES: Yeah, right. So when you do the banana count, we frequently take a spectrum on this with the lid closed, and we always see potassium 40. There's potassium 40 everywhere. So after we get the count of the bananas, we'll take a background count. You'll want to subtract the two signals. MICHAEL SHORT: We just did that 15 minutes ago. MICHAEL AMES: You're so ahead of me. OK, I think that's all-- Is this going to fit now? AUDIENCE: [INAUDIBLE] MICHAEL AMES: OK. Close enough. I've got this thing-- I've got a whole bunch of little spacers if I'm counting something that's hot. And by hot, I mean radioactive hot. I'll space it out a little further. AUDIENCE: Need a little more smashing? MICHAEL AMES: No, that's fine. We just got to close the lid. And if I've got something that's very radioactive, I'll just space it out away from the detector. If you've got something that's really hot, it just kind of swamps out the electronics. MICHAEL SHORT: We did just go over solid angle too, today. MICHAEL AMES: There you go. Is there anything else I want to say in here? No, let's move this way. This is the spectrum I'm collecting on MIT 1. Right now, I don't know-- how long has that been going? Half a day-- less than that. Anyway, so this is a sample of quartz that was irradiated next to the reactor. You guys are going to do shorts in like a month-- did you bring your samples? MICHAEL SHORT: We're getting them. MICHAEL AMES: OK, good. Anyway, this is a sample of quartz that was irradiated in the same spot you guys are going to do your irradiation, sort of in the graphite region of the reactor. The reason we're running it is the people who are looking at this quartz want to run it for 80 hours, and we'd like to know if there are any impurities in it that'll cause grief-- meaning a lot of activity when it comes out. So we run it for a short period. I think this ran six hours. 
And it's just a little tiny piece. And so I can look at the gamma spectrum coming out of this. So you can see, there's a whole mess of peaks in here. This one-- you see that? You see that lovely, little peak right there? Can you all see that? Nod. Yeah, OK. So that's the full spectrum. That's the peak. That's a tungsten 187 peak. So I did put up one little thing right behind you. Have you all seen the chart of the nuclides? This thing? MICHAEL SHORT: Every day. MICHAEL AMES: Every day! Good. I've got one of these on every wall in every lab and office, and a little handbook. Yeah. So the tungsten 186 activates into tungsten 187. So if you've looked at the chart of the nuclides, you can tell that there are all the sort of parameters you would need to calculate how much activation you'd get based on neutron flux, and time, and cross section. The 28.43, that's the abundance of that isotope. You can see the sigma gamma 38, that's the cross section for thermal neutrons. And so that's how likely you'll get from 186 to 187. For 187, that's the half-life-- 23.9 hours. So with all of that-- oh, and underneath the 23.9, you've got what the gammas are-- 685, 479. It's got a whole mess of gammas. So that's a bunch of the gammas in here for that. So you could, knowing how big that peak is, what the efficiency of the detector is for collecting that peak in that geometry, the half-life, the cross section-- that whole mess of parameters-- back-calculate how much tungsten is in the sample. So that's kind of how NAA works, which I assume you've explained. MICHAEL SHORT: We have. MICHAEL AMES: OK. MICHAEL SHORT: Actually, the whole idea behind doing those short NAA activations is these guys are going to calculate what's in their samples. MICHAEL AMES: There you go. MICHAEL SHORT: Once we get the data. MICHAEL AMES: But that's not how I do NAA. MICHAEL SHORT: [INAUDIBLE] We're doing a simplified version. MICHAEL AMES: Right, right. No, no, no. So there's two things that you could do. 
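As a sketch of what the chart-of-nuclides parameters quoted above (28.43% abundance, a 38 barn thermal cross section, a 23.9 hour half-life) let you do, here is a rough activation estimate for tungsten 186 to tungsten 187. The flux and sample mass are made-up illustration values; only the six-hour irradiation time is from the tour:

```python
import math

# Chart-of-nuclides numbers quoted in the tour for W-186 -> W-187:
abundance = 0.2843       # isotopic abundance of W-186
sigma_barns = 38.0       # thermal (n,gamma) cross section, barns
half_life_h = 23.9       # W-187 half-life, hours

# Hypothetical irradiation conditions (not from the lecture):
flux = 1e13              # thermal neutrons / cm^2 / s
mass_g = 0.010           # 10 mg of natural tungsten
irradiation_h = 6.0      # "I think this ran six hours"

N_A = 6.022e23
atoms_w186 = mass_g / 183.84 * N_A * abundance  # 183.84 g/mol natural W
sigma_cm2 = sigma_barns * 1e-24
lam = math.log(2) / (half_life_h * 3600.0)      # decay constant, 1/s

# Production rate (flux * sigma * N), reduced by how far short of
# saturation the six-hour irradiation stopped:
activity_bq = flux * sigma_cm2 * atoms_w186 * (
    1 - math.exp(-lam * irradiation_h * 3600.0))
print(f"W-187 activity after irradiation: {activity_bq:.3e} Bq")
```

Running it in reverse, from a measured peak area back to the tungsten mass, is the "calculate it just from the nuclear parameters" approach Mike mentions next.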
One of the things you could do is you take all those nuclear parameters and you calculate it just from the peak height. The other way, that everybody who does NAA-- almost everybody who does NAA-- does it, is you run a standard material. Any of you guys chemists at any point in your life? You all took some chemistry at some point? OK. So you run a standard, which means a material where you know how much tungsten is in it, or how much of a whole mess of other things is. So I run a bunch of different standards. So along with this piece of quartz, I ran a standard, irradiated it at the same time. I'll count the quartz and then I'll count the standard. And by comparing the peak heights and doing all the decay corrections and the weight corrections, then I calculate how much tungsten is in my sample. So I don't actually use the cross sections, or the flux, or any of that other stuff-- all of those parameters disappear. Notably, the detector efficiency disappears out of the equation, because that's the parameter that you usually have the funniest idea about. And so you reduce the uncertainty in your concentration by doing this sort of comparative method with a standard. That all make sense? OK. So when we run shorts, I guess, in a month, we'll take whatever your samples are. I've had feedback about, oh, God, you don't want to run that many samples. But we'll figure out how many samples we'll run. MICHAEL SHORT: It's one per person. [INAUDIBLE] MICHAEL AMES: That's a lot of shorts. MICHAEL SHORT: In pairs, right? MICHAEL AMES: Yeah. So I'll show you how the shorts get run. So when we run your shorts, we'll run your samples and we'll run standards, and then you can do the comparative method. Or, if you feel like it, you can do the other method, depending on what exercise-- MICHAEL SHORT: The other method. MICHAEL AMES: You're going to do the other method. You don't want to do the standard method? MICHAEL SHORT: Oh, no, no, no. We're drilling comprehension, not [INAUDIBLE]. 
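The comparative method Mike describes, where flux, cross section, and detector efficiency all cancel out of the ratio, might be sketched like this. The peak areas, masses, concentrations, and decay times are hypothetical; only the 23.9 hour tungsten 187 half-life is from the tour:

```python
import math

def decay_factor(half_life_h, decay_time_h):
    """Fraction of the activity remaining after decay_time_h hours."""
    return math.exp(-math.log(2) / half_life_h * decay_time_h)

def comparative_naa(peak_sample, peak_std, mass_sample, mass_std,
                    conc_std, half_life_h, t_sample_h, t_std_h):
    """Concentration in the sample by comparison with a co-irradiated
    standard. Flux, cross section, and detector efficiency cancel in
    the ratio; peak areas are decay-corrected back to a common time
    (end of irradiation) and normalized by sample weight."""
    corr_sample = peak_sample / decay_factor(half_life_h, t_sample_h)
    corr_std = peak_std / decay_factor(half_life_h, t_std_h)
    return conc_std * (corr_sample / corr_std) * (mass_std / mass_sample)

# Hypothetical tungsten numbers (only the 23.9 h half-life is quoted above):
c = comparative_naa(peak_sample=5200, peak_std=8000,
                    mass_sample=0.010, mass_std=0.006,   # grams
                    conc_std=50.0,                       # ppm W in standard
                    half_life_h=23.9,
                    t_sample_h=2.0, t_std_h=5.0)         # decay before counting
print(f"{c:.1f} ppm tungsten")
```

Notice that nothing about the detector or the reactor appears: those uncertain parameters are exactly the ones that disappear when you ratio against the standard.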
MICHAEL AMES: Not practical? Oh. MICHAEL SHORT: What happens if the computer breaks down? MICHAEL AMES: Well, if the computer goes down, you can't get any data anyway. MICHAEL SHORT: Oh, [INAUDIBLE]. MICHAEL AMES: I can do the comparative one on an envelope. Anyway-- well, we'll run standards or not, depending on how you guys are feeling. So that's that. Oh, right. Let's count your bananas. So this is detector 2. We did an energy calibration earlier today. So actually, I've got a couple of little button sources. Have you seen the button sources? Yeah. So that's just a couple of cobalt 60 lines and a cesium 137 line down in here. And I know where those energies are, so that just gets used to calibrate the detectors. MICHAEL SHORT: We were playing around with one of those cobalt 60 buttons today in class. MICHAEL AMES: There you go. MICHAEL SHORT: We mentioned the two gammas per disintegration, and there they are. MICHAEL AMES: There they are. They're kind of small there because my buttons are probably 30 years old. MICHAEL SHORT: Oh, I got some fresh ones. MICHAEL AMES: Yeah. So anyway, we cleared that out. And we just hit Start. And we're not going to see anything for a while. Where are we? Oh, here. 14-- anyway, your banana peak will end up out in here. So it'll take a while. We're going to let this count until Tuesday. Because, why not? And I don't feel like coming in over the weekend and turning it off. So yeah. So this is just picking up all the gammas coming out of the bananas, and everything else that happens to get through the [INAUDIBLE], and all the contamination on the inside of that. And we just let it count. And then you guys can calculate how much potassium 40 is in your ashes. You'll need to do the background subtraction. I will give you-- MICHAEL SHORT: Do you have background spectra? MICHAEL AMES: Yeah. We collect background spectra once a month or so. So I'll give you a background spectrum. 
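The background subtraction mentioned here, with the counting-statistics uncertainties combined in quadrature as discussed back in the classroom, might look like this. The count values are made up:

```python
import math

# Hypothetical counts in the K-40 region of interest, same live time:
gross = 4200   # counts with the bananas on the detector
bkg = 1500     # counts with the lid closed (background spectrum)

net = gross - bkg
# Counting statistics: each raw count carries a sqrt(N) uncertainty,
# and independent uncertainties combine in quadrature, not by adding:
# sigma_net = sqrt(sigma_gross^2 + sigma_bkg^2) = sqrt(gross + bkg)
net_unc = math.sqrt(gross + bkg)
print(f"net = {net} +/- {net_unc:.0f} counts")
```

If the background spectrum was taken over a different live time, both counts would first be scaled to a common time before subtracting.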
I will provide the efficiency for this geometry, which is pretty poorly defined, because I've got a program that'll do that. And I can't give you the program, and it's a pain in the neck to run anyway. If we've got a really well-defined geometry that's not a big bag-- usually I try to count sort of point sources-- I've got an efficiency standard that I can use, where I know what the disintegrations in that are at a lot of energies, and I use that to do an efficiency calibration, usually. But I don't have an efficiency standard that's that big. It's just a point source. And I think that's the practical NAA. From this end, did that all make sense? I want you guys to nod, not him to nod. Yeah. MICHAEL SHORT: Do you guys have any questions for Mike on what you've just heard? Well-timed, because we were just talking about this stuff all week. MICHAEL AMES: Good deal. For neutron activation, that's kind of a real common part of the chart. So there's the manganese, iron, cobalt, nickel. One of the things you'd like, usually, when you're doing NAA, is a nice thermal neutron spectrum. You know what a thermal neutron spectrum means? Real slow neutrons. And they'll just give you sort of an n gamma reaction. So on that chart, iron 58 to iron 59, that's a nice n gamma reaction. And that's the one I use to analyze for iron. If you're near the reactor, you're also getting some fast neutrons, which can give you an n p reaction. So if you're looking on the chart there, cobalt 59, if you get an n p reaction, will also make the iron 59. And that's a pain in the neck, because if you've got iron, you've always got a little cobalt floating around-- you maybe need to do a correction. So in practical terms, when you're running NAA, you really want to avoid having all these fast reactions. There's usually an energy threshold for the fast reactions, like 1 MeV or so. MICHAEL SHORT: Sound familiar from the Q equation? MICHAEL AMES: Yeah, OK. Right. 
The place where we do the irradiations is very thermal. It's got a very low fast spectrum. So I don't usually have to worry about that. There's a couple of times I actually use the fast n p reaction. If I want to measure nickel, you can see nickel 58, an n p reaction will get cobalt 58. And since there's not a good n gamma reaction from cobalt 57-- cobalt 57 isn't around, usually-- that's how I measure nickel, using the n p reaction. And I need to put the rabbits into where I've got a fast flux in the reactor. Which, well, they've got a couple of spots for that. I try not to have to measure nickel, because it's a pain in the neck. But sometimes people want to know nickel. And we talked a little about what we've run in here for types of samples. MICHAEL SHORT: Well, why don't you tell us? MICHAEL AMES: OK, OK. So back 15, 20, 25 years ago, we did a ton of environmental samples in this lab. We had three grad students, myself included, who did atmospheric particulate matter, rain water, snow-- we even did some fog collection, which is kind of fun-- ice cores, which are old particulate deposition. And it was all for trace elements in those kinds of environmental samples-- also lake sediments. Other analytical methods have gotten a lot better, and so they've kind of caught up to NAA, and you don't need a reactor to run those. So the environmental side of this has kind of quieted down a lot. But it's still useful for a bunch of things. And so I do some work here now. I also work in the NCORE group. So that's a lot of my time, rather than just this lab. Practical things-- let's go take a look at a couple other labs. You're not on wheels? You don't have a steadicam? MICHAEL SHORT: I got a question. MICHAEL AMES: OK. You've got a question. MICHAEL SHORT: What's the weirdest thing you've ever been asked to count? MICHAEL AMES: The weirdest thing I've been asked to count? That's already activated, or? MICHAEL SHORT: At all. MICHAEL AMES: OK. I don't know-- brain tissue. 
Fish samples-- we actually did fresh fish samples. And you want to kind of homogenize those. And we had this kind of titanium blender-- you remember the Bass-O-Matic? We had this titanium blender that we dropped the fish in, and you completely homogenized the fish, and then you took a little sample of it, and freeze dried it, and then analyzed it for mercury. MICHAEL SHORT: [INAUDIBLE] MICHAEL AMES: Yeah, right. Because, I mean, you guys saw, the rabbits are only this big, and the samples I want are only that big. And so to get a representative fish, you want to kind of make a fish smoothie and then take a sample out of that. We did have a guy who came to me and was promising we were going to do this giant study using fingernails and toenails for nutritional analysis. He was working with a group that looks at zinc deficiencies, and fingernails and toenails will give you a good record of how much zinc you've had over the last week, or month, or whatever, depending on where you cut the nails. And so I was going to get a couple of hundred African children's toenails. That didn't happen. But I did analyze my own toenails. Well, if you went to somebody who was a little suspicious of you, asking for toenails is a lot easier than asking for a blood sample. Because people would give up toenails-- it's not a big deal. Have you ever seen the movie or read the book A Civil Action, about the Superfund site in Woburn? It was a big old Superfund site, and Woburn had arsenic and chromium contamination. There used to be a lab-- I forget which building it was in-- that did a ton of research there. One of the things we did in this lab was we collected baby hair samples from people's scrapbooks. So we had baby hair going back 50-60 years-- dated, because everybody knew how old their kid was-- and we analyzed the hair samples for arsenic and chromium, and then we plotted out where they were, when the sample was taken, and how close they were to some contaminated wells. 
And because we did a fairly short irradiation, after a while the activities died down and we gave the samples back. And we found that it didn't correlate with the well water or the time when the contamination was the worst, which made people happy in retrospect, that the contamination from that area didn't get into the well water. That was in the mid-90s or so. Anyway, that was one of my samples. And the hair is a pain in the neck to work with. So I hope none of you give me hair samples. I won't run them. So let's go down the hall, this way. You all got to follow. And so this is just a fine powder. And it's fly ash from a coal-fired power plant. Fly ash means the ash that goes up the smokestack, as opposed to bottom ash, which is what falls down. And so, they collect hundreds of kilograms of fly ash, just homogenize it, sieve it, send it out to a lot of labs to analyze-- NIST is really good at this-- take all the data. And so this ash is characterized for about 20 elements or so. So when I run my samples-- if I were to run your samples with standards-- I'd run a little bit of this, 5, 6, 7 milligrams. And I know what the concentrations are in this. And so that's how I do the comparative method. And so I got this. And they all look the same. And this is some soil from Montana next to a mine, so it's nicely contaminated with some metals. This is my IAEA mercury and hair standard. But again, it's just a little powder. And this is kind of what everybody uses for standards. And you just kind of have a whole collection of them. And depending on what elements you're looking for, you try to mix and match them so you cover what you want without having to run five or six of them. This is my hot lab, or one of my hot labs. You guys, last week, or whatever it was, I came by-- so this is the rabbit. Those of you who weren't there, these are called rabbits because it's the little thing that runs through the pneumatic tube. You guys are doing [INAUDIBLE] later today? Yeah. 
When you're sitting at the control panel, there's a button, I think it's to the left, and it says insert rabbit. And that's what this is referring to. For longer irradiations there's a spot in the basement in the reactor where they can get these, and they send them into the irradiation location. For short irradiations, like what you guys are going to be doing in a month, I send them in from here. That's OK-- I just don't want to bump into that thing. So this is one end of the pneumatic system. And so I can put a couple of samples in here. I stick it in that little tube there, call the control room and say, OK; they turn a bunch of knobs, and switches, and whatnot. And it goes schwoonk, and in about 15 seconds it's next to the core of the reactor in the graphite. I usually run shorts. I'll usually irradiate for about 10 minutes. We usually let the sample sit in the reactor for a little while, so the very short half-life stuff decays away, and then it comes back out here. And the thing just kind of shoots out there and bounces into here. And then I pop open the rabbit and, in that hood, pull the samples out. I usually try to repackage the samples. So this is partly why I asked for stuff that's one or two good solid pieces. Because then I can take it out of whatever it was irradiated in, put it in a clean bag or vial, and that way we don't have to do a blank subtraction for the sample. Does that make sense? Because, otherwise, if I take a little vial, irradiate it, and then count it, I'll also have whatever elements are in the vial on the thing. For when I'm running standards-- and if we're not running standards you don't have to worry about this-- that powdered standard stuff, I never get that out of a bag. Because you'd never get all of it out, and I'd have contamination everywhere if I started cutting open those bags. So I do have to do a bag correction for those. 
So when I do an irradiation, I always irradiate a few empty bags, and then you do a correction for those. Because the bags have got aluminum, and antimony, and a bunch of things in them. And so then I take a couple of samples, I throw them in a lead pig-- so I've got a whole bunch of these floating around-- and I run it down the hall, and throw it on a detector, and we count it. When we're doing shorts, I'll irradiate two samples at a time, because I have two detectors. When I used to have four detectors, I ran four samples at a time. So you irradiate it, repackage it, count it. While those pair of samples are counting, you come down here, you irradiate the next two, so that you're just kind of always irradiating and counting. I usually do a 10-minute irradiation for shorts. I'll do a fairly quick count-- five minutes-- right after I get the sample down there, and that's looking for stuff with half-lives under 10 minutes. The shortest half-life I look for is for aluminum. It's 2 and 1/4 minutes. But things usually have a lot of aluminum in them, so I see aluminum pretty well. For shorts, I'll count all the way up to about sodium, which has almost a 15-hour half-life. Longer stuff, I'll do a longer irradiation to count. There's a little overlap on my shorts and longs. That helps me do QA on things. And if I run two standards, I'll check the concentrations from one standard to the other. That's another little QA thing. What else we got? MICHAEL SHORT: What questions do you guys have? MICHAEL AMES: Questions. MICHAEL SHORT: Now that you know how this is done. MICHAEL AMES: It's pretty straightforward. MICHAEL SHORT: What sort of things are you going to be bringing in? MICHAEL AMES: Yeah, what do we got? AUDIENCE: Probably middle Bronze Age pottery sherds. MICHAEL AMES: Oh. Yeah, yeah. OK. There's a lot of archeology that NAA got used for. I don't think we ever did it here. 
Fred Frey, who's a professor, retired now, from EAPS-- Earth, Atmospheric, and Planetary Sciences-- he did a lot of geological samples. And I forget where it was that they did all the archeology. One of the things NAA is really good for is rare earth elements, which are hard to measure by other methods. I can get very low limits on that. And by picking out various rare earths and the ratios, it can help identify where things are from in the world. MICHAEL SHORT: Yeah. AUDIENCE: Can I use a bird as a sample? MICHAEL AMES: If you give me a little, tiny piece of it. AUDIENCE: OK. MICHAEL AMES: I mean, you know-- AUDIENCE: Like, how small [INAUDIBLE]? MICHAEL AMES: Well, see, that's the rabbit. So it's definitely got to fit in there. AUDIENCE: OK. MICHAEL AMES: The thing I really like-- excuse me, where's my vials? I used to have some smaller ones up here. But that should definitely fit in one of those. Like, see that guy. AUDIENCE: OK MICHAEL AMES: My usual description of what size sample I like is if it's a piece that you would pick up with a pair of tweezers. So not too small to pick up-- to be able to find. So no powders. And you could maybe get it with your fingers. But 20 milligrams, 50 milligrams, 100 milligrams is just in the right ballpark. AUDIENCE: OK. MICHAEL SHORT: What else are you guys thinking of bringing? MICHAEL AMES: Doesn't matter. We'll look at what comes in, and-- yeah, I might veto some things or not. But we'll see. We'll see what we got. MICHAEL SHORT: OK. AUDIENCE: What are those little bricks for? MICHAEL AMES: Well, we got bricks everywhere. So when I get the sample out of there, I do the repackaging in here. And so this is just shielding between the samples I'm working on and myself. I don't have my dosimeter on now, but I usually have got the dosimetry and a ring badge. And then it kind of comes over here, and this is where the heat sealer is. So I can heat seal it here, and then I'll have a pig over here. 
MICHAEL SHORT: They're just painted lead bricks? MICHAEL AMES: Yeah, these are just painted lead bricks. And you know, these have been here longer than I have. And sometimes things just are somewhere, and you never move them. These, I think, are older than me too. This lab has been doing NAA since the '70s, I think. Anybody else? AUDIENCE: Is there a single brick that I could just hold to see how heavy it is? MICHAEL AMES: The full size bricks-- like, that size, 2 inches, by 4 inches, by 8 inches-- weigh about 25 pounds. There's usually a bunch of them floating around. Here, you want this one? That one's not quite full size. AUDIENCE: Wow. That's pretty heavy. MICHAEL AMES: They're heavy. They're lead. Anybody else want to toss it? No, OK. [LAUGHTER] When people ask me-- because I work in the reactor, as well-- they say, is there anything dangerous in the reactor? The dangerous thing is dropping lead bricks on your feet. So I've got steel toes. If I miss the toe, I'd probably break my-- I don't want to think about it. And they move much bigger things in the reactor. Have you toured the reactor yet? AUDIENCE: [INAUDIBLE] MICHAEL AMES: So there's that giant crane there, and they move five-ton pieces of shielding. And that's the other dangerous thing in there, dropping really big things. We've never dropped anything that big. I think somebody dropped a steel plate on their foot once. That was about the worst of it. [LAUGHTER] You know, like, four-foot, half-inch steel-- boom. MICHAEL SHORT: That's what happened to my foot. MICHAEL AMES: Yeah. OK, good. And people trip and fall off ladders. And it's the usual industrial accidents. AUDIENCE: [INAUDIBLE] cut off your toe. [INTERPOSING VOICES] AUDIENCE: Well, my toes are still here. MICHAEL AMES: Good. Yeah. I mean, I've broken a few, but not here. MICHAEL SHORT: So, cool. Thanks a ton, Mike. MICHAEL AMES: Sure. And I'll see you guys in a month or something and have fun running the reactor. 
FRANK WARMSLEY: Well, good day, folks. You guys are here to do an experiment on the reactor. It's in two parts. The first part is raising reactor power using a low worth absorber called a regulating rod. And then the second part will be lowering reactor power using a high worth absorber. And with the high worth absorber, things will move much faster. And we don't want to run the chance of you accidentally going too high, so that's why we use a low worth absorber on the way up and a high worth absorber on the way down. And I just want to show you the controls. With me today is Tim. To actually do this experiment, we need two licensed people in here, at least one of whom is a senior reactor operator. Tim and I both have senior licenses, so we have that covered. The only way you can actually do these manipulations is if you're in my training program-- I'm the training supervisor for the facility-- or you're in a program that needs you to actually operate the reactor. And the program you guys are in fits that definition. So I just want to show you some of the controls of the reactor. First, we have our shim blade controller. This basically moves one of six shim blades at a time. The one that's selected has a slide on it. And we can change which one's selected with the shim blade selector switch. This switch here is the regulating rod. This one will allow you to move the regulating rod up and down. Our blades are fixed speed, meaning they can only move at the exact same rate at all times. To move the shim blade or the regulating rod in the upward direction, take an underhand grip and pull up or twist upwards until it stops. Moving it just a little bit doesn't move anything. You have to move it all the way until it stops, and then the absorber will move in the outward direction. If you want the blade to stop, just release it. It's spring-loaded and will go back to the neutral position and stop moving. 
If you want to drive something in the inward direction, take an overhand grip and twist downwards, and that will drive the absorber in. Once again, let go. It'll snap back up and stop the motion of the blade or the regulating rod. The experiment we're doing is basically to change reactor power by half a megawatt. We're currently at 500 kilowatts; we're going to bring the reactor up to 1 megawatt and then bring it back down to 500 kilowatts. So before we can do this, you have to log into our log book as a trainee on console. We'll show you the proper way to make the entries. As you make those entries, you'll go ahead and then do the actual movement itself. So the first one is going to be using the regulating rod to move the reactor power up. What's the reactor power? We have about nine different instruments that tell us what the reactor power is at all times. But the ones we're going to be paying attention to are channel 7 and channel 9. These two channels are what we use to basically tell us what the reactor power is. Channel 7 is what we control our automatic control at. If you watch the regulating rod, you'll see it move up and down on its own. That's because it's changing power based on what it sees channel 7 is doing. So if channel 7 sees that the power level's going too low, it'll cause the regulating rod to drive outwards to increase the amount of neutrons, making the reactor power go up. Channel 9 is a linear power channel, and it basically tells us what the power level is based on a chart that we create. So it's not showing you megawatts, or kilowatts, or anything like that; it's showing you a current coming from a chamber. And that current is then converted into megawatts and so forth. So right now, we're at 500 kilowatts, 8.5 microamps, on this channel. And that's 8.5 microamps equals 500 kilowatts. You're going to be bringing reactor power up to 1 megawatt, and since it's linear, it'll be double that-- so 17.1. 
Now, you want to be careful when you raise reactor power. So when you start to add power to the reactor by raising the regulating rod, you don't want to keep raising it until you reach your value, because you have to actually stop the power increase as well. So we have two rules that we have to follow. One, at the power level we're at, we have the reactor period. The reactor period is the amount of time it takes reactor power to increase by a factor of e. At the power level we're at, we're not allowed to go shorter than a 100-second period. So here is one of three period meters-- one here, one here, which is selectable between two different meters. So as you're pulling up the regulating rod, one of the things you have to watch is to make sure that the reactor period doesn't go shorter than a 100-second period. If it does, you have to stop pulling blades. The other thing we have to watch for is to make sure that the power level, channel 9, doesn't exceed where you're going to. Not only not exceed, but we also want to make sure that you can actually control the reactor. It's called feasibility of control. And what that means is when you get to about 80% of the power level you're going to-- since we're going up to 1 megawatt, that's about 800 kilowatts-- you want to be able to drive the absorber in and hold the absorber in. You'll drive the regulating rod inwards. And watch that channel 9 value. It'll slow until it actually starts to go down again. Once it reaches that value and you see it going down, you now know that you can control the reactor and keep it from running away-- reactor power increasing continuously. So what we're going to do is have you, when you reach 80% of the power level you're going to, which happens to be 800 kilowatts, start increasing or lengthening the period by driving the absorber, the regulating rod, back in. And you'll keep holding it in until you see the number not only stop increasing, but actually go down a little bit.
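The period rule above can be checked numerically. Assuming exponential behavior between two console readings, here is a sketch of how one might estimate the period and test it against the 100-second limit (the function names and sampling scheme are hypothetical, not part of the actual console software):

```python
import math

MIN_PERIOD_S = 100.0  # at this power level, the period may not be shorter than 100 s

def reactor_period(p1, p2, dt):
    """Estimate the reactor period from two power readings dt seconds apart.

    Assuming exponential behavior P(t) = P(0) * exp(t / T), the period is
    T = dt / ln(p2 / p1). A positive T means power is rising.
    """
    return dt / math.log(p2 / p1)

def period_ok(p1, p2, dt):
    """True if a rising power trace respects the 100-second period limit."""
    T = reactor_period(p1, p2, dt)
    return T < 0 or T >= MIN_PERIOD_S
```

For example, power doubling over about 69 seconds corresponds to a 100-second period, right at the limit.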
As soon as you see it going down a little bit, let go of the regulating rod. You haven't stopped the power increase at this time, you've just decreased how fast it's going up. And then the power level will still go up, but at a much slower rate than it was before. And once it reaches the power level you want to stop at, the 1 megawatt, keep driving the regulating rod in to hold it at that power level. Once you're at that power level, you're going to make an entry in the log book that basically says you made it to the power level you were going to. And then we'll go down in power. So once again, you make an entry in the log book that says I'm going to lower reactor power to 500 kilowatts, and then this time we'll use a shim blade. The shim blade is worth a lot more than the regulating rod-- about 10 times the regulating rod-- so things will happen much faster. So you'll be able to drive this in, and reactor power will change much faster than before. Same thing-- as you get closer to the power level you started at, the 500 kilowatts, you don't want to undershoot and go too low. So right around 600 kilowatts or so, start driving the shim blade out to slow down how quickly the power level is going down. And once you get back to the place where you started, we'll use the regulating rod to fine tune it to keep the reactor power where we want it to be. There'll be another logbook entry, and your time on the console will be completed. So with us today we actually have two MIT students who are actually in my training program, and they've actually done a lot of these manipulations already. AUDIENCE: Ladies first. FRANK WARMSLEY: Sarah. Let's go. AUDIENCE: [INAUDIBLE]. It's been so long since I've done one. FRANK WARMSLEY: I'll take that. So, normally, we sit and watch. If, at any time, you don't feel comfortable doing something, let us know. We'll ask you just to take your hands off the console, and we'll take care of doing whatever is necessary to keep the reactor safe.
But be aware, we're a factor of 10 lower than where we would automatically scram. So it would be very difficult for you to get to someplace where it would cause a problem without us being able to stop it. I don't know if you want to move or anything, but the supervisor normally sits kind of right in your way so that they can keep an eye on what's happening. AUDIENCE: Are we doing any announcing for these? FRANK WARMSLEY: You can go ahead and make the announcement that we're starting power manipulations, and then the last person will make an announcement that we're done with power manipulations. AUDIENCE: Commencing power manipulations. Commencing power manipulations. FRANK WARMSLEY: Right now, the reactor is on autocontrol. And when we do these manipulations, the reactor operator is going to take manual control. That'll cause an alarm to come in. And this will only happen for the first time. So one of the things she's going to do after she makes her logbook entry-- AUDIENCE: Are we filling this out? FRANK WARMSLEY: No, we'll do that at the end-- is she'll take manual control of the reactor, an alarm will come in on console, and she'll answer it. And that should be the only time you hear this alarm, because we'll leave it on manual control until the final participant has done their manipulations. AUDIENCE: All right. I hope to get to 1 megawatt at 17.11 [INAUDIBLE]. FRANK WARMSLEY: OK. AUDIENCE: [INAUDIBLE] [ELECTRONIC SOUND] FRANK WARMSLEY: Now, she's pulling out the reg rod all the way. You see the reg rod number going up. The period is getting shorter. It's no longer at infinity. It's getting closer to 100 seconds. And channel 7 and channel 9 are increasing in value. Another way you can see it is we have a display in front of the operator. Of those three displays, two of them are just for evaluation only. We don't actually use those to control the reactor. They're based on a system that hasn't been approved yet.
But we're testing them to see how well they work. So you can see that the power level on the far left is going up. The middle one is showing what the actual power level is-- we started at 500 kilowatts. It's already up to 630 kilowatts and increasing. And the period that was at infinity is now around 160 seconds. So she's watching, and she sees the 800 kilowatt value here on channel 7 or channel 9, and she's started driving in the regulating rod. So she's slowing down how quickly the power increase is going. And you see the period lengthening. It's no longer at 150-160 seconds. It's going closer to infinity again. So she's proving that she could stop the reactor power if she continued driving in this regulating rod. AUDIENCE: [INAUDIBLE] back on auto? FRANK WARMSLEY: No. She's closing in on the 1 megawatt. One of the things to note is that when she started, the [INAUDIBLE] was around 0300, 0310, and she's almost right back to there. When you raise reactor power, you basically open up a valve and let more neutrons in. And when you get to the place where you want to be, you basically close that valve again. So you basically add reactivity and then stop that reactivity addition by bringing the absorbers back to about where they started from. AUDIENCE: We're at 17.1. FRANK WARMSLEY: We're at 1 megawatt? AUDIENCE: Yeah. FRANK WARMSLEY: Go ahead and make your log book entry. So once again, she has experience. She's been doing startups and power manipulations for a while. When the rest of you sit down here, we'll guide you through those-- the log book entries that she's making and so forth. AUDIENCE: The [INAUDIBLE]. FRANK WARMSLEY: 30.6. OK. AUDIENCE: Should [INAUDIBLE]? FRANK WARMSLEY: Yep. So one of the things that can change the reactor is xenon. It's a poison that builds up in the reactor while we operate. A poison in that it absorbs neutrons without leading to fission. And it has two ways of being made and two ways of being removed.
One is direct from fission and the other is decay. That's the way it's produced. The way it goes away is basically absorbing a neutron or decaying to another isotope. AUDIENCE: [INAUDIBLE] half a megawatt at 8.56 microamps. FRANK WARMSLEY: OK. AUDIENCE: Use the same shim blade? FRANK WARMSLEY: Yep, use blade 6. And what happens is when we lower reactor power, the way we remove most of the xenon is from burn up-- basically, the neutrons being absorbed by the fission process. The fact that we don't have the reactor at a very high power means that the amount of xenon in the core isn't being removed. So the power would actually want to go down on its own. So you would have to do a lot of re-shims. And for a while, that's a very large amount of reactivity that has to be compensated for. For this experiment, though, we actually shut down the reactor yesterday and we started up early this morning. So it's not as big a factor as it normally would be after lowering reactor power like this. AUDIENCE: Do you want to have her do a re-shim now? Or do you want me to? FRANK WARMSLEY: No. I think we'll be able to get at least one more person. So once again, she's lowering reactor power. You can see on the period meter, she's at a negative period, and the reactor power is decreasing. She's almost at 500 kilowatts. She's driving the absorber out again to slow down how quickly the power level is going down. And when she's done, the shim blade will end up at about the same point where it started, the 13.42 inches out of the bottom of the core. AUDIENCE: It might not make it all the way back up to [INAUDIBLE]. FRANK WARMSLEY: It'll be close. Compensate with the reg rod if you need to. 30.8. OK. And that's the end of the exercise. |
MIT_2201_Introduction_to_Nuclear_Engineering_and_Ionizing_Radiation_Fall_2016 | 24_Transients_Feedback_and_TimeDependent_Neutronics.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: So today is going to be the last day of neutron physics. As promised, we're going to talk about what happens as a function of time when you perturb the reactor, like you all did about a month ago. Did any of you guys notice the old-fashioned analog panel meter that said, reactor period, when you were doing your power manipulations? We're going to do that today. And you're going to explore that on the homework. So I'm arranging for all of your actual power manipulation traces to be sent to you. So each one of you will have your own reactor data. You'll be able to describe the reactor period and see how well it fits our infinite medium single group equations, which it turns out is not very well. But that's OK, because you'll get to explain the differences. First, before we get into transients, I wanted to talk a bit about criticality and perturbing it. So let's say we had our old single group criticality relation. And I'd like to analyze, just intuitively or mentally with you guys, a few different situations. Let's say we're talking about a light water reactor or a thermal reactor, like the MIT reactor, or pretty much all the reactors we have in this country. What sort of things could you do to perturb it? And how would that affect criticality? For example, let's say you shoved in a control rod. Let's take the simplest scenario. Control rods in. What would happen to each of the terms in the criticality condition? And then, what would happen to k effective? So let's just go one by one. Does nu ever change, ever?
Actually, yeah, it does. Over time, you'll start-- that nu right there, remember, that's a nu bar, number of neutrons produced per fission. As you start to consume U238 and add neutrons-- and as you guys saw through a complicated chain of events on the exam, eventually make plutonium 239, which is a fissile fuel. The nu for 238 is actually different than the nu for 239. So I don't want to say that nu never changes. It's just that shoving the control rods into the reactor is not going to change nu. But it does change slowly over time as you build up plutonium. What about sigma fission? If this were a blended homogeneous reactor or a reactor in a blender, what would happen to sigma fission as you then shove in an absorbing material? Does it change? AUDIENCE: No. MICHAEL SHORT: You say, no. And I'm going to add here homogeneous. So in this case, remember, if we define the average sigma fission as a sum-- I'll add bits to it-- of each material's volume fraction, or let's say atomic fraction, times each material's sigma fission, if we throw new materials into the reactor, then this homogeneous sigma fission does change when we put materials in or take materials out. So you guys want to revise your idea? AUDIENCE: Yes. MICHAEL SHORT: Yes, thank you. There's only one other choice. Now the question is, by how much? If you put in a control rod where, let's say, the control rod's sigma fission would be equal to zero, but the volume would be small. Can't be any more specific than that. How much of an effect do you think you'll have on sigma fission? AUDIENCE: Small. MICHAEL SHORT: Very small. So let's say a little down arrow like that. What about sigma absorption? The volume is still small, but for a control rod, by definition, sigma absorption is huge. So what do you think? AUDIENCE: It's going to increase. MICHAEL SHORT: It's going to increase a little or a lot? AUDIENCE: A lot. MICHAEL SHORT: Quite a bit. Now let's look at the diffusion constant.
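The volume-fraction-weighted average written on the board can be sketched directly. All of the cross-section values below are made-up illustrative numbers, chosen only to show the qualitative effect of displacing a small volume with a strong absorber:

```python
# Homogenized ("reactor in a blender") cross-section: a volume-fraction-
# weighted sum over materials, sigma_bar = sum_i f_i * sigma_i.
# Every numeric value here is illustrative, not real material data.

def homogenize(fractions, sigmas):
    """Volume-fraction-weighted average of macroscopic cross-sections (1/cm)."""
    assert abs(sum(fractions) - 1.0) < 1e-12  # fractions must sum to one
    return sum(f * s for f, s in zip(fractions, sigmas))

# Before: fuel + water only (illustrative values).
sigma_f_before = homogenize([0.5, 0.5], [0.20, 0.0])    # fission
sigma_a_before = homogenize([0.5, 0.5], [0.25, 0.02])   # absorption

# After: displace 2% of the volume with a control rod material:
# sigma_fission = 0, sigma_absorption = huge.
sigma_f_after = homogenize([0.49, 0.49, 0.02], [0.20, 0.0, 0.0])
sigma_a_after = homogenize([0.49, 0.49, 0.02], [0.25, 0.02, 5.0])
```

The fission average barely moves while the absorption average jumps, which is exactly the little-down-arrow, big-up-arrow picture that makes k effective drop.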
And remember that the diffusion constant is 1 over 3 times sigma total minus the average cosine scattering angle times sigma scattering. What do you think is going to happen to the neutron diffusion coefficient as you throw in an absorbing material? Something that's got an enormous absorption cross-section is also going to have an enormous total cross-section, because sigma total is sigma absorption plus sigma scattering. And sigma scattering doesn't change that much. But if sigma absorption goes up, sigma total goes up. If sigma total goes up, then what happens to the diffusion coefficient? AUDIENCE: Decrease. MICHAEL SHORT: Yep, it's got to decrease. And how does inserting a control rod change the geometry? AUDIENCE: It doesn't. MICHAEL SHORT: Very, very close. Yeah, you're right. The control rod better not change the geometry, but what I do want to remind you of is that this buckling term includes-- let's say this was a one dimensional infinite slab Cartesian reactor. That little hat over there means we have some extrapolation distance. Remember, if we were to draw our infinite reactor with the thickness a and we wanted to draw a flux profile on top of that, it would have to be symmetric about the middle. And let's say we had our axes: this is x and this is flux. Flux can't go to zero right at the edge of the reactor, because that would mean that no neutrons were literally leaking out. So there's going to be some small extrapolation distance equal to about two times the diffusion coefficient. So the geometric buckling is actually pi over the reactor geometry plus 2 times the diffusion coefficient. And if the diffusion coefficient goes down, but it's also very, very small compared to the reactor geometry, how much does the buckling change, and in what direction? AUDIENCE: It increases very slightly. MICHAEL SHORT: Increases very slightly. So the buckling might increase very slightly. What's the overall net effect on k effective? AUDIENCE: It goes down.
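The two formulas in this exchange can be put into a quick sketch: the diffusion coefficient from the transport-corrected total cross-section, and the slab buckling with an extrapolation distance of roughly 2D applied at both faces of the symmetric slab. The numbers in the comments are illustrative, and treating both faces is my reading of the board sketch:

```python
import math

def diffusion_coefficient(sigma_total, mu_bar, sigma_scatter):
    """D = 1 / (3 * (sigma_t - mu_bar * sigma_s)): one-group diffusion
    coefficient (cross-sections in 1/cm, D in cm)."""
    return 1.0 / (3.0 * (sigma_total - mu_bar * sigma_scatter))

def slab_buckling(a, D):
    """Geometric buckling of a 1-D slab of thickness a: the flux vanishes a
    distance d ~ 2D past each face, so B = pi / (a + 2*d).
    (B**2 is what enters the one-group criticality condition.)"""
    d = 2.0 * D  # extrapolation distance, about twice the diffusion coefficient
    return math.pi / (a + 2.0 * d)
```

Raising sigma total (by adding absorber) shrinks D, which shrinks the extrapolated width, so the buckling creeps up very slightly, matching the exchange above.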
MICHAEL SHORT: Should go down, you would hope. If you put a control rod in, it should make k effective go down, because there's a little decrease here. Things kind of cancel out there. But the big one is putting an absorbing material, like a control rod, in should make k effective go down. And that's the most intuitive one, but you can work out one term at a time what's generally going to happen. So let's now look at some other scenarios for the same criticality condition. I'll just rewrite it so that we can mess it all up again. Now we want to go for the case of boil or void your coolant. And now we're getting into the concept of different feedback mechanisms. We've already talked once about how raising the temperature of something tends to change cross-sections in certain ways. But now let's say, what would happen if you boil your coolant? If things got really hot and the water started to boil. What do you want to happen to k effective? You want it to increase? AUDIENCE: Decrease. MICHAEL SHORT: Decrease, thank you. You want it to decrease, or else you'd get a Chernobyl. And we'll talk about how that happened in a week or two. So now let's reason through each one of these. Let's assume that nu doesn't change when you boil the coolant. What about sigma fission of the whole reactor? You're taking a little bit of material out of the reactor by taking liquid water, which is fairly dense, and making it gaseous water, which is less dense. So overall, there are more fissile atoms in the reactor proportionately when the coolant is boiled away than when it's not. So what happens to sigma fission? The average sigma fission for the reactor will go up ever so slightly. Probably not enough to matter. What about sigma absorption? If the coolant disappears. AUDIENCE: It goes down. MICHAEL SHORT: Yeah, water is an absorber. Hydrogen and oxygen-- really just hydrogen-- have some pretty non-negligible absorption cross-sections.
And if those go away, then you're losing a bit of absorber, aren't you? Actually, it's interesting. Oxygen has the lowest thermal cross-section of any element. So we can treat it as pretty much transparent. Now how about the diffusion coefficient? We've got the formula for it up there. If all of a sudden there's not much to moderate your neutrons-- yeah. AUDIENCE: Your scattering just disappears. MICHAEL SHORT: Your scattering just disappears, right? But so does some of your total cross-section. So chances are, those neutrons are going to go farther before they undergo any given collision, because there's no water in the way. So you'd expect neutron diffusion to go up. And what about geometric buckling? If diffusion goes up, then the geometric buckling-- I'm just going to make it really small. But the net effect here, once again, k effective goes down. We didn't talk about anything to do with the actual temperature effects on the cross-sections. This is just a density thing on the coolant itself. So let's now look at that. What about if you have some power spike that raises fuel temperature? I'll write it again, so we can mess it up again. So let's say you raise the fuel temperature. And that's going to cause every cross-section effectively to change if you're doing this average scenario. Let's talk a little bit about why. It's not as simple as just saying, the cross-sections go up. So let's say we had two different temperatures, cold and hot. So this would be your sigma fission cold. And this would be your sigma fission hot. For cold, sigma fission looks something like that. And as the temperature goes up, these resonances, which I'll just label right here-- resonances being specific energies where the absorption suddenly goes up, suddenly goes down-- will actually decrease in height. But they'll start to spread out more. That's about as well as I can draw it very crudely.
And the same thing goes not just for sigma fission, but for sigma anything, including absorption, including total, whatever you want. And so if your goal is to get your neutrons from the fast region where they're born into the thermal region where you get fission, broadening these cross-sections makes it more likely that as the neutron loses any amount of energy, it's going to hit one of these big resonance regions and get absorbed or taken away before it gets a chance to go to the fission region. So what this is really going to do-- it's kind of funny to say it in terms of a one group criticality relation, but your fission cross-section is actually going to go down. One reason is that the fuel physically spreads out. And so just from the density modification, you're not going to get as much. But then you've also got that effect of increased absorption from these resonance regions spreading out. The question is, which one is a bigger effect? Can't answer that with a simple statement. You'll go over a lot more of that in 22.05 when you talk about what actually defines a resonance region, how you calculate them, and how they Doppler broaden, or broaden with temperature. How about sigma absorption? AUDIENCE: It goes down. MICHAEL SHORT: Yeah, sigma absorption is going to go down because things spread out. But it might also go up because the cross-sections spread out, or the resonances spread out. What's really going to happen, though, is the reactor atoms are effectively spreading themselves apart. The coolant's less dense. The structural materials and the fuel and everything are still there. They're less dense, but there's not fewer of them in the reactor. But there is going to be less coolant in the reactor, because it has the ability to sparsify, or get less dense, and kind of squeeze out the inlet and outlet of the reactor. So what's really going to happen here is, we know diffusion is going to go up, which might cause a corresponding change in buckling.
And the net effect, as we would hope: k effective would go down. And so what we've talked about now here is directly controlling reactivity with control rods, and what's called a void coefficient, where you actually want to have a negative void coefficient. So if you boil your coolant too much, k effective should go down. And that's one of the mechanisms by which a light water or a thermal reactor can help stabilize itself. And you can see that now from just a really simplified one group criticality relation. And if you raise the fuel temperature-- let's say the fuel gets really hot because there's been some power spike-- you also want the reactor to shut itself down, which you can see that it does. Let's make things a little trickier. Let's now talk about a sodium reactor. A fast reactor. This one relies a lot more on fast fission of U238. So if we were to draw the two cross-sections of 235 sigma fission and 238 sigma fission-- remember, uranium 235 looked like the one that we drew before, whereas U238 goes something like that, with no actual scale given. I'm not going to even go there. But uranium 238 does not need moderation for the neutrons to induce more fission. So let's now write the same criticality relation, which, again, is a super simplified view of things, but that's OK. What would happen to each of these terms in a sodium fast reactor if you void the coolant? So nu won't change. What about sigma fission? Well, if the coolant goes away, then on average the fissile materials are contributing more to that cross-section, but not that much. So if you want to get technical, it might be the slightest of increases, but it doesn't matter that much. What really matters, though, is the stuff on the bottom. Sodium does have a low, but non-negligible absorption cross-section. So if the sodium were to boil away, then the absorption would go down by a non-negligible amount. And then what about diffusion? Well, we've got the formula for it up there.
If there's not as much coolant in the way, then the neutrons are going to be able to get further on average. Let's say, they're not going to be scattering around with as much of the sodium. So there might be a small increase in diffusion and a corresponding small increase in buckling. But this is where the one group kind of fails. What the sodium is actually doing is providing a little bit of moderation, so that some of those neutrons, when they bounce off of sodium, leave the fast fission region and get absorbed. And that's part of the balance of the reactor. If all of the neutrons are then born fast and don't really slow down and just get absorbed, then you might have an overall positive void coefficient. So this would tell you that in a fast reactor, where you're depending on your coolant not just to cool the reactor, but to absorb somewhat and to moderate somewhat, you don't want to boil the coolant. And that is a lot of the reason why most fast reactor coolants tend to have extremely high boiling points. Sodium is approximately 883 Celsius. Lead bismuth is approximately 1,670 Celsius. Molten salt, about 1,400 Celsius. So with all those coolants, except for the sodium one, you'll melt the steel that the reactor is made out of before you boil the coolant. So boiling the coolant is a bad day in a fast reactor, because then things will go from bad to worse, because in this case, the feedback coefficient can be positive for voiding the coolant. That's no good. So you want to keep the reactor submerged. And that's another reason why a lot of these fast reactors are what's called pool-type reactors. The reactor is not a vessel with a bunch of piping under it that can break and fail, but instead it's designed as a huge pool of liquid sodium. And then the core is somewhere in here with a bunch of pumps sending the coolant in and back out, or through some heat exchanger or something. So there's not really any penetrations on the bottom of this pool.
And you make sure that you maintain that pool, whether you have sodium or lead bismuth eutectic, or liquid lead, or some other fast reactor coolant. So these are some kind of interesting scenarios to think about. I think one of them that I put in the homework was, imagine you have the MIT reactor and replace the coolant with molten sodium. What's going to happen? Well, let's say you got all the water out first and it wouldn't just blow up. What would actually happen to the criticality relation? That's something I want you to think about, because one of the big problems on the homework is doing exactly this for scenarios that have happened to the MIT reactor, except for the sodium one. That's never happened and hopefully never will. I can't even imagine. But now let's talk a little bit about when you perturb a reactor by doing something to it, putting the control rods in, or pulling them out, or doing whatever you want. You're by definition going to take one of our first assumptions about how the neutron diffusion equation works and throw it out the window. So we're now moving into the transient regime. So to study what happens in a reactor transient, or when something changes as a function of time, let's first go from k effective to what we call k infinity, the multiplication factor for an infinite medium. We're only doing this because it's analytically easier to understand and still gets the point across. So we'll say that our k infinity is still a balance between production and destruction. The difference is, if we have an infinite medium, there's no leakage. You can't leak out of an infinitely sized reactor, should one ever exist. And so it just comes out as nu sigma fission over sigma absorption. A much simpler form. And so now we can write what would happen to the flux in the reactor as a function of time. In this case, it's going to be one over velocity. I'm going to make this a very obvious wide v.
That change in the reactor flux is going to just be proportional to the imbalance now in the number of neutrons produced and destroyed. So the number of neutrons produced will be proportional to a very sharp nu times sigma fission, minus the number of neutrons destroyed, sigma absorption, times phi as a function of t. Y'all with me so far? So this right here is a change, which is proportional to an imbalance between production and destruction, times the actual flux that you have at some given time. So to make this simpler, let's multiply everything by v. Where's my green substitute color? Multiply everything by v. And the only unfortunate situation is we have a v and a nu next to each other. I'm going to try to keep them looking really different. Those go away. And then, if we divide by phi, those phi's go away. And we have phi prime over phi equals v nu sigma fission, minus v sigma absorption. And now we can start to define things in terms of our k infinity factor and a new quantity I'd like to introduce called the prompt lifetime. It's a measure of how long a given neutron tends to live before something happens to it-- before it's either absorbed or leaks out. Well, not from our infinite reactor. And so we can define this as 1 over the neutron velocity times sigma absorption. And just to check the units here-- velocity is in meters per second. Macroscopic cross-sections are in 1 over meters. So those cancel out, and we're left with total units of seconds. That's nice. We would want a mean neutron lifetime, or a prompt lifetime, to have units of seconds, or time at least. Yep. AUDIENCE: Can you say again why the [INAUDIBLE] squared went away? MICHAEL SHORT: Why the d phi dt squared went away? AUDIENCE: No, the [INAUDIBLE] square. MICHAEL SHORT: Oh, OK. So that's because we assume we're going to be analyzing an infinite medium. So right here, to relabel these terms, this would be the total production term. That right there represents absorption.
And that right there represents leakage. But if we're analyzing an infinite medium, you can't leak out, because it takes up the entire universe and beyond, depending on what you believe metaphysically. That's a different course. So this right here, we can rewrite as 1 over the lifetime. That makes it easier. And this right here, if we note that nu-- that's a nu. I'm going to be really explicit about that. Nu sigma fission over sigma absorption. This is looking to be like our k-- wrong color-- like our k infinity over lp. So all of a sudden we have a much simpler relation. We have phi prime over phi equals k infinity minus 1 over the prompt neutron lifetime. So if we solve this, this is just an exponential. So we end up with our phi as a function of t is whatever flux we started at-- for your power manipulations, it would be whatever the neutron flux was before you touched the control rod-- times e to that stuff, k infinity minus 1 over lp, times t, which we can rewrite as t over capital T. We're going to define this symbol as what's called the reactor period. What the reactor period actually says is how long before the flux increases by a factor of e. And so this is actually what that meter was measuring on the reactor. It's the reactor period, or the time it would then take for the reactor's power to increase by a factor of e, because it's an exponential. To tell you what these typical reactor periods tend to be for a thermal reactor, T is about 0.1 seconds, corresponding to an average prompt neutron lifetime of 10 to the minus 4 seconds. Seems fast, doesn't it? Like, really fast. So the question I asked you guys is, why don't reactors just blow up? AUDIENCE: [INAUDIBLE]. MICHAEL SHORT: Yes, there is something we've neglected from here. It's like what Sarah said. And it deserves its own board. There is a fraction of delayed neutrons. We'll give that fraction the symbol beta. And for uranium 235, it equals about 0.0064.
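The prompt-only result can be written down in a few lines. The k infinity of 1.001 used in the check is an illustrative choice that reproduces the quoted 0.1-second period from the 10^-4-second prompt lifetime:

```python
import math

def prompt_lifetime(v, sigma_a):
    """l_p = 1 / (v * sigma_a): mean prompt neutron lifetime in seconds
    (v in m/s, macroscopic sigma_a in 1/m, so the units cancel to seconds)."""
    return 1.0 / (v * sigma_a)

def prompt_period(k_inf, l_p):
    """Infinite-medium, prompt-neutrons-only reactor period T = l_p / (k_inf - 1)."""
    return l_p / (k_inf - 1.0)

def flux(t, phi0, T):
    """phi(t) = phi0 * exp(t / T): flux rises by a factor of e every T seconds."""
    return phi0 * math.exp(t / T)
```

With l_p = 1e-4 s and k_inf = 1.001, the period comes out to 0.1 seconds, which is why a reactor running on prompt neutrons alone would be uncontrollable.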
So less than a percent of all the neutrons coming out of a reactor have some delay to them, because they're not made directly from fission in the 10 to the minus 14 seconds that we talked about in the timeline. But they come out of radioactive decay processes with delayed lifetimes ranging from about 0.2 seconds to about 54 seconds. This is the whole reason why reactors don't just blow up. So you can actually make a reactor go supercritical. But if the k effective is less than 1 plus beta, then the reactor is not what we call prompt supercritical. And so the reason for that is, let's say you raise the reactor power by some amount and the k effective goes up to 1.005. There's still this fraction, 0.0064, of the neutrons that are not going to be released immediately. They're going to be released not in 10 to the minus 14 seconds, but in 10 to the 2 seconds. So a measly 15 orders of magnitude slower, meaning that there's actually some ability to control this reactor as it raises its power level. And these delayed neutrons, even though that's such a small fraction, take the reactor period from its prompt value of about 0.1 seconds to about 100 seconds. So the same reactor, when you account for the delayed neutrons, increases in power by a factor of e in about 100 seconds, which means this is totally controllable. Now I have a question for you guys. Would you guys like me to derive this formula, or do you want to go into more of the intuitive implications of it? Because we can go either way. There is a formula that will tell you what the reactor period and time dependence will be. And you will hit it in 22.05, probably. I can't guarantee it because I'm not teaching it. Or we can talk a little bit more about some of the intuition behind delayed neutrons. So a bit of choose your own adventure. Math or intuition? AUDIENCE: Intuition. MICHAEL SHORT: Intuition. OK, that's fine. Good. So that was the derivation. I'll post that anyway, if you guys want to see.
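Here is a sketch of why that tiny delayed fraction matters so much, using the fraction-weighted average lifetime introduced a bit later in the lecture. The effective delayed lifetime of 13 seconds is an assumed value inside the quoted 0.2-to-54-second range, and k = 1.001 is illustrative:

```python
BETA = 0.0064      # delayed neutron fraction for U-235 (quoted in lecture)
L_PROMPT = 1e-4    # prompt neutron lifetime in seconds (quoted in lecture)
L_DELAYED = 13.0   # assumed effective delayed lifetime, within the 0.2-54 s range

def mean_lifetime(beta, l_p, l_d):
    """Fraction-weighted average lifetime: (1 - beta)*l_p + beta*l_d."""
    return (1.0 - beta) * l_p + beta * l_d

def period_with_delayed(k, beta, l_p, l_d):
    """Approximate reactor period using the averaged lifetime."""
    return mean_lifetime(beta, l_p, l_d) / (k - 1.0)

def prompt_supercritical(k, beta):
    """Delayed neutrons only buy you time if k stays below 1 + beta."""
    return k >= 1.0 + beta
```

With k = 1.001 this gives a period of roughly 80 seconds, the order of the 100 seconds quoted, versus 0.1 seconds on prompt neutrons alone: the averaged lifetime is dominated by the beta-weighted delayed term even though beta is under one percent.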
I think in the Yip reading it says, let's account for the delayed neutrons. Intuitively we find that the answer ends up being-- so I'll skip the derivation. And it comes out to phi naught, e to the (1 minus beta) k, minus 1, over l, times t, plus beta phi naught over (1 minus beta) k, minus 1, times 1 minus e to the (1 minus beta) k, minus 1, over l, times t. OK, so left as an exercise to the reader-- AUDIENCE: That's intuitive. MICHAEL SHORT: Yeah, that's intuitive. But let's actually talk about how intuitive it is. I do want to give you the starting and the ending equation. And we will not go through the rest. Yeah, Charlie? AUDIENCE: Should we copy that down? MICHAEL SHORT: No, you shouldn't. I'm going to scan it for you guys. So don't bother copying it down. Let's talk about where it comes from. And the answer may astound you because we're going to bring right back the idea of series radioactive decay. So let's say you want to relate the change in the neutron flux to a 1 minus-- I'm going to take a quick look at the original equation because I don't want to screw that up. That's the first page, and that's the one we want. Let's say we had some equations that looked something like this. Phi plus phi naught times beta. This is the original differential equation from whence it came. And the intuitive part that I want you to note is that the jump from changing k effective is moderated by this term right here, 1 minus beta. So that's the fraction of prompt neutrons that, as soon as you pull the control rod out, is your instantaneous feedback. By instantaneous, I mean on the order of, like, 10 to the minus 4 seconds, or something that you can't really control. This right here represents the delayed fraction. This is as mathy as it's going to get because you've chosen intuition. I think you have chosen wisely. It's going to be more fun.
So what this represents right here is your kind of instant change, because whatever you change k effective to, it's going to be moderated by the prompt fraction, how long the neutrons tend to take to undergo that feedback. Yes, Sarah? AUDIENCE: Was that the average? MICHAEL SHORT: The average what? AUDIENCE: Average neutron lifetime. MICHAEL SHORT: Yes, this is the average neutron lifetime. So let's define the average neutron lifetime as simply 1 minus beta times the prompt neutron lifetime, plus beta times some delayed neutron lifetime. What no book I've ever seen actually says is that this is what's referred to as a Maxwell mixing model. It's just the simplest thing to say, oh, if you want to get the average of some variable, take the fraction of one species times its variable, plus the fraction of the other species, times its variable. Folks do the same thing with electrical resistivity, thermal conductivity, or any sort of other material property. And it is or isn't a good model depending on the situation. Like, if you had a piece of material made out of two different things-- let's say this part had thermal conductivity k1, and that part had thermal conductivity k2. Would a Maxwell mixing model be appropriate to describe the flow of heat across this thing? Probably not. But in the case of neutrons, where they're flying about like crazy and their mean free path is much larger than the distance between atoms, this works great. So we can define this mean neutron lifetime and use that in this equation right here. So this term right here describes the instantaneous change. You pull the control rods out, and a fraction 1 minus beta of the neutrons respond immediately. What about the other fraction of neutrons? Those are being produced with a fraction beta depending on what the flux was before, because they're still waiting to decay from the old power level. Does anyone notice anything suspiciously familiar about the final form of this equation for flux?
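The Maxwell-style mixing of the two lifetimes can be sketched directly. The delayed lifetime of 12.5 seconds is an assumed representative value inside the 0.2 to 54 second range quoted in the lecture; with it, the mean lifetime and the resulting period land near the roughly 100 seconds quoted earlier.

```python
def mean_neutron_lifetime(beta, l_prompt, l_delayed):
    """Maxwell-style mixing: l_bar = (1 - beta) * l_p + beta * l_d."""
    return (1.0 - beta) * l_prompt + beta * l_delayed

# Lecture values: beta = 0.0064, l_p = 1e-4 s.
# l_delayed = 12.5 s is an assumed representative delayed lifetime.
l_bar = mean_neutron_lifetime(0.0064, 1e-4, 12.5)   # ~0.08 s

# With an assumed k of 1.001, the period stretches from ~0.1 s to ~80 s,
# the same order as the ~100 s quoted in the lecture.
period = l_bar / (1.001 - 1.0)
```

Even though beta is under a percent, the delayed lifetime is so much longer than the prompt one that it dominates the average, which is the whole point of the intuition here.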
You've seen it before with a couple of constants changed around. What about the form of this differential equation? [INTERPOSING VOICES] It is exactly the same as series radioactive decay. So the horrible derivation I was going to do for you guys-- and now we're not-- is to use an integrating factor. You solve it in exactly the same way. You bring everything to one side of the equation. Find some factor mu that makes this a product rule. Do a lot of algebra. And you end up with a very suspiciously similar looking equation. So it's exactly the same posing and solution as series radioactive decay, with the difference being that's the constant in front of everything, instead of a bunch of lambdas and fluxes. So what this says here is that the flux as a function of time-- this is the prompt feedback right here-- which says that if-- let's graph it, since we're going intuitive. There's no room. Even those-- all boards are full. OK, here we go. If we graph time and flux right here, what that part right there says is that you're going to get some sort of instantaneous exponential feedback. But it's going to be moderated by this one minus exponential on top. So you're going to end up with a little bit of prompt feedback, this stuff right here. And then-- I have to draw longer because it's going to take forever-- you'll have some delayed feedback, because you've got to wait 100 or so seconds, or whatever that new reactor period is, for the delayed neutrons to take effect. And that's the whole reason you could pull the control rods out at almost any speed you wanted and the reactor doesn't just explode. If you pull the control rods out fast enough, such that the change in k effective is greater than beta, then the reactor goes prompt super critical, which means you don't have any delayed neutrons slowing down the feedback. And you've kind of turned your reactor into a weapon. A very poor, terrible weapon, but a prompt super critical nuclear device, nonetheless.
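The prompt-jump-then-slow-rise shape being graphed here can be sketched with the standard one-delayed-group prompt jump approximation. This is a textbook form, not the exact expression derived on the board, and the effective precursor decay constant lam is an assumed value.

```python
import math

def flux_after_step(phi0, rho, t, beta=0.0064, lam=0.08):
    """One-delayed-group prompt jump approximation for a small reactivity
    step rho < beta: an instantaneous jump to phi0 * beta / (beta - rho),
    then a slow exponential rise on the delayed-neutron timescale.
    lam is an assumed effective precursor decay constant in 1/s."""
    if rho >= beta:
        raise ValueError("prompt supercritical: approximation breaks down")
    jump = beta / (beta - rho)
    return phi0 * jump * math.exp(lam * rho * t / (beta - rho))

phi_jump = flux_after_step(1.0, 0.001, 0.0)   # the small prompt jump, ~1.19
phi_later = flux_after_step(1.0, 0.001, 10.0)  # the slow delayed rise afterward
```

Note the jump alone is nowhere near a full e-fold of power; the rest of the rise has to wait on the delayed neutrons, which is exactly why the power trace shows a small step and then a slow climb.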
Did anybody pull out the control rods too fast and the controls took over for you? What about you guys in training? Did you ever do things when you watched the automatic control take over? No? AUDIENCE: It'll just take over. AUDIENCE: Yeah, it'll kick you off if you don't pay attention. MICHAEL SHORT: That's what I mean. The machine takes over and it will kick you off and stop responding to you. AUDIENCE: [INAUDIBLE] horrible noise and so we don't want that. It's more to avoid an annoying alarm. MICHAEL SHORT: I see. But the annoying alarm is to stop you from doing something like that, like, making the reactor go prompt super critical. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: OK, so that's what I would call the machine taking over. AUDIENCE: Oh, I see. AUDIENCE: It'll kick you on to manual, and then [INAUDIBLE] still don't do anything [INAUDIBLE]. MICHAEL SHORT: Yeah. So if your blood alcohol level is above beta and you try and, let's say, increase the reactor reactivity too much, it will then take over, insert a control rod, make a horrible noise, and say, go home, you're drunk. Something like that. OK, that makes sense to me. So what did your guys' reactor power traces look like? Did they look something like this, where there was an initial rise as you pulled the control rod out? And then after you pulled the control rod out, the power kept rising just a smidge, right? And what happened when you put the control rod back in? Let's say you put the control rod back in. You're going to get another prompt drop, not equal to the same prompt gain that you got, because now the reactor's at a different flux, and then some asymptotic feedback like that. And so this is why, to those who don't understand neutron physics, reactor feedback is very non-intuitive. It's not a linear system. You can't just pull the control rod out and change the power accordingly.
This is why there's automated controls in systems to stop you in case, like I said, your blood alcohol content's above beta, which is very low, by the way. Though you shouldn't be drinking on the job, especially at a nuclear reactor. Plus, you're all under 21, so what am I even saying? AUDIENCE: What is alcohol? MICHAEL SHORT: That's right. Good answer. What is alcohol? AUDIENCE: Is that going to be covered in [INAUDIBLE] MICHAEL SHORT: That'll be on the exam, yeah. AUDIENCE: What is alcohol? MICHAEL SHORT: Yeah, cool. So that's all I want to go into for the intuitive stuff. And it's about five of. So I'd like to stop here and see if you guys have any questions on neutron physics as a whole. Noting that we're going to take Thursday's class and turn it into a recitation. So I would like all of you guys to look at the problem set, because it is posted. It is hard. Trust me. This one's a doozy. So I want to warn you guys because you've got seven days to work on it. But I want you to look at it so that we can start formulating strategies for the problems together on Thursday, because there are some tricks to it. You guys know me by now, right? There's always some sort of a trick. Like, do you have to integrate every energy to get the stopping power? No, you actually don't have to do any integrals at all. But you can if you want, and your answer will be more accurate and correct. It'll just take longer to get to. So there's a lot of diminishing returns on these problem sets. If you're willing to take an hour and think about how you can do this simpler and with fewer decimal points, you're probably onto something. And we'll work on those strategies together. AUDIENCE: Is this due next Monday as well? MICHAEL SHORT: Yes. So I posted it yesterday at around noon, or whatever the Stellar site says. I'll also teach you guys explicitly how to use Janus. So we got a comment in from the anonymous feedback saying, we have to use a lot of software.
Can we have some sort of tutorial for dummies? Well, you guys aren't dummies, but you still deserve a tutorial. So I will show you how to export the data you'll need from this problem set for Janus. So you can focus on the intuition and the physics and not get frustrated with getting data out of a computer. So any questions on anything from the neutron diffusion equation? Yeah, Luke. AUDIENCE: I'm not real clear on [INAUDIBLE] neutrons are and how those are different from the prompt neutrons? MICHAEL SHORT: The prompt neutrons come right out of fission. If we looked at that timeline of, let's say, the fission event happens here. Two fission products are released in about 10 to the minus 14 seconds. They move a little further apart. And then some of them just boil off neutrons, because they're so neutron heavy, after around 10 to the minus 13 seconds or so. These right here are prompt neutrons, coming directly from the immediate decay of neutron rich fission products. Some of the delayed neutrons come from radioactive decay, but of the much later fission products with much less likely occurrences, which is why the fraction is very low. But also, because it's a much longer half life, those delayed neutrons take seconds, instead of picoseconds, to show up. And that's the whole basis behind easier control and feedback in a reactor. Good question. So anything starting from neutron transport, to simplifying to neutron diffusion, to getting to this criticality condition, making the two group criticality condition if you want to have fast and thermal, or any of the time dependent stuff that we intuited today. Yeah? AUDIENCE: So for that cross-section [INAUDIBLE] you have there, so you have one for 235 and one for 238. 235, it has to be thermal neutrons [INAUDIBLE] fast? MICHAEL SHORT: Yep. AUDIENCE: And you said that [INAUDIBLE] different [INAUDIBLE] as well it had-- if you have [INAUDIBLE] into the-- MICHAEL SHORT: Was it on the other board or from a different day?
AUDIENCE: It was a different board. MICHAEL SHORT: OK. AUDIENCE: Yeah, so could you explain that graph? MICHAEL SHORT: Yes. So in this case-- let me get a finer chalk. This blue one would be for low temperature, and this red one would be for high temperature. So this blue graph, there are resonances, which have very high values, but they're very narrow. And because a resonance is so narrow, it doesn't much affect the probability that a neutron scattering up here and moving some distance down the energy spectrum will land in it. Thinner resonances tend to get passed over, especially if your reactor's full of hydrogen. Some of those neutrons will be born and immediately jump into the thermal region, where it's easy to tell how much fission they'll undergo. As you go up in temperature, you undergo what's called Doppler broadening, which causes these resonances to spread out and also go down in value. So the actual value of the cross-section at these resonances is lower, but the widths are larger. So there's a higher probability that a neutron scattering around and losing energy will hit one of these higher cross-section regions, called a resonance, at a higher temperature. That's the difference there: these two plots show the same cross-section at low and high temperature. These plots show the difference between uranium 235 and uranium 238. Good question. Anyone else? Cool. OK, for the first time in history, I'll let you out a minute early. Bring all your questions on Thursday. So we'll start off with a Janus tutorial. And then we'll start attacking this problem set together. |
MIT_2201_Introduction_to_Nuclear_Engineering_and_Ionizing_Radiation_Fall_2016 | 27_Nuclear_Materials_Radiation_Damage_and_Effects_in_Matter.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So I got much more than one request to do some stuff on nuclear materials, and I think it's just about the right time. That you guys know enough about radiation interacting with matter and everything, and stopping power, and processing, to actually make sense of nuclear materials and radiation damage. And this is my whole theme, so happy to come talk to you guys about this and show you why I think it's interesting. Because it all goes-- this slide kind of gets onto it. It starts off with the single-atom atomic defects that make up the basic building blocks of damage and ends up with things that break in nuclear reactors under radiation. And so to understand the whole thing, you've got to know everything from the single atoms on the sort of femtosecond scale, all the way up to the engineering scale where things evolve over years or even decades. So we'll be talking-- first, probably today, we're going to go over a material science primer. So who here has had any courses in material science? No one. That's good because I'm assuming that there is a-- see, no one knows anything here. I know there's a couple material scientists in the class, and I'll apologize ahead of time if it's a bit of a review. But we'll be going mostly through what are materials and what are the defects that change their material properties, and how do they behave. That'll take us through about today. So then tomorrow, we can see how radiation causes those defects and actually changes material properties. 
So there's a whole laundry list of different ways that materials fail, and most folks are concerned with all of these-- everything from simple overload, which means you stress something too much and it just breaks, to all the different forms of corrosion. That's a whole field in itself. And then there's the things that we just have to worry about because they're only activated with radiation damage. And in this case, this isn't quite ionization by radiation, but it's actual radiation slamming into nuclei and moving atoms out of their place. And we've got one figure that we had recently in a paper that sums up the entire multi-scale picture of radiation damage, from the femtosecond to, let's say, the megasecond scale. Or I think it's more than that. Maybe gigasecond would be the right word for that. And all the way down from the angstrom to the meter scale. And I want to walk you through sort of a length scale by length scale depiction of radiation damage. It all starts with knocking atoms out of place. We've mentioned this a little bit when we talked about nuclear stopping power, and this is where it actually comes into play. Sometimes an incoming neutron or photon or ion can displace an atom from its original site, and we call that a physical-- it's a displacement. And then that atom comes off with quite a bit of kinetic energy and can knock into a whole bunch of other atoms. Now this loss of the solid crystalline structure, you can't really tell what the original structure looked like, right? It actually comprises a very small, localized zone of melting called a thermal spike. If you think about, all these atoms are vibrating at fractions of an eV-- at thermal energies, like the thermal neutrons we talked about in the reactor. Then you hit them with an MeV neutron. They might transfer 100 keV of energy. And a bunch of these atoms will then be moving about at, let's say, a few hundred eV. That's way beyond liquid temperature.
So actually, it's been theorized that there's a little pocket of atoms around three to five nanometers wide that reaches, like, 10,000 Kelvin for a very, very short amount of time-- less than a picosecond. Because almost instantly, those atoms knock into the ones around them, and this is how the process of heat transfer occurs. And so, very quickly, you get what's called the quench, where most of those atoms very quickly knock into other ones, slowing down, finding their equilibrium positions again, but not every one. You can see there's a few places where the atoms are still out of their original location. And it's those residual defects that actually comprise radiation damage. And as those defects build up, they start to move. They can diffuse. They can be transported ballistically by more radiation damage. They can move by all sorts of different mechanisms and eventually find each other, forming what's called clusters. So a bunch of those missing atoms could find each other and make a hole, which we call a void. A bunch of the extra atoms shoved in between the other ones can form things called interstitial clusters. We say interstitial because it's like in the space in between where you'd normally find some atoms. So let's say you had a whole bunch of those missing atoms come together, forming a void. This is an actual Transmission Electron Microscope, or TEM, image of a void-- pockets of vacuum in materials. Notice anything interesting about its shape? AUDIENCE: It's, like, rounded. PROFESSOR: It's rounded, but what's most striking to me is it isn't actually round. So you would expect a void or a bubble to be kind of spherical, right? That's the minimum energy configuration of most things. Not so when you have a little pocket of vacuum. It's where crystallinity comes into play. And these voids can end up forming superstructures. What curious thing do you notice here? For this whole ensemble of voids. Yeah? AUDIENCE: It seems like they're all in line. 
PROFESSOR: They are all in the same direction. Kind of funny. That's definitely not an accident, right? That's not like they're randomly aligned. There's a reason for this, that we'll go into in a couple slides. Yeah? AUDIENCE: What's the size scale here? PROFESSOR: The size scale? I think these are on the order of 20 nanometers or so. Yeah, I cropped these images just to get points across. Let's see if it says in the older one. Not quite. Yeah, but these voids can get upwards of tens of nanometers. As small as single atoms. Yeah? AUDIENCE: Sorry, what is this? PROFESSOR: This is the accumulation of radiation defects into what's called voids. Yeah. Don't worry, we'll go over it in more detail again. And if you get little pockets of vacuum in your material, you're not creating or destroying mass. You're just moving it. So those voids, where that mass was has to go somewhere else, and you actually get things that swell in the reactor on their own. They don't change mass but they change volume. They just kind of puff up like Swiss cheese, sometimes upwards of 20% or 30% changes in diameter and length for some tubing. Now if you're depending on these fuel rods being a certain space apart in a reactor and they start to swell, squeezing out the coolant, you lose the ability to cool the reactor. Because then how can you get water around something where the tubes have then swelled together? There's lots of other bad things that can happen, which we'll get into. And so then that's the origin of void swelling. From single missing atoms called vacancies, they can cluster into voids which then cause physical dimensional changes of materials on the scale of centimeters to meters. And that's why we say it's this full multi-scale picture of radiation damage. But to understand, what is damage, you have to know what is an undamaged structure to begin with. So it doesn't make sense to say, how does a structure change, if you don't know how it behaves. 
So I want to give a very quick primer to material science. And apologies to any material scientists in the room because this is going to seem really basic, but this is a very quick intro to this whole field. I want to go over quickly, what is a crystalline solid? A perfectly undamaged material would be a set of atoms lined up in a very regular lattice, a regular array, where you move over a certain distance and you find another atom. And this extends forever and ever and ever, all the way out to when you reach the free surface. And so this is what we would call an undamaged material. A pristine, perfect, single crystal. By crystal, I mean an arrangement of atoms in a certain direction. So notice here, all of the atoms are lined up in, let's say, some cubic xyz way. That's what we would call one crystal or one grain. You'll hear both of those. And you'll notice also that the arrangement of the atoms tends to determine what the physical objects look like. Or we like to say that form follows structure in material science. So for materials like pyrite, which follows a simple cubic structure, that's the crystals you pull out of the ground. They mimic their atomic configurations in physical centimeter-sized space. For gold atoms, they adopt a slightly different structure. It's still cubic, but there are atoms shoved into the cube faces. It's what we call Face-Centered Cubic, or FCC. And you start to see cube-looking structures all over single crystals of gold. Another one, gypsum. It's got a very different type of structure called monoclinic, where none of the sides of this parallelogram are the same and there are some funny angles. But if you look at the arrangement of the atoms and the actual crystals of gypsum that grow, you see a striking similarity, which I find pretty neat. I also want to mention, what is the absence of structure in material science? We call that something that's amorphous. Amorphous means without form.
So for example, crystalline indium phosphide would have this regular structure like this. You move over a certain distance, you see another green atom, and so on and so on and so on. In an amorphous material, it can still be a solid, but there is no fixed distance between any certain types of atoms. And radiation can cause a lot of this amorphization by knocking the atoms about and having them freeze in random configurations. This is one of the ways that radiation damage can embrittle materials because-- well, we'll get into that. So now let's talk about the defects that can be created in a perfect crystal. The simplest ones, we call point defects. They're zero-dimensional because they're just single atoms out of place. You can have what's called a vacancy, where if you had, let's say, a face-centered cubic lattice of atoms, where you have atoms on every cube corner and every face, if you just pull one out somewhere, we refer to that as a vacancy. A missing atom. It had to go somewhere, though, and we'll get to where it is in just a second. So it might be kind of hard to conceptualize, how do we know that there are missing atoms in all these little cubes or lattices? We do have direct evidence. They're what's called quenching studies, where you can measure the resistance or resistivity of a piece of material after heating it to a certain temperature. Because it turns out that the hotter you make something, the more of those vacancies just naturally occur. You won't actually ever find an absolutely perfect single crystal anywhere in nature, unless you go to zero Kelvin for infinite time, then the atoms arrange themselves thusly. There's always some amount of atomic vibration going on. And there's actually some thermodynamic energy gain to having a few defects in your structure. And that number of defects increases with increasing temperature. 
Once you get to the melting point of a material, or like right before something melts, you can have up to 1 in 10,000 atoms just missing. Moved somewhere else. We call that the thermal equilibrium vacancy concentration. And we can measure that using these resistivity measurements, where you heat materials up to higher and higher temperatures, cool them down suddenly in, like, liquid nitrogen or liquid helium, and measure the change in resistivity. The more defects there are, the harder it is for electrons to flow through. And the only thing that could really be responsible there in a single element would be vacancies. So we do know that these really exist. They can also cluster up. It turns out that every time you have a vacancy in a material, the other atoms move in a little bit towards it, relaxing the pressure they feel from the atoms nearby. And one way for a whole bunch of vacancies to lower the stress of the whole atomic configuration is to cluster together. So if you have a whole bunch of vacancies, they may not allow as much stress accommodation when they're separate as when they're together. Now you might ask, what happened to the original atoms? You can't just take atoms away and have them go nowhere, because you can't just destroy matter, right? Unless you turn it into energy, which is what we do in nuclear engineering. So in the material science world, they end up as what's called interstitials, where you kind of have a vacancy created from somewhere that knocks that atom out, and it gets stuck in the next biggest space between some other atoms. And we refer to those as interstitials. And those can cluster up, too, to reduce their total stress in the lattice. They can cluster up into what's called split dumbbell interstitials. Instead of having one extra atom shoved in here, you might rearrange a couple so there's two atoms in the center of a cube instead of one. And that tends to be a lower energy or a more stable configuration.
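The thermal equilibrium vacancy concentration quoted above follows an Arrhenius form, c_v = exp(-E_f / k_B T), ignoring the entropy prefactor for this sketch. The formation energy of 1.4 eV and melting temperature of 1800 K are assumed illustrative numbers (roughly iron-like), chosen to show how about 1 vacancy per 10,000 atoms falls out near melting.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def vacancy_fraction(E_f, T):
    """Thermal equilibrium vacancy concentration c_v = exp(-E_f / (k_B * T)).
    Entropy prefactor taken as 1 for this sketch."""
    return math.exp(-E_f / (K_B * T))

# Assumed values: E_f ~ 1.4 eV, T_melt ~ 1800 K
c_v_melt = vacancy_fraction(1.4, 1800)   # on the order of 1e-4, i.e. 1 in 10,000
c_v_room = vacancy_fraction(1.4, 300)    # vanishingly small at room temperature
```

The steep temperature dependence is what the quenching experiments exploit: freeze in the high-temperature population, then read it off as extra electrical resistivity.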
So let's look a little bit at the energetics of these point defects because understanding how they move and why will tell us a lot about how radiation damage happens. So it turns out that interstitials are very hard to make. It's really hard to shove an atom where it doesn't want to be. But once you get it there, it moves very easily. Let's draw a quick, simple cubic lattice to do a little thought experiment and explore why that might be. Let's say I want to shove an interstitial atom in here between these other atoms. Well their electron clouds are going to repel, and it's going to push all the nearby atoms away by just a little bit. And these ones might push the other atoms away by just a little bit, stretching out the lattice, or adding some compressive stress wherever that interstitial is. But then how would it move? What's the biggest barrier it has to overcome to get to the next adjacent location? Well, which direction would it go? Would it go this way? Probably not. There's an atom in the way. So it's going to find the path of least resistance to try to get over here, because like we've talked about before, all atoms are always in motion. Vibrating. Some of them will be energetic enough to squeeze through these two atoms and get over to the next site. And that turns out to be a pretty easy process. We can look at the energy required for an interstitial to move. We notice it's really small fractions of an electron volt, whereas creating them takes two or three electron volts. In atomic land, that's a very high energy penalty. Now let's look at vacancies. They're quite the opposite. They're rather easy to make but they're very hard to move, compared to interstitials. Notice that the energy of movement is about the same as the energy of formation for vacancies. To take an atom out or to pluck it out, you have to break every bond between nearby atoms. So you actually have to put energy in to break those bonds and then remove the atoms somewhere else. 
Now these things are usually made in pairs, so if you think about how much energy would it take to cause a single radiation damage event where you have one vacancy, which let's say would have been right here, and one interstitial, it takes the sum of these two energies-- usually about four electron volts. That's not something that tends to happen chemically or from stress or from something like that. But radiation coming in with hundreds of keV or even MeV neutrons, anything's on the table because it's high enough energy. Yeah? AUDIENCE: What would take about three or four eV? PROFESSOR: So it would take about three or four eV to make a pair of a vacancy and an interstitial. If you just add these two up. It comes usually to about three or four eV, or electron volts. And that's a very difficult thing to do in sort of chemical world, where reactions might proceed with fractions of an electron volt. But when you have MeV neutrons coming in, they do whatever they want. They'll do whatever they will. So someone actually asked me yesterday, what sort of materials can you put in the way of neutrons to stop them from doing damage? And the answer is, pretty much nothing. Fast neutrons tend to travel about 10 centimeters, even in things like steel or water, and they're going to hit what they're going to hit. There's not much you can do but put more things in the way. And we can only get to a certain density with regular matter. And I think osmium has upwards of, like, 22 grams per cubic centimeter density. That's not enough to stop neutrons, even over a considerable distance. Unless you had, like, liquid neutron star, that you could pack nuclei in at a way higher number density, not much you can do. 
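The energy bookkeeping for making a vacancy-interstitial pair can be sketched in a few lines. The individual formation energies here are assumed illustrative values for a generic metal, picked so that the sum lands in the three-to-four eV range quoted in the lecture; they are not data for any particular material.

```python
# Assumed illustrative formation energies, in eV
E_VACANCY_FORM = 1.4       # pluck an atom out, breaking its bonds
E_INTERSTITIAL_FORM = 2.5  # shove that atom where it doesn't want to be

# Radiation makes these in pairs (a Frenkel pair), so the cost is the sum:
E_FRENKEL_PAIR = E_VACANCY_FORM + E_INTERSTITIAL_FORM   # ~3.9 eV

# Compare scales: chemical reactions work with fractions of an eV,
# so they almost never pay this price. A ~1 MeV fast neutron, though,
# carries enough energy for hundreds of thousands of pairs in principle.
NEUTRON_ENERGY_EV = 1.0e6
pairs_possible = NEUTRON_ENERGY_EV / E_FRENKEL_PAIR
```

This is the quantitative reason that "radiation does whatever it wants": the per-defect cost that shuts chemistry out is a rounding error against an MeV neutron.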
So moving up in the dimensions, there's another type of defect called a dislocation, where it's actually energetically favorable to slide an extra half-plane of atoms in between two sets in here in the crystal lattice, creating a sort of bulged-out structure like you see right here. And dislocations are one of the most important defects in material science and radiation damage. They're what I like to call the agents of plasticity. If you deform a material enough that it doesn't just spring back, then most likely, you were creating and moving dislocations in the material. If you think about a couple of different ways to cause deformation-- let's bring our perfect lattice back without all these extra notations. If you want to slide or shear two planes of atoms across, and they're all bonded to each other, what do you physically have to do? How can you get these atoms to slide across each other? What sort of energy do you have to put into it? Yeah? AUDIENCE: [INAUDIBLE] energy. PROFESSOR: Yep. Because all these atoms are bonded to each other, if you want them to move, you have to break every bond on that plane. That's a lot of atomic bonds to break and it's extremely unlikely that that would happen. In fact, if you broke an entire plane of bonds in some material like this, what would you physically do to it? You'd snap it in half. That would be fracture. So if you broke every bond down this plane, you would then have two pieces of this fuel rod. That's usually a pretty high-energy thing to try to do. So instead, if you shove an extra half plane of atoms in there, and the bonds are kind of funny like so, right at that extra half-plane location, then what you can actually do is break one. Let's say you break this one, form the next one, then break this one and form the next one. And for a few atoms to move over, you only have to break a line of bonds, not a plane. So it's much less energy-intensive to get a dislocation to move than to just break something in half. 
Now you might ask, well, then why do things actually break? Whether or not things deform or break is a balance between this process, which we call slip, and breaking an entire plane of atoms, which we call fracture. So this one's called slip. The other mode is fracture. We would rather materials deform in systems like reactors by slip, just moving a little bit, than just breaking altogether. Unfortunately, when enough radiation hits materials, you can fracture things in a brutal manner, and we'll see what happens then. There's a couple kinds of dislocations. One of them is called a screw dislocation. So imagine you had a whole bunch of sheets of atoms, and you made a cut halfway through that sheet and then moved every plane up by one position. You then got what's called a screw dislocation-- kind of a spiral parking garage of atoms surrounding that core right there. You can also have what's called an edge dislocation, which is like the one I've got here on the board right here, where you just have an extra half plane of atoms shoved in right there. So there's two types, and they move in two different ways. The edge dislocation behaves like you may physically expect. If you kind of push like this on two planes of atoms, it moves in the direction you push it. Screw dislocations are kind of screwy. If you push like this, it moves perpendicular. Not going to get into why, but just remember, screw dislocations are fairly screwy in the way that they behave. Not quite intuitive. But that's OK. We don't have to worry about those. And the way that they actually move, like we showed right here, is by what's called glide, or slip, where dislocations can slide just by one plane of atoms or one atomic position in a mechanism that looks something like this. Where, as that dislocation moves, you only have to break a line of bonds and then reform a line of bonds, which is a much easier process than breaking an entire plane at once.
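The energy argument here (fracture must break a whole plane of bonds at once, while a gliding dislocation only breaks one line of bonds per step) can be made concrete with a toy bond count. The numbers below are hypothetical, chosen only to show the scaling.

```python
# Toy bond count for an n-by-n atomic plane: fracture breaks all
# n*n bonds across the plane at once, while one glide step of a
# dislocation breaks only a single line of n bonds.
# n here is hypothetical, just to show the scaling.

n = 1_000_000                      # atoms along one edge of the plane
bonds_fracture = n * n             # whole plane at once
bonds_glide_step = n               # one line per slip step

print(bonds_fracture // bonds_glide_step)  # glide step is n times cheaper
```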
It's like you have to break the square root of the same number of bonds. I'm going to skip ahead through some of that. There's one other mechanism of dislocation movement that's important to us in radiation damage and that's called climb. This is when you start to think about, what happens if you have a dislocation, which we'll give this symbol right here, and you also have a vacancy, let's say created by radiation damage. If that vacancy can move, it's going to find the most stressed-out part of this lattice. Most likely, the vacancy will move here. In other words, the atom will move over there, leaving this vacancy over there. It's kind of funny to think, like, what does it mean that a vacancy moves? Has anyone ever done anything with semiconductors and talked about electron and hole movement? OK, yeah. So what does it really mean for a hole to move, right? A hole's not a thing. A vacancy's also not a thing. It's an absence of an atom. But here, we can say that the vacancy moves in this direction when the corresponding atom moves in the exact opposite direction. And then what you've actually done is moved your dislocation up. Instead of moving in the slip direction, you've now moved it in a perpendicular direction. This is usually not possible without things like radiation damage or very high temperature. And then, to make things even crazier, you can also have what's called loops of dislocation, some videos of which I'll actually get to show you. You can have a dislocation that has part edge character, part screw character. If you look at how the atoms are arranged here, you're looking from sort of the top-down. You can see that there's an extra half plane of these white atoms shoved in in the black ones, and this right here would be a completely edge dislocation. You can have a gradual transition, where about 90 degrees later, it looks like a spiral and that's a screw dislocation. 
And the net effect of that is when you push in this direction on an edge dislocation, it moves that way. When you push this direction on a screw dislocation, it moves that way. So when you stress out a dislocation loop, it just grows. You're not actually creating or destroying matter, but what you're doing is causing this small loop of extra half plane of atoms to grow further and further until it actually reaches some obstacle or the outside of a crystal. And these dislocations can actually feel the force from each other. If I draw a clean one because I think it'll be easier to see-- if I draw a small lattice of atoms here and then a dislocation core right there. So that's our dislocation core. This region of space right here is compressively stressed. There's more atoms in that space than there want to be and so it's kind of crammed in there. While this region right here is in what's called tensile stress. There's almost some space, like right here, where there's too few atoms and they kind of want there to be more. And these dislocations can feel neighboring stress fields. Let's say there was another one right over here that had its own compressive stress field. They'll actually repel each other because you don't want to add even more compressive stress to anywhere in this group of atoms. So they'll actually repel each other to the point where, if you get two dislocations too close to each other, they'll undergo what's called pile-up, or they'll refuse to move a bit. So I want to show you some videos. We can actually see these dislocations. In this one, you see that faint line right there originating from this area? That's actually a dislocation loop under stress and that's actually growing. So what you're seeing here is an image of electrons passing through material and looking at regions of different contrast. So wherever there is more atoms or fewer atoms, it looks darker or lighter, and that can tell you what sort of defects there are.
You guys all see that faint line right there? Notice how the loop's just growing. It's not like you're moving a line, but you're literally growing a line out of what looks like nothing. There's another one we call a Frank-Read source. It's a source of dislocation loop. So what you're seeing here, each of these lines is a single dislocation. And then right there, you see that loop suddenly form? Let's show you that one again. I'll point on where to look. By stressing out materials, you can actually create additional dislocation loops, right around here. And there it is. You guys see that one? Yeah. Out of what looks like nothing but is actually just a couple of atomic defects, you can create a dislocation loop and allow more plastic deformation to take place, which I think is awesome. Look at this one. Another dislocation source in germanium. It's a little easier to see, also because it's making this sort of spiral set of dislocations a little slower. So you can track its motion a little easier. Notice how they all kind of line up on certain atomic planes. Yeah? AUDIENCE: Does the topology of these things ever change, or is it always just a slow [INAUDIBLE] PROFESSOR: The topology will change. Let's say, if it hits another obstacle or another dislocation, yeah, they can slam into each other and change topology. AUDIENCE: Breaking too [INAUDIBLE] PROFESSOR: All sorts of things, yeah. That's a subject for a whole other class, I'd say. I want to skip ahead to the pile-up because I think this kind of gets the point across. But actually, we can see direct evidence that dislocations feel each other's stress fields. When you get enough of them lined up, they won't overlap. They actually push each other in a kind of dislocation traffic jam. Because what's happening on the atomic level is, they feel each other's stress fields. There might be a source of dislocations further away, but when they get too close to each other, it literally is a dislocation traffic jam. 
I mean, if you try and hit the car in front of you, the repulsion of the electrons between your and their bumper will prevent the cars from getting a certain distance closer to each other. Same kind of thing here. Moving on to grain boundaries, a two-dimensional defect. Any time you have a perfect crystal of atoms that meets another perfect crystal at a different orientation, or where the atoms are arranged in a different direction, you end up with a boundary between them that we refer to as a grain boundary. So you can actually see, this is a direct physical image of atoms of two different crystals meeting at the grain boundary. Again, taken in the transmission electron microscope. So for those who didn't know, yes, we can see individual atoms and the defects between them. I definitely didn't know that in high school. They didn't even mention that whatsoever. Did you guys ever see images like this? Anyone? Yes? Raise your hand. Just one, OK. So yeah. It's important for you guys to know that we can have direct evidence for all this blackboard stuff because you can see atoms in the transmission electron microscope and see what happens when the two of them meet. You see this kind of regular structure of empty space where this grain boundary meets, right? You can actually model it as a line of 1-D dislocations, because if you take a line of 1-D lines, you end up with a 2-D boundary, which you can see very clearly here. It's almost like there's an extra half plane right there. Another one there, another one there, and another one there. And we call that a tilt grain boundary. Grain boundaries are nice in that they can accommodate lots of these little zero-dimensional defects, moving to them without getting destroyed. So grain boundaries are one of those ways that radiation damage can be removed.
And that's one of the reasons why most small-grain materials are really-- nano-grain materials are more resistant to radiation damage than large-grain ones because they act as what's called sinks or destroyers of radiation damage. There's another kind of 2-D defect called a twin, where you can actually get a little chunk of atoms sort of switch orientation. And you can see these very clearly in, again, TEM micrographs, and the evidence actually that the twin actually is a different physical arrangement of atoms, even though you can't see the atoms in this little band right there. Look at the way the dislocations line up. Those dislocations tend to line up in energetically-favorable directions, and in this grain, they're all this way, and in the twin, they're all lined up like that. And then finally, there's the most intuitive defect, inclusions. A 3-D piece of some other material inside what would otherwise be a pure material. This one, I actually pulled out of the rotor that powers the Alcator fusion reactor. I was asked to do some analysis to find out, is the structure of that rotor changing, because General Electric who was insuring this rotor said, we don't want to insure it anymore. Thanks for the premiums, but we're not insuring it anymore. And we said, why? And they said, oh, it's structurally unsound. So we said, oh yeah? We'll be back in a year and we'll talk about it. And we did a lot of this work to find out that, actually, the structure hadn't really changed since 1954 when it was made. But what we did also see is we could pop out little precipitates of manganese sulfide. So there's always sulfur in iron, and sulfur tends to be a bad actor when it comes to material properties. You throw manganese into iron to scoop up that sulfur in the form of these little precipitates or inclusions, which we were able to see perfectly when we did an x-ray map, just like the one we did after the first exam. 
It's like we were looking at Chris' copper silver alloy, mapping out where is the copper and silver. I made this image the same way, mapping out, where is there iron, manganese and sulfur. That's how you can tell what it is. And so dislocations and defects can actually interact. Let's say this is the interaction of a 1-D defect, a dislocation, with a 3-D defect, a void. If you have a material that's deforming plastically, very smoothly, and isn't going to undergo fracture, you want the dislocations to be able to move. If you put anything in their way, they tend to get stuck. It's not easy for that dislocation to shear through a whole bunch of extra atoms. And in some cases, you can stop that motion and favor fracture over slip. So any time you make slip harder, it means that you're making fracture more likely. I didn't say you're making it easier, but you're making it more likely. And you would prefer for materials to deform a little bit by a slip than just break by fracture. So I think now is a good point to go over a few key material properties. All of these are sometimes used to describe the same thing in colloquial speech. That is wrong. Has anyone here thought that, let's say, stiffness or toughness or strength meant the same thing? No. OK. A few people. It's OK. Because it's used wrong all the time in colloquial speech. These actually refer to different material properties with different units. And we're going to go into a little bit about what they are and then show you a few videos to test your intuition about the differences between them. So first, I want to mention what you're seeing right here. It's called a stress-strain curve. Stress is simple. Stress is just a force divided by an area. And usually, the criterion for will a material deform or will it break is does it reach a certain stress. It doesn't matter just how much force you put on it, but it's like, how much force per atom or how much force per area determines whether bonds are going to break. 
And so on the y-axis is stress. Let's say the amount of force per area we're putting in. And strain is the amount of deformation. So that's stress. And strain is, let's say, the change in length over the original length of some material in what's called the engineering or simplified notation. And so something that is stiff means you can put a lot of force into it but it won't deform very much. That's kind of the easiest property to understand. Something that's very stiff will have what's called a high Young's modulus, or a high slope right here. Something that's super stiff, like a ceramic, you could really push on it quite a bit, but you won't get it to deform like you would this metal. So the opposite of stiff, I would call compliant. Not soft. This is one of those tricky things right there. Something that's stiff, you try and flex it and it won't flex. Something that's compliant, you put a little bit of force into it and it undergoes some amount of strain. And that slope right there between the stress and the strain, we call the Young's modulus. We also note that this part right here is what's called the elastic region of deformation. By elastic, we mean reversible, or it snaps right back. So right here, when I bend this bar and it snaps right back, that's called elastic deformation. And it's reversible, because you can bend one way and it snaps right back. If I bent it more, which I don't want to do because this is a nice zirconium fuel cladding rod, you would deform it irreversibly. You'd bend it permanently. And it would undergo what's called plastic deformation, where you deviate from the slope, and then a little bit more stress can cause a lot more deformation. Have any of you guys ever tried pulling copper wire apart before? That's something I'd recommend you try, for thin wire so you don't cut your hands. What you may notice is that it's awfully hard to get the copper deforming in the first place, but as soon as it starts to stretch, it gets really easy.
So this is something I recommend. Go to the electronics shop or wherever and try it out on some really thin copper wire. If it's thick, you'll slice through your fingers and you don't want to do that. Strength, however, that's a different metric. Whereas stiffness describes the slope here, strength describes the height, or the stress at which you start to plastically deform. They're in different units. Stiffness is in stress over strain, whereas strength is given as a stress. So when you hear things like the yield stress or the ultimate tensile strength, that's referring to how strong something is, which may have nothing to do with how stiff it is. Toughness is another property. Toughness is actually kind of like the area under this curve, because if you do a force and apply it over a distance, that's like putting work into the material and it ends up being a unit of energy. So toughness will tell you how much energy you have to put into something before creating a new free surface, otherwise known as fracture. And ductility is how much can you deform it before it breaks. So it would be like this point right here on the strain axis. So I'll give a little bit more examples of what this is all about. Toughness, again, is actually measured as an energy required to form a free surface, or propagate a crack, let's say. Whereas something that's ductile, it doesn't necessarily mean that it's tough. Like, if you have a piece of chewed chewing gum, you can stretch it quite a lot with very little energy. And then you can say it's extremely ductile but not very strong. A piece of copper wire, you can also stretch it an extremely far distance, but it takes more energy to do so. So that's both ductile and strong. And then if you apply that force over a certain distance, stretching out the wire, you can also reveal some of its toughness and how much energy it takes to stretch that wire before it breaks. 
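The four curve-based properties described here (stiffness as the elastic slope, strength as the yield stress, ductility as the strain to failure, toughness as the area under the stress-strain curve) can be pulled out of tabulated stress-strain data numerically. This is an illustrative sketch with made-up data points, not measurements from the lecture.

```python
# Extracting the four curve-based properties from stress-strain data:
# stiffness (elastic slope), strength (yield stress), ductility
# (strain to failure), toughness (area under the curve).
# The data points below are hypothetical, for illustration only.

strains = [0.0, 0.001, 0.002, 0.10, 0.30]          # dimensionless
stresses = [0.0, 200e6, 400e6, 450e6, 500e6]       # Pa

# Stiffness: slope of the initial (elastic) portion, E = sigma / epsilon.
youngs_modulus = stresses[1] / strains[1]          # 2e11 Pa, steel-like

# Strength: stress where the curve leaves the elastic line
# (here, taken as the last point still on that line).
yield_strength = stresses[2]                       # 400 MPa

# Ductility: strain at the final (fracture) point.
ductility = strains[-1]                            # 0.30

# Toughness: trapezoidal area under the curve, in J/m^3
# (stress times strain is energy per unit volume).
toughness = sum(0.5 * (stresses[i] + stresses[i - 1])
                * (strains[i] - strains[i - 1])
                for i in range(1, len(strains)))

print(youngs_modulus, yield_strength, ductility, toughness)
```

A chewing-gum-like material would show a long strain axis but a tiny area (ductile, not tough), while the copper wire from the demonstration would show both.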
Hardness is the last material property I want to mention, which is not any of the ones that I showed on the stress-strain curve. Hardness is the resistance to a little bit of plastic deformation. So assuming that you're already here, how much more energy do you have to put in to get the material to deform plastically? So very different material properties. I'll try and mention what they all are. So if we have a stress-strain curve like so, and it follows the elastic region and then deforms plastically, this point here is what we call the yield strength. Whatever that point on the stress axis is. This point right here, our strain to failure, we can use as a measure of ductility. This slope right here refers to the stiffness. And finally, this energy right here is something like the toughness. And the hardness isn't quite on this plot. So I want to see if you guys intuitively understand this, because the next lecture, I'm going to be throwing around the words like stiffness, toughness, ductility, hardness, compliance, hard, soft, whatever, and I want to make sure that you just at least intuitively understand. There's a few videos you may have seen before. Anyone here watch the hydraulic press channel? There we go. Finally, something that half the class does. We're going to predict what's going to happen in each of these cases based on these material properties. So in this case, this is a pressurized cylinder of CO2. It's made of aluminum, which is a very ductile material. It's also a very tough material. How do you think it will deform when smashed? Anyone ever tried this? Squishing aluminum stuff. What happens? AUDIENCE: You compress it. PROFESSOR: You compress it. And then what happens? AUDIENCE: Fracture? PROFESSOR: Will it fracture? AUDIENCE: After a while. PROFESSOR: After a while, OK. If you put a lot of energy into it, eventually, when you reach this strain to failure, it should fracture.
But in your personal hands-on experience, does aluminum tend to fracture when you bend it a little bit? AUDIENCE: No. PROFESSOR: So then what words would you use to describe it? Based on this curve right here. Yep? AUDIENCE: Ductile. PROFESSOR: Ductile. I would say ductile and not brittle because you can bend it quite a bit or stretch it quite a bit before it fractures. How about stiffness? Is it really hard or really easy to get aluminum bending? AUDIENCE: It's pretty easy. PROFESSOR: It's fairly easy. So would you call that stiff or compliant? AUDIENCE: Compliant. PROFESSOR: Compliant. OK. What about strength? How hard is it to start deforming aluminum irreversibly, compared to something like steel? AUDIENCE: Not very. PROFESSOR: Not very. Especially pure aluminum. You can chew through it. If you guys ever got a one yen coin from Japan, you can chew through it. Not very strong. Then again, your bite force is also incredibly strong. But anyway, let's see what actually happens when you compress a rather ductile, compliant, and not that strong aluminum canister. Is it actually going? Oh, it actually skipped ahead. That's what I wanted, was their sound. It was also pressurized with CO2. But notice what's left. So actually watch in slow-mo. Look how much you can compress that, even after the explosion. No fracture. If you had done that with, let's say, a glass canister, what do you guys think would have happened? AUDIENCE: It would have shattered. PROFESSOR: It would have shattered. Yeah, we'll see that in a bit with a material that may surprise you. AUDIENCE: So it basically doesn't fracture, right? PROFESSOR: It will fracture eventually, but the hydraulic press can't get it that far in compression. So that would be something that's extremely ductile, not that strong-- so it wasn't that hard to deform. Certainly we know it wasn't stronger than the steel base plate that they used to do the smashing. Because whatever's the softer material is going to deform more. 
So here he's going to have-- well I'll let him describe it, and then I'll let you guess what's going to happen. What do you guys think is going to happen? You've got what looks like brass and copper coins on a steel base plate. Anyone have any idea? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. Everyone's making this motion, which means everything's going to flatten out, right? Let's find out. Not nearly as much as you might have expected. Is anyone surprised by this? What happened there? What actually happened there was already described up here. When you get enough dislocations piling up against each other during plastic deformation, you can undergo a process called work hardening. That process can be physically described by a lot of those dislocations piling up and making it more and more difficult to continue that deformation. So what happened here is the brass and the copper, which started out quite soft, not that hard, quite ductile, as you can see, and not that strong actually got stronger as they were deformed. Interesting, huh? Did anyone expect this to happen? OK. Let's go to one that I think everyone can guess what's going to happen, a lead ball. So has anyone ever tried playing with lead before? Hopefully not. I have quite a-- OK good, I'm not alone. How would you describe lead in terms of the material properties here? AUDIENCE: [INAUDIBLE] PROFESSOR: Yep. It's not very stiff. It doesn't take much energy to start deforming it. How else? Was it hard or soft? AUDIENCE: Soft. PROFESSOR: OK. Do you think it's ductile or brittle? Yeah? AUDIENCE: It's brittle. PROFESSOR: You think it's brittle. So by that, you mean it's just going to break apart, right? If you deform it? OK, cool. And would you say it is tough or not tough? Not a lot of folks have hands-on experience with lead. It's probably good for your brains. Let's find out. Lead pancake. So what words would you use to describe what just happened? AUDIENCE: It's ductile. PROFESSOR: Ductile indeed. 
I don't know what sort of brittle lead-- was it an alloy that you had been playing with, maybe? AUDIENCE: It was like a little sheet. It was just easy to snap. PROFESSOR: Aha, OK. So it was a sheet of lead that was easy to snap. So I would not call lead as a very tough material because you didn't have to put a lot of energy into it, but did it deform quite a bit before you snapped it or did it just crumble apart? AUDIENCE: Oh, it deformed. PROFESSOR: OK. So in that case, I would call it ductile because it deformed a lot before breaking, but I would not call it tough because it took very little energy to get it to that breaking point. And it wasn't that stiff because it was quite easy to get it-- let's say it's the amount of stress you put in versus the strain. It could be quite low. And it would not be very strong because it didn't take a lot of energy or stress to get it moving. Let's look at another ball. In this case, a steel ball bearing. What do you guys think is going to happen here? AUDIENCE: It's going to shatter. PROFESSOR: It's going to shatter. So you're guessing that the steel is brittle, right? What else? AUDIENCE: Probably pretty stiff and strong. PROFESSOR: Probably quite stiff and strong, yeah. I think so, too, but I don't think the guy that did this expected that. [INTERPOSING VOICES] PROFESSOR: Yeah. Did that surprise anybody? Yeah. Quite a surprise, right? So in this case, materials like hardened steel aren't necessarily that brittle. In fact, you wouldn't want a ball bearing to be brittle. If you get some small chip in it or a little bit of grit or sand in the bearings, you would shatter the ball bearing and cause instantaneous failure of the rotating component. So what you actually want out of a high-strength ball bearing is something that's extremely hard. Resists deformation so it doesn't undergo, let's say, change of shape that would prevent it from rolling without friction or with very little friction. 
You want it to be quite stiff because you don't want the load of whatever you're loading onto it to deform it, but you also don't want it to be brittle. So it's got to be somewhat tough and ductile to prevent sudden failure. You'd rather it compress a tiny bit than just cracking in half. So you can make things like ceramic ball bearings, which are very brittle, very stiff, not that tough, but also very strong, and you just have to make sure that whatever part you make is not going to reach any sort of yield strength criterion or crack or anything. Now the last one that's probably the most surprising. They bought a $4,000 diamond. It's a diamond like that big. What do you know about diamonds as a material in terms of these properties? AUDIENCE: They're hard. PROFESSOR: Yep, both is right. They're extremely stiff. It's the hardest material that we know of, almost. We've made slightly harder ones artificially. It's the hardest natural material we know of. What else? Do you know whether they're strong or tough? AUDIENCE: They're not tough. PROFESSOR: They're not tough. Why do you say that? AUDIENCE: Because it will shatter. PROFESSOR: Have you seen the video? AUDIENCE: [INAUDIBLE] PROFESSOR: Oh wow. OK. What else do we have? Yeah. So you're saying it's not tough. AUDIENCE: You can cut diamonds, right? PROFESSOR: You can cut diamonds with other diamonds. So the cutting action usually depends on the relative hardness of the material. So if you want to polish or cut something abrasively, you need to use a harder material because then the grit itself won't wear away before the material it's trying to cut. But what's going to happen here is we're going to put a diamond and try compressing it, and we'll see what its stress-strain curve looks like. So votes on what's going to happen. Who says, like Monica, it's going to shatter? Who thinks it's going to break the tools? Who thinks it's going to deform plastically? Yeah, I've never seen a diamond deform plastically. 
AUDIENCE: They still have pretty big chunks, though. PROFESSOR: Oh yeah, they could probably still sell those. Absolutely no deformation. It just rotates and explodes. Yeah. This would be a material that we would say has almost zero ductility. Despite being extremely hard, I don't know if there would even have been enough deformation to have a slight dent in the tool itself. There's probably a little hole where the point of the diamond poked in, but once there was enough stress on that diamond, its stress-strain curve would look something like that. Maybe like that. Yeah. So it's important that you intuitively understand the differences between strength, ductility, hardness, toughness, and stiffness, because then next class, we can explain how radiation changes them. So any questions on the materials and properties from today? Yeah? AUDIENCE: Can you clarify why something is, for example, ductile versus brittle? PROFESSOR: Mhm. So the reason something would be ductile versus brittle is whether or not you can plastically deform it, and that means whether or not it's more energetically favorable for dislocations to keep moving versus just breaking a plane of atoms in any irregular direction and causing fracture. So again, ductility versus embrittlement is the interplay between slip and fracture. Slip is normally done by dislocation movement. Any defects created by anything, especially radiation damage, will make slip harder so that any continued energy you put in will not move dislocations but move towards fracture. If there's no other questions, we'll look at the stress-strain curves of some other familiar materials. It is 10:00, in case you guys have to go to other classes. AUDIENCE: Are you taking any nuclear activation stuff today? PROFESSOR: Yes. If you guys have things for a nuclear activation analysis, hand it in. You guys bring stuff in? We're running out of opportunities to do this. All right.
In that case, the entry fee for the quiz will be your nuclear activation analysis sample. |
MIT 22.01 Introduction to Nuclear Engineering and Ionizing Radiation, Fall 2016. Lecture 12: Numerical Examples of Activity, Half-Life, and Series Decay.
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: You guys asked to do some numerical examples of the stuff that we've been learning today, and I've got a fun one for you. A very real one, because it happens all the time in reactors all over the world. So to set the stage for this, hey, suppose I had-- just suppose, I had a radioactive cobalt-60 source, that was calibrated in March, 2011 to be approximately one microcurie. And it's now October 2016. So let's start off with the easy part. Let's say I posed the question, how active was this source actually when it was made? Because it just says-- what does it say-- 1 microcurie. Supposedly, this was 1 microcurie on March, 2011. What does that mean the actual activity could be? There's not a lot of confidence in that number. Remember significant figures from high school chemistry and physics. This is when they're important, because you got to know what you're buying. So what is the actual activity of this source when it was calibrated? What is our uncertainty on this? Anyone remember this from sig figs? I hope so. If not, I'll refresh your memory. It's plus or minus the next decimal point. So really, this was 1 plus or minus 0.5 microcuries. So you might have a 1/2 microcurie source. You might have a 1 and 1/2 microcurie source. They specifically decided to leave out the second decimal point so they're not liable for a source that's out of that calibration level. So we supposedly have a 1 microcurie source. And I've measured it on-- let's say it's now-- October, 2016.
Suppose I went and made a measurement, and it was 0.52 microcuries. And we want to know, how active was this source actually when we got it? Where would I begin? AUDIENCE: Look up the half-life. MICHAEL SHORT: OK, yeah, that's why I've got the table of nuclides right here. So let's look up the half-life of cobalt-60. And there it is right there, 1,925 days. So we know that the half-life for cobalt-60 is 1,925.4 days, which equals-- and I pre-did the math out so I wouldn't spend lots of time on the calculator. Actually, how many seconds is that? 1.66 times 10 to the eighth seconds. And we also remember that activity equation, where the activity as a function of time equals the original activity times e to the minus lambda t. The only thing missing here is the initial activity and lambda, so how do we find lambda? Anyone remember that expression? AUDIENCE: Log of 2 over half-life. MICHAEL SHORT: Yep. And we know that lambda equals log of 2 over the half-life, which in this case is about 0.693 over 1.66 times 10 to the eighth, which equals 4.17 times 10 to the minus 9 per second. And so now it's pretty easy. We know what lambda is. We know what our current A is. So we can say that our original activity is simply our current activity divided by e to the minus lambda t. What's t? Well, I made an approximation here. It's been about five years and seven months. I assumed that a month has 30 days, on average, which comes out to t-- I remember calculating this already-- 1.74 times 10 to the eighth seconds. So we plug in that t right here, plug in that lambda right there, and we get our initial activity-- what did I get?-- was 1.07 microcuries. Now hopefully, this result is fairly intuitive. Because the half-life of cobalt-60 is 1,925 days, which is about 5 and 1/4 years. And it's just over 5 and 1/4 years since this source was calibrated. And we have just under half of the original activity. So hopefully that's an intuitive numerical example.
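The board calculation above can be reproduced in a few lines. This sketch follows the same steps: seconds in the half-life, decay constant from log 2 over the half-life, elapsed time with the 30-day-month approximation, then dividing out the exponential.

```python
# Back out the calibration activity of the Co-60 source from the
# measured activity today, following the steps from the board.
import math

half_life_days = 1925.4                  # Co-60 half-life, table of nuclides
t_half_s = half_life_days * 86400        # about 1.66e8 seconds
lam = math.log(2) / t_half_s             # decay constant, about 4.17e-9 /s

# Five years and seven months, approximating every month as 30 days.
t_elapsed_s = (5 * 12 + 7) * 30 * 86400  # about 1.74e8 seconds

A_now = 0.52                             # measured activity, microcuries
A0 = A_now / math.exp(-lam * t_elapsed_s)
print(A0)                                # about 1.07 microcuries
```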
Now, let's say, how many atoms of cobalt-60 did we have? Or better yet, what was the mass of cobalt-60? Where would I go for that? I'll give you a hint. It's on the screen. AUDIENCE: Are you saying the mass that we have in the pellet? MICHAEL SHORT: Yeah, so for this plastic check source right here, what's the actual mass of cobalt-60 that we put in in the beginning, again, supposing I had one of these right in front of you? AUDIENCE: Look at the binding energy. MICHAEL SHORT: You could look at binding energy, which is a form of mass. Or luckily, we've got the atomic mass right up there in AMU. So again, this is a quick review of high school chemistry. We need to find out how many atoms we have in this pellet. And once we know the number of atoms-- and we have the molar mass up there, in either AMU per atom or, same thing, grams per mole, pretty much-- we'll know what the mass of cobalt-60 was. So one important conversion factor to note is that 1 curie of radiation is 3.7 times 10 to the 10 becquerel. And remember that 1 becquerel is 1 disintegration per second. So if we know the initial activity of our material, using our decay constant, we should know the number of atoms that we had right there. So we know our initial activity, A0 is 1.07 microcuries. And anyone remember, what's the relation between the activity and the number of atoms present? Just yell it out. I hope you'd know that by now. Does this look familiar to anyone? Where the activity is directly proportional to the number of atoms there, times its decay constant. So since we now know A0, we can find N0, because now we know lambda as well. So the number of atoms we had at the beginning is just our activity over lambda. And our activity is 1.07 microcuries. Let's convert up to curies. So we know that there is 1 curie in 10 to the 6 microcuries.
And we'll use that conversion factor right there, times 3.7, times 10 to the 10 becquerel per 1 curie, divided by our decay constant, 4.17 times 10 to the minus 9 per second. Let's check our units to make sure everything comes out. I brought a canceling color. We have microcuries on the top, microcuries on the bottom, curies on the top, curies on the bottom. And we have becquerels, which is disintegrations per second. And we have a per second down on the bottom. So the per second goes away and the becquerels just becomes atoms. And so we actually get, N0 is-- I think it's something times 10 to the 12th. Yeah, 9.5 times 10 to the 12th atoms. That's the way it looks. So the final step, how do we go from number of atoms to mass? Can anyone tell me? AUDIENCE: Convert it to Avogadro's number and moles. MICHAEL SHORT: Yep. So the last thing we'll do is we'll say we have 9.5 times 10 to the 12th atoms, times Avogadro's number, which is 1 mole in every 6 times 10 to the 23rd atoms. Then we go to the table of nuclides to get its atomic mass. This is one of those situations where you don't need to take the eighth decimal place, because you're not converting from mass to energy. You're just getting mass. So if you might wonder, where did all the decimal points go, think of what type of calculation we're doing. If we started off with a one significant digit number, do we really care about keeping the eighth decimal place in the rest of everything? No, definitely not. We're not turning AMU into MEV, in which case the sixth decimal point could put you off by half an MEV, or like the rest mass of the electron. We're just getting masses here. So let's just call that 59.9-- that's enough for me-- 59.9 grams in 1 mole of cobalt-60. And just to confirm, we have atoms here, atoms there, moles there, moles there. We should get a mass in grams. And this came out to 0.95 nanograms of cobalt-60. Not a lot of cobalt-60, but it can pack quite a wallop in terms of activity.
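The atom-count and mass steps chain directly onto the activity result. A sketch, reusing the 1.07 microcurie figure from above; Avogadro's number and the 59.9 g/mol atomic mass are the only new inputs:

```python
import math

lam = math.log(2) / (1925.4 * 86400)   # Co-60 decay constant, ~4.17e-9 /s
A0_uCi = 1.07                          # initial activity found above

# 1 Ci = 3.7e10 Bq and 1e6 uCi = 1 Ci, so convert to disintegrations/second
A0_Bq = A0_uCi * 1e-6 * 3.7e10         # ~4.0e4 Bq

# A = lambda * N  =>  N0 = A0 / lambda
N0 = A0_Bq / lam                       # ~9.5e12 atoms

# Mass from Avogadro's number and the ~59.9 g/mol atomic mass of Co-60
mass_ng = N0 / 6.022e23 * 59.9 * 1e9
print(N0, mass_ng)                     # ~9.5e12 atoms, ~0.95 nanograms
```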
So even though this has a pretty long half-life, as far as isotopes go, it takes very little of it to have quite a bit of activity, certainly enough for our fairly inefficient handmade Geiger counter to measure what's going on. Pretty neat, huh? Cool. Now, let's say I ask you another question, a simpler question. How many disintegrations per second are coming out of that cobalt-60 source? Let's say we wanted to find the efficiency of our Geiger counter. You'd have to know how many counts you measure. You'd have to know how far away your source is. And you'd have to know how many disintegrations per second are happening. So how would I get to the number of disintegrations per second? AUDIENCE: Convert it to becquerels. MICHAEL SHORT: That's right. We'll just take our current activity, which is 0.52 microcuries. And so we'll say, 0.52 times 10 to the minus 6 curies, times 3.7, times 10 to the 10 becquerel in 1 curie is-- well, we have curies here. We have curies there. And this comes out to-- what was our current activity. It was in the tens of thousands. About 19,000 becquerels. So we know that, right now, this source is giving off about 19,000 disintegrations per second. Is that how many gamma rays it's giving off? Do we know that yet? AUDIENCE: No. MICHAEL SHORT: Why not? I heard a lot of no's. What other information do we need to know? AUDIENCE: Type of radiation. MICHAEL SHORT: Sure. Luckily we've got that right here. So if you look here, the mode of decay is beta decay to nickel-60. So cobalt-60 is actually primarily a beta source. However, it's used for its characteristic gamma rays. So let's take a quick look. If we look at its decay diagram, it's pretty simple. And let's just say, somewhere near 100% of the time, it beta decays up to this energy level and can undergo any number of transitions like this. The only two we really tend to see are, like, that one and that one-- the most likely ones.
So on average, each disintegration of a cobalt-60 atom is going to produce two highly energetic gamma rays. So actually, what you'd have to know for this source is, despite it being 19,000 becquerels of cobalt-60, it's giving off 38,000 gamma rays per second. So that way your source calculations wouldn't be off by a factor of 2. Then if you know the distance between your source and your detector, and you know how many of those gamma rays are going through the detector itself-- which we can calculate with a solid angle formula, which I'll give you a little later-- you'll know how many of them should interact in here, once you learn photon nuclear interactions. And you'll know how many of them actually get captured. And that's how you can get the efficiency of the detector. So we've kind of filled in half the puzzle. You now know, for a fixed source, how many atoms there are, how many disintegrations there are and how many gamma rays you expect it to give off. And then later in the course, we'll tell you how to figure out how many of them make it into the detector and how many of those should interact in the detector. Is everyone clear on what we talked about here? Yep. AUDIENCE: Where did you get the number for 38,000 gamma rays? MICHAEL SHORT: Double the number of cobalt disintegrations. AUDIENCE: Because we assume it drops 2? MICHAEL SHORT: Yeah. So I'm ignoring the 0.2% and 0.6% decays because they're extremely unlikely. And having looked this up ahead of time, I know that this transition and that transition are by far the most likely ones. And if you don't know that-- AUDIENCE: Yeah, does it say it down there? MICHAEL SHORT: It sure does. Those right there, the 99.9-something intensity ones, it's all there. So I had some questions on Piazza. What happens when we can't read the pixelated decay diagram? That's just there for fun. The actual table that will tell you all the information you need to know is right below.
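The source-strength bookkeeping here is short enough to check in two lines. A sketch of the current disintegration and photon rates, taking the factor of 2 from the two dominant gamma lines (the 1.17 and 1.33 MeV transitions):

```python
# Convert today's activity to disintegrations and photons per second.
A_now_uCi = 0.52
bq = A_now_uCi * 1e-6 * 3.7e10     # ~19,240 disintegrations per second

# Nearly every Co-60 decay feeds the cascade that emits both the 1.17 and
# 1.33 MeV gamma rays, so the photon rate is about twice the decay rate.
gammas_per_s = 2 * bq
print(bq, gammas_per_s)            # ~19,000 Bq, ~38,000 gammas per second
```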
And I just decided, all right, forget the ones that are 2 times 10 to the minus 6 likely or 0.007, whatever, too many 0's. So you can say that, on average, to probably two or three significant digits, that gamma ray and that gamma ray are the only ones you tend to see. Is everyone clear on how I made that determination? Cool. Yes. AUDIENCE: So are disintegrations just a beta, and then for each one of those, it's two gamma rays? MICHAEL SHORT: That's right. The number of disintegrations is the number of atoms that leave this position. So let's take a crazier example. I've already had you look at americium-241, to sort of bring home the message that the number of americium disintegrations does not necessarily equal the number of gamma rays that you will expect to see. Because any one of these is possible. There is an 84% likely one. It looks like it goes to the third level from the bottom, because it's just the third number I'm seeing here. If you don't believe that, then check how much you have to zoom out to see what's going on. Let's see. Intensity, yep. The third most energetic alpha ray, so the third from the bottom, is indeed the 84% likely one. Then there's another one that's 13% likely, so we can't discount that either. So you're going to hit anything from, like, the second to the fourth energy level and any number of those gamma cascades that come off there. So the intensity of your source in becquerels or curies does not immediately tell you how many gamma rays or other disintegration products you should expect. There are some pretty simple ones, like the dysprosium one that we saw. Was it 151? I've been here before. It's not purple. No, that's a complicated one. All right, I don't remember which dysprosium isotope just had a single decay. But the only time that you would get one particle per disintegration is if your decay diagram looked like this. That's your parent and that's your daughter. That's the only time you can expect one thing.
If you have a more complex decay diagram than that, you'll be looking at more than one quantum of radiation of some form per disintegration. So is that unclear to anybody? Cool. Now let's get into a more fun problem. And I'll leave this up here, since we're going to refer to it a fair bit. So this is a good example of activity, half-life, mass, number of atom calculations. We're going to use this information to answer a more interesting question. How was this made? Anyone have any idea what sort of intentional nuclear reaction could have produced cobalt-60? And I'll go back to cobalt-60 so we can get its proton number. There it is. Can you say it a little louder? AUDIENCE: [INAUDIBLE] Oh, I said neutron bombardment of cobalt-59. MICHAEL SHORT: Sure, you can have a neutron bombardment. So you can have, cobalt-59 absorbs a neutron, becomes cobalt-60 with some half-life. And then it'll decay with some decay constant lambda, and become-- in this case, it undergoes beta decay-- to nickel-60. Let me make sure I got the proton number right. I'm going from memory here. Hooray. OK, how do we set up the series radioactive decay and production equations to describe this phenomenon? So we have a physical picture of what's going on here. How do we construct the differential equation model? The same way we've been doing it Friday and Tuesday. So someone, kick me off. What do we start with? AUDIENCE: So the production of cobalt-59 is 0. Destruction would be sigma times flux times the amount of cobalt. MICHAEL SHORT: OK, let's make a couple of designations. Let's call cobalt-59-- I'll use another color for this, just so it's clear. We'll call cobalt-59 N1. We'll call cobalt-60 N2. We'll call nickel-60 N3, just to keep our notation straight. And then continuing our notation pile here, you mentioned that there's going to be some sigma, some microscopic cross-section times some flux, the number of neutrons whizzing about the reactor, times the amount of cobalt-59.
So let's stick our dN1 dt equals no production minus some destruction. Does this look eerily familiar to any of you? Forget the fact that it's a sigma and a flux. Just treat those as constants. What does this look exactly like? AUDIENCE: The minus lambda N. MICHAEL SHORT: Exactly. It looks just like dN1 dt equals minus lambda N1. Yeah, that's exactly it. This, in effect, is like your artificially-induced decay. So the probability that one atom of N1 absorbs a neutron, times the number of neutrons that are there, gives you some rate at which these atoms are destroyed. Just like a lambda gives you the rate at which those atoms naturally self-destruct. So you can think of the sigma times flux like a lambda. Mathematically, they're treated identically. The only difference is, we're imposing this neutron flux. So it's like an artificial lambda, which means solving the equations and setting them up is exactly the same. So how about N2? What's the production and destruction rate of cobalt-60? First of all, someone yell it out, what's the creation rate? AUDIENCE: Lambda N1. MICHAEL SHORT: Which lambda? AUDIENCE: Oh, lambda 1 I mean. MICHAEL SHORT: So what is lambda 1, this one? AUDIENCE: Sigma, yeah. MICHAEL SHORT: OK, so let's keep with these notations. I'm going to say this is like a lambda artificial, but I'm going to keep it as sigma, flux, N1. And just so we keep our notation straight, I want to be able to cleanly separate the natural and the artificial production and destruction. And what's the destruction rate? AUDIENCE: Sigma phi N2. MICHAEL SHORT: So you said sigma phi N2. Is that the physical picture we have up here? Not quite, because like we can see here, cobalt-60 self-destructs on its own, right? Yeah. However, that actually is a correct term to put in. The other one that we're missing would be minus lambda N2. So you know what? Let's escalate this a bit into reality and say, we're going to do this. The actual equation is not going to be that much harder.
So thanks, Sean. Let's do this. Yeah. AUDIENCE: Would those sigmas be different? MICHAEL SHORT: Yes, good question. That's what I was getting to. So there is an absorption cross-section for every reaction for every isotope. And now I'd like to show you guys where to find them. On the 22.01 site, there is a link to what's called the Janis Database, which is tabulated and plottable cross-sections of all kinds. So every single database that we know of, it's updated continuously by the OECD, so you can trust that the data is updated. I won't say accurate, because some of these cross-sections are not very well known. So I've already started it up. In this case, we don't have neutron capture data for cobalt-60. Let's keep it in the symbols for now. And then later on, we're just going to say, it's probably 0. But let's keep it together. So for cobalt-59, double-click on that, and you are presented with an enormous host of possible reactions. Like right here, you have N comma total, which is the total cross-section for all interactions of neutrons with cobalt-59. And you can see that this varies, but it has the same sort of shape that I was haphazardly drawing. It's low. There are some resonances. And then it continuously increases at lower energies. Is this the right cross-section to use, if it accounts for every possible interaction of a neutron with cobalt-59? No. So what other reactions, besides the one that we want, which is capture, could this account for? AUDIENCE: Scattering. MICHAEL SHORT: Yep. AUDIENCE: Fission. MICHAEL SHORT: Scattering, fission, n2n production. Sometimes one neutron goes in and two come out, like for beryllium, like we talked about at the beginning of class. So you've got to know to choose the right cross-section. And in nuclear reaction parlance-- that's shorthand right there-- that's N comma total.
That accounts for elastic scattering, inelastic scattering to any energy level, capture, fission, n2n reactions, sometimes proton release, sometimes exploding, whatever nuclei do. So let's look at our other choices. I'll shrink that. We have the elastic cross-section. Compare that with the total, and you get a general idea of how much the total and elastic cross-sections actually overlap. So a lot of those resonances are elastic cross-section resonances, but there are other reactions that are responsible for a lot of the other craziness going on. So let's unselect that. Oh, I'm sorry. I meant MT 1. Let's compare MT 1 and MT 2. So MT 2, elastic. There we go. OK, that's what I was hoping for. So right now, the red one is the total cross-section, and the green one is the elastic cross-section. And you can see that, at high energies, the total cross-section is mostly the elastic cross-section. But at low energies, especially right around here at the thermal energy of neutrons in a reactor, there's something else responsible. That's probably what we're going after. So let's keep looking through this Janis Database. Hey, there's the n2n reaction, if you guys want to see how likely this is. It's another one of those reactions that-- look at that-- it's 0 until you get to 11 MEV. So what do you guys think the q-value for n2n production of cobalt-59 is? AUDIENCE: Very negative. MICHAEL SHORT: How negative? It's on the graph. AUDIENCE: 10. MICHAEL SHORT: Yeah, negative looks like 10, or maybe 10 and 1/2, MEV. AUDIENCE: 0.454. MICHAEL SHORT: Oh, hey, awesome. [LAUGHTER] AUDIENCE: I think it moves it around. MICHAEL SHORT: So there you go. Yeah, indeed, it's very energetically unlikely to fire in one neutron and get two. But if you have a 10.454 MEV neutron, you can make it happen. Pretty cool, huh? That's not the reaction we're going for. What we want-- let's see if I can find it. Proton plus neutron, neutron plus deuteron, all the inelastic energy levels. There it is, capture.
The gamma reaction here is what's referred to as capture. And there we go, a nice normal-looking cross-section. So for cobalt-59, if we go down to about 0.025 EV here, read it off, it's about 20 barns. Because you guys asked for a numerical example. So let's say that our capture cross-section for cobalt-59 is about 20 barns, which is to say 20 times 10 to the minus 24th centimeter squared. Let's also put up our capture one for cobalt-60. So let's go back to our table, take a look at cobalt-60. I think I know what the answer is going to be, which is, we don't know. Not in this database, unfortunately. So for symbolism, let's keep it there. But we're going to say, well, we don't know. Yeah. So let's designate these different cross-sections. Let's call it sigma-59 and sigma-60. So those will be our two cross-sections. We'll just call this one sigma-59, and call this one sigma-60. And we already know the lambda for cobalt-60. So let's say the lambda for cobalt-60, from this stuff up here, 4.17 times 10 to the minus 9 per second. Let's just refer to that as lambda for ease of writing things down. So we've got a complete set of reactions for a dN2 dt. What about dN3? What's the production rate of N3? AUDIENCE: Lambda. MICHAEL SHORT: Yep, lambda N2. Anything else? What about the destruction rate? It's what? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: It's a stable isotope. AUDIENCE: We don't know. MICHAEL SHORT: But you were on to it, Sean. So what would you add, based on what you added to N2? AUDIENCE: Sigma whatever, phi N3. MICHAEL SHORT: Yeah, there's going to be some new sigma-- let's call it sigma nickel-60, phi N3. AUDIENCE: Yeah, is the sigma going to be 0 this time around, or is it actually going to be-- MICHAEL SHORT: Let's find out. Let's go to our tables. There, there's data for nickel-60. Let's look up its absorption cross-section. So we'll scroll down to our Z gamma, our capture cross-section, plot it. Take a look at around 1e minus 8.
And it's like 2 barns-- not negligible. So our capture cross-section for nickel-60-- I keep overwriting myself, and then I remember it's a blackboard. 2 barns, which is 2 times 10 to the minus 24th cm squared. So we can just refer to that as sigma nickel. For the purposes of this problem, we don't particularly care how much nickel-60 we're making. Nickel-60 is a stable isotope of nickel. Eh, forget it. So for the purposes of this problem, forget the N3 equation. We don't care how much stable nickel-60 that we're making, because it's a lot cheaper to get it out of the ground, probably something like 10 orders of magnitude cheaper. Yep. AUDIENCE: Where did you pick that incident energy from, the MEV [INAUDIBLE] 1 times 10/8? MICHAEL SHORT: Yeah, so the incident energy I picked-- because one thing we had gone over is that the thermal energy of a neutron is around 0.025 EV, which is equal to 2.5 times 10 to the minus eighth MEV. So I took that value, about 2.5 times 10 to the minus eighth MEV, so around here, and just went up. And on a log scale, it looks to be closest to 2 barns. You can always get the actual value. So if you want to zoom right in, I'm going to keep zooming into the 10 to the minus eighth region. Maybe not. You can set your bounds accordingly. Oh, 0 or negative values-- ah, OK, whatever. Let's just read off the graph for now. You can use this tool to get the actual value. But for problems like this, I think estimating it from the graph is going to be fine. When you get into 22.05 and you're like, what's the actual flux in the reactor to within 1% or something, estimating from the graph is no longer allowable. There are tabulated values of these things. Oh, yeah, so you can actually set the plot settings, get the tables. Oh, tabulated-- there we go. So you can read off values of the cross-sections from a table like that. But for now, since we want to make sure to get this problem done, let's just stick with the graph. 
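Before solving the system analytically, the three coupled rate equations can be sanity-checked numerically. A forward-Euler sketch, using the cross-sections read off above (20 barns for cobalt-59 capture, 2 barns for nickel-60 capture), assuming a reactor-like flux of 10 to the 14th neutrons per centimeter squared per second, and taking the unavailable cobalt-60 capture cross-section as 0:

```python
import math

phi = 1e14                            # assumed neutron flux, n/cm^2/s
sig59_phi = 20e-24 * phi              # Co-59 capture rate, ~2.0e-9 /s
sig60_phi = 0.0                       # Co-60 capture data unavailable; take 0
sigNi_phi = 2e-24 * phi               # Ni-60 capture rate, ~2.0e-10 /s
lam = math.log(2) / (1925.4 * 86400)  # natural Co-60 decay, ~4.17e-9 /s

N1, N2, N3 = 1.0, 0.0, 0.0            # Co-59, Co-60, Ni-60 (normalized)
dt, steps = 1e5, 2000                 # integrate out to 2e8 seconds

for _ in range(steps):
    dN1 = -sig59_phi * N1
    dN2 = sig59_phi * N1 - (lam + sig60_phi) * N2
    dN3 = lam * N2 - sigNi_phi * N3
    N1 += dN1 * dt
    N2 += dN2 * dt
    N3 += dN3 * dt

# N1 should track the analytic burnup e^{-sigma_59 phi t}
print(N1, math.exp(-sig59_phi * steps * dt))
```

Treating sigma times phi as just another decay constant, as in the lecture, is what makes this a plain series-decay system.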
So we don't care about the N3 equation. We're also going to say, well, let's not ignore sigma-60 yet. So how do we solve this set of equations? Let's make a little separation here. First of all, the easy one-- what's N1 as a function of time? AUDIENCE: e to the minus sigma-59, phi. MICHAEL SHORT: e to the minus sigma-59, phi, and what else? AUDIENCE: N0. MICHAEL SHORT: There is an N1 0, and there's a t. Because as you're burning it out, it matters how much time you have. Doesn't this look eerily similar to an N equals N0 e to the minus lambda t? Again, it's like exponential artificial decay, because we're burning those things out. For a fixed amount of neutrons going in, the amount that we burn is proportional to the amount that is there. So that's the burn rate. That's the amount that's there. This is our artificial lambda. So that equation's easy. What we really want to know is, what is N2 as a function of time? That's the $60 million question. And I'm not exaggerating there, because cobalt-60 is expensive. So the question I'm posing to you guys on the homework-- so for those of you who have started problem set 4, I'm going to be swapping out the noodle scratcher problem for this problem that we're going to begin together in class. And you guys are going to finish on the homework. So I'm kind of giving you help on the homework. What is N2, the amount of cobalt-60 in your reactor, as a function of time? And what is your profit for running the reactor as a function of time? Assuming a few things-- so let's set up some parameters. I'm going to say that the neutron flux is the same as that in the MIT reactor, which is about 10 to the 14th neutrons per centimeter squared per second. We already have all of our lambdas. We have all of our sigmas. I'm going to say that the cost of running the reactor is-- and I have a quote on this-- $1,000 per day, which is about $0.01 per second. Not a bad rate to stick something in the reactor, right? It's not bad at all.
If you had to build your own reactor, your daily operating cost would actually be $1 million a day for a commercial power plant. So every time a plant goes down, you lose $1 million a day, plus the lost electricity or whatever that you have to buy. So let's put that in there. And from the cost of this hypothetical cobalt-60 source, we know that cobalt-60 runs about $100 per microcurie, because these sources run for about $100. And so the eventual problem that we're going to set up here and you guys are going to solve on the homework-- and we can keep going a little bit on Friday if you want-- is, at what point, at what t do you shut off your reactor and extract your cobalt-60 to maximize your profit? And this is an actual value judgment that folks that make cobalt-60 have to make. How long do you keep your cobalt target in there and not hit diminishing returns? Because you're always going to be, let's say, increasing the amount of cobalt-60 that you make if your target is basically undepleted, until you reach a certain half-life criterion. But it might not make financial sense to do so. So let's start getting the solution to N2. Let's see. So I'm going to rewrite our N2 equation and we can start solving it. I'm sorry, that's a dN2 dt, equals sigma-59, flux, N1 minus lambda N2 minus sigma-60 flux N2. So how do we go about solving this differential equation? AUDIENCE: Integrating factor. MICHAEL SHORT: Yep, the old integrating factor. First, we want to get rid of the N1, because that's another variable. And we've already decided right here that N1 is N1 0 times e to the minus sigma-59 flux t. And we haven't specified, what's N1 0? Let's do that now. The last number we'll put in is, we started with a 100 gram source of cobalt-59. When we write it in isotope parlance, it sounds exotic. But that's actually the only stable isotope of cobalt. So that's just a lump of cobalt from the ground that we stick in. So we know what N1 0 is.
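As a rough illustration of the value judgment being posed, here is a sketch of a hypothetical profit model built only from the parameters stated above-- about $0.01 per second of reactor time, $100 per microcurie of cobalt-60, 100 grams of cobalt-59, a flux of 10 to the 14th, and sigma-60 taken as 0. The linear cost model and the grid scan are assumptions for illustration, not the assigned solution method:

```python
import math

s59phi = 20e-24 * 1e14                    # artificial "lambda 1", ~2.0e-9 /s
lam = math.log(2) / (1925.4 * 86400)      # natural Co-60 decay, ~4.17e-9 /s
N10 = 100 / 58.93 * 6.022e23              # atoms in 100 grams of cobalt-59

def profit(t):
    # Bateman-type solution for N2(t), with sigma-60 * phi taken as 0
    N2 = s59phi * N10 / (lam - s59phi) * (math.exp(-s59phi * t) - math.exp(-lam * t))
    activity_uCi = lam * N2 / 3.7e10 * 1e6    # Bq -> Ci -> microcuries
    return 100.0 * activity_uCi - 0.01 * t    # $100/uCi revenue minus $0.01/s cost

# Crude grid scan for the most profitable shutdown time (~11 years)
best_t = max(range(0, 1_000_000_000, 1_000_000), key=profit)
print(best_t / 86400 / 365, profit(best_t))   # shutdown time in years, dollars
```

With these numbers the running cost is tiny next to the value of the cobalt-60, so the optimum lands essentially at the peak of the N2 curve.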
So we can now rewrite this equation as-- let's just go with N2 prime for shorthand. And we'll put everything on one side of the equation. So we'll have, plus lambda plus sigma-60 phi N2, minus sigma-59 phi times e to the minus sigma-59 phi t, equals 0. So what is our integrating factor here? AUDIENCE: e to the lambda plus sigma-60 phi t. MICHAEL SHORT: Yeah. So it is, e to the integral of whatever is in front of our N2, lambda plus sigma-60 phi dt, which equals e to the lambda plus sigma-60 phi t. So we now multiply every term in this equation by our mu, our integrating factor. So let's say we have N2 prime times-- I'm just going to use mu for shorthand, since it's going to take a long time to write. Plus-- let's see, that right there is like mu prime, because if we take the derivative of mu times N2-- let's see. Oh, yeah, this is right. So we have lambda plus sigma-60 phi, N2 times mu, minus mu times sigma-59 phi e to the minus sigma-59 phi t, equals 0. This stuff right here is like an expanded product rule, so we can write it more simply. So we can say, N2 times mu, all prime-- let's see-- equals mu sigma-59 phi e to the minus sigma-59 phi times t. So next we integrate both sides. And we get N2 times mu equals-- let's see. Let's expand everything out now. So that stuff would be sigma-59 phi e to the lambda plus sigma-60 phi, minus sigma-59 phi times t. So if we integrate all of that, we're going to get sigma-59 phi. I think there's an N0 missing here, isn't there? Let's see. There should be an N1 0 missing here. Yep. There's an N1 0 that I dropped for some reason. Let's stick that back in-- N1 0, and N1 0. N1 0 over that stuff, lambda plus sigma-60 phi minus sigma-59 phi, times whatever is left, e to the lambda plus sigma-60 phi, minus sigma-59 phi t, plus C. So now we can say, what's our integration constant C? The last thing we haven't specified is our initial condition. So let's assume that when we started our reactor, there was no cobalt-60.
That makes for the simplest initial condition. So we can substitute that in here. So at t equals 0, N2 equals 0. So we get 0 equals-- if t is 0, then that just becomes sigma-59 phi N10 over lambda plus sigma-60 phi, minus sigma-59 phi, plus C. And so that makes things pretty easy, because we know C equals minus sigma-59 phi N1 0, over that stuff that I keep saying over and over again. And then we've pretty much solved the equation. The last thing we have to do is divide by mu, and we'll end up with the same solution that we got on Friday and the same solution that we got on Tuesday. So I realized, the second after I said it last time, that, oh, we can't just absorb some e to the something t into our integration constant C because there's a variable t in it. So I would say, look at this derivation to know the whole solution. And so finally we end up with-- anyone mind if I erase a little bit of this stuff up top? OK, I'll erase the decay diagram because we're not using that anymore. And hopefully everybody knows our conversion factor. So the end solution, N2 of t, would look like, sigma-59 phi N1 0, over lambda plus sigma-60 phi, minus sigma-59 phi, times e to the minus sigma-59 phi t, minus e to the minus lambda plus sigma-60 phi t-- that whole thing is our lambda 2 in this case. And that right there is our full equation for N2. Now you guys said, let's make this numerical. OK, we have every numerical value already chosen. I've already plugged these into the Desmos thing, so you can see generally how this goes. So we've solved it theoretically, so now let's make this numerical and make some sort of a value judgment, right? We know sigma-59, because we just looked that up. We know phi. We impose that as 10 to the 14th neutrons per centimeter squared per second. There it is. We know our lambda. We don't know our sigma-60, so we're just going to forget that for now. But the point is, for everything except time and N2, we have numerical constants for this.
So once we plug it all in, I modified the Desmos example from last time to have the actual unit. So you can see that our fake L1, our lambda-- we'll call it lambda 1-- equals sigma-59 times phi, which is 20 barns. 20 times 10 to the minus 24th centimeters squared, times 10 to the 14th neutrons per centimeter squared per second. And the centimeter squareds cancel. That becomes 20 times 10 to the minus 10. And we get that our lambda 1 is 2 times 10 to the minus 9 per second. Our lambda 2, well, we already have that, 4.17 times 10 to the minus 9 per second. So this is one of those cases where lambda 1 approximately equals lambda 2. So just like you see in the book, when you plug in all the numbers, you get a very similar equation. Which is to say, there's going to be some maximum of cobalt-60 produced. And in this case, the x-axis I have in seconds, because that's the units we're using for everything. The y-axis is number of atoms. So right there, 6 times 10 to the 23rd, that's one mole. So at most, you can make about 1/3 of a mole of cobalt-60 out of our 100 grams of cobalt-59, which means you're never actually going to have one mole of cobalt-60. Because of the way that our natural and artificial decay constants work out, because they're fairly equal, you're never going to be able to convert and harvest it all. That's the numerical output of this. Now I have another question for you. Is the top of this curve necessarily the profit point for this reactor? No, good answer. Why do you say no? AUDIENCE: Well, you have to write another equation for the costs and the profits to maximize both of them. MICHAEL SHORT: Exactly. That's what you guys are going to do on the homework. I think we've done the hard part together here. And so now I want you guys to decide, given those profit parameters-- and I will write them down on the Pset-- how long do you run your reactor to maximize your cobalt-60 profit?
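The curve described here can be reproduced directly from the closed-form solution. A sketch with sigma-60 phi taken as 0, so lambda 2 is just the natural decay constant; writing the e to the minus lambda-1 t term first in the bracket keeps N2 positive. The peak works out to roughly 0.4 mol, in the same ballpark as the roughly 1/3 mole read off the projected plot, depending on the exact constants used:

```python
import math

lam1 = 20e-24 * 1e14                     # sigma-59 * phi, 2.0e-9 per second
lam2 = math.log(2) / (1925.4 * 86400)    # natural Co-60 decay, ~4.17e-9 /s
N10 = 100 / 58.93 * 6.022e23             # ~1.02e24 atoms of cobalt-59

def N2(t):
    # closed-form series-decay solution; e^{-lam1 t} leads, so N2 >= 0
    return lam1 * N10 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

# The maximum falls where the two exponential terms' slopes balance
t_peak = math.log(lam2 / lam1) / (lam2 - lam1)
print(t_peak, N2(t_peak) / 6.022e23)     # ~3.4e8 s, ~0.4 mol of cobalt-60
```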
So this is one of those examples where we did the whole theoretical derivation, we decided, yes, let's escalate the situation for reality. Everything works out just fine. The final answer, well, you just tack on this extra artificial bit of decay from the cobalt-60 being in the reactor. But the form of the equation is exactly the same. There's just a couple other constants in it for reality. Then if you plug in all the numbers for the constants, so you pretend like that stuff is lambda 2 and that stuff is lambda 1, there's lambda 1 again, there's lambda 2, there's lambda 1, and it's exactly the same functional form as the original solution that we had. When you plug in all the numbers, you get something remarkably similar to what's in the book, just scaled for actual units of atoms and seconds. Is there a question? Yeah. AUDIENCE: On the homework, do you want us to just assume that sigma-60 is 0? MICHAEL SHORT: Sure. AUDIENCE: And have that all cancel out? MICHAEL SHORT: If you can find it, that's great. But I couldn't find it that easily. You can hunt through the different databases in Janis to try to find it. But I'm not going to penalize you if you can't find it, if I couldn't find it. So yeah, a lot of the homework is going to be redoing this derivation for yourself. Because I want to make sure that you can go from a set of equations that models an actual physical situation. And I guarantee you you'll have another physical situation on the exam. Solve them using your knowledge of 18.03, get some sort of a solution, which looks crazy, but it comes from straightforward math. Then plug in some realistic numbers and answer an actual question. How long should you run your reactor to maximize your profit? So it's kind of neat. We've been here one month together, and you can already start answering these value judgment questions about running a reactor. And so again, I don't know who did, but I'm glad you asked, is this field mostly simulation? And the answer is no.
You actually have to use math to make value judgments if you want to go and make isotopes. And then you go and make the isotopes. And you sell them to people like me so I can bring them into class and scare unwitting members of the public, theoretically. Yeah, hypothetical source indeed. So any questions on what we did here, from start to finish? Yep. AUDIENCE: On the equation right in the middle there, where it says-- in parenthesis, it has lambda plus sigma-60-- MICHAEL SHORT: This one? AUDIENCE: Yeah. Was the phi drop just like a-- MICHAEL SHORT: Oh, yeah, that was a mistake. AUDIENCE: OK. I can just pull it back later. MICHAEL SHORT: Yeah, it probably means I was talking and thinking at the same time and forgot to write that. But indeed, it's back everywhere else. Thank you. Yep. AUDIENCE: So the lambda in the final N2 function equation, is that lambda 1, the theoretical lambda 1, fake lambda 1, or is that lambda 2? MICHAEL SHORT: This lambda right here is the actual lambda for cobalt-60. Yep. This right here is just an analogy I'm drawing to say that, it's almost like that stuff is lambda 1. That's our original artificial decay constant. And this stuff here is like our lambda 2, because there's natural decay and then there's reactor-induced destruction. Yep. AUDIENCE: What is that factor of dividing by lambda [INAUDIBLE].. Where does that come from? MICHAEL SHORT: That comes from this solution right here. There's another interesting bit, too. Did we necessarily say-- let's see. Did we necessarily say that-- no, never mind, that's fine. That comes from-- let's trace it through. So mu contains-- yeah, it comes from C. That's right. So we had it over here, because that's part of our solution for-- let's see. Where would we trace it back to? It starts off here in the differential equation. It starts off here as well. So that's part of what's inside mu. OK, that's where it came from. So a mu contains this stuff. Yep. 
AUDIENCE: And then once you integrate, it just becomes e to the that. It doesn't actually become lambda plus sigma-60 phi. MICHAEL SHORT: It becomes e to the lambda plus sigma-60 phi, times t. AUDIENCE: Yeah, but when we do the integration of that, we don't get any factors of lambda plus sigma-60 phi coming down. MICHAEL SHORT: We do, actually. There is a mu stuck in right here. And so I just wanted to say that, expanding this term comes out to lambda plus sigma-60 phi, minus sigma-59 phi, times t. So when you integrate this whole term-- and again, there's an N0 that should be there. You do bring this whole pile in front of the t down on the bottom of the equation. So that's where it comes from. Yeah, cool. I don't think there's any more missing terms. OK, maybe time for a last question, because it is 10 o'clock. AUDIENCE: This doesn't really have to do with your derivation or anything. So I'm pretty sure you also already explained this. But why can you put a cross-section, like that's a measure of probability, in units of centimeters squared. How does that [INAUDIBLE]? MICHAEL SHORT: Ah, so the cross-section is almost like, if you fire a neutron at an atom, the bigger the atom appears to the neutron, the more likely it's going to hit it. So it's kind of a theoretical construct, to say, if something has an enormous cross-section, it's like shooting a bullet at a gigantic target, with a high probability of impact or interaction. Something with a small cross-section, there's still only one atom in the way, but it's like you're shooting a bullet at a tiny target and have less of a chance of hitting it. Does that make sense? Cool. AUDIENCE: So is it just a theoretical construct, or can you actually relate it to an actual physical cross-sectional area? MICHAEL SHORT: You can't relate it to a physical cross-sectional area, as far as I know. It's not like a certain nucleus has a larger cross-sectional area.
Otherwise, things like gadolinium, which has a cross-section of 100,000 barns, would just be a larger atom. And it's not. And yeah, Sean. AUDIENCE: Are they determined only experimentally, or do we know some way to calculate it? MICHAEL SHORT: Good question. They can be theoretically calculated in some cases. In the Yip book, Nuclear Radiation Interactions, he does go over how to calculate those from quantum stuff. And so you'll get a little bit of that in 22.02, in terms of predicting the cross-section for hydrogen and the cross-section for water. And molecular water is not just the sum of its parts. That's the kind of crazy part. Cross-sections do change when you put atoms and molecules together; it just tends to be at lower energies, around thermal energies and such. Let's say, all of them probably can be theoretically calculated, just not that easily. But the really simple ones you can predict theoretically. Predicting the resonances in those cross-sections, that's tough. Let's look at a simple cross-section, like hydrogen. AUDIENCE: Can't you calculate it using simulations? MICHAEL SHORT: Yes. Like if you know, let's say, the full wave function for a given atom or for all the electrons in an atom, you should be able to. So let's do sigma total for hydrogen. Much simpler, this is the kind of thing that can be predicted from theory quite easily. In fact, you will be doing this in 22.02. The other one, no, I don't expect you to be able to predict this. But you will learn why the resonances are there and why they take the shape that they do. So last thing-- we did go like three minutes late, but everyone's still here. You can go if you have to, by the way. I can't keep you here. If you want to know, if you want to make this equation more realistic and account for every possible energy in the reactor, you can make these cross-sections a function of energy, and integrate over the full energy range. And this is actually how it's done.
And you will do this in 22.05, where you'll be able to take the energy-dependent cross-section in tabulated or theoretical form, and then integrate this whole equation, and also account for the fact that the flux has an energy component. Usually, it looks something like-- in a light-water reactor, if this is energy and this is flux, there'll be a bit of a thermal spike. There won't be much going on in the middle. I'm sorry, a fast spike, and there will be a thermal spike. And knowing how many neutrons are at every energy level, what's the probability of a neutron at every energy level interacting, and what are the cross-sections at every energy level, integrated over the full energy range, is what gets you the accurate, correct solution. What we've done here is called the one-group approximation, where we've assumed that all the neutrons have the same energy, thermal energy, which is an OK assumption for thermal light-water reactors. And it'll get you a good estimate. The more neutrons you have at different energies, the less good that estimate becomes. Yeah. AUDIENCE: Wait, so that thermal energy you gave us, like 0.02 [INAUDIBLE], that was estimated for the energy of the neutrons being fired. MICHAEL SHORT: Let's say you have a neutron at about 298 Kelvin. From the Maxwell-Boltzmann temperature distribution, you can turn that temperature into an average kinetic energy. And that average kinetic energy will give you a velocity. And that velocity is around 2,200 meters per second. And that average kinetic energy happens to be about 0.025 eV. So thermalized neutrons, like the ones flying about in the reactor, are moving quite slowly at just 2,200 meters a second, compared to the fast neutrons, which can be moving closer to the speed of light-- not that close, but much, much, much closer. Cool. I'll take it as a good sign that you all voluntarily stayed a little late. So did you guys find this example useful?
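The back-of-the-envelope conversion from thermal energy to the 2,200 meters per second figure can be checked directly. The neutron mass and the eV-to-joule factor are standard constants, not stated in the lecture:

```python
import math

kT_eV = 0.025            # thermal energy near room temperature, eV (from lecture)
E_J = kT_eV * 1.602e-19  # convert eV to joules
m_n = 1.675e-27          # neutron rest mass, kg

# E = (1/2) m v^2  ->  v = sqrt(2E/m)
v = math.sqrt(2 * E_J / m_n)
print(v)                 # ~2.2e3 m/s, the canonical 2200 m/s thermal neutron speed
```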
MIT 22.01 Introduction to Nuclear Engineering and Ionizing Radiation, Fall 2016. Lecture 30: Radiation Dose, Dosimetry, and Background Radiation. The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: So, before we begin today's bit on dose, dosimetry, and background radiation, I promised you guys a story about how to use 22.01 to get out of Apartheid South Africa. So, I got told this story when my cousin was about to get married, because he needed a diamond and he was going to go buy one, and his dad said, don't. So back in the 70s, when my uncle and his family were living in South Africa, anyone who wasn't Dutch white-- so that would be blacks, Jews, including us, anyone else-- was considered a second-class citizen by the Apartheid Government. And you were allowed to leave, because they didn't want you there, but you had to surrender all of your funds to the government in order to leave. Give me everything you have and then you can leave the country penniless is not a winning proposition. So, my uncle and his brother devised a pretty brilliant idea to get their funds out of South Africa unnoticed. They were both dentists or radiologists or some sort of medical doctor that requires x-ray reading. So, one of them gave the other one all of his money, then reported bank statements to the Apartheid Government and said, I'm pretty much penniless. My practice went broke. I want to leave the country. And they said, OK, get out of here. So he went to the US, established a dental or radiological or some other sort of practice, I forget which one, and started requesting his brother, back in South Africa, to send him x-rays to read to help boost his business. Because that was their actual business, it was pretty legit.
So then my uncle would send him packets of x-rays to interpret and send back. Except that there would be 10 x-rays on the front and 10 on the bottom, and the middle 80 would be hollowed out. AUDIENCE: Oh! MICHAEL SHORT: Now that would ordinarily have tripped off alarms, because any change in density would trigger a change in x-ray contrast. Because these packages were being inspected by x-ray, and if it looked like these x-rays were hollow and you were smuggling something out, they'd be caught and confiscated. So what sort of materials are valuable that you find in South Africa, that are pretty similar in x-ray contrast to other light media like film? AUDIENCE: Diamonds. MICHAEL SHORT: Diamonds. So the remaining brother went and converted all of his life savings into diamonds, which is something that you can do in South Africa because this is where diamonds come from. He then slowly, over a period of months or years, sent packets of hollow x-rays full of diamonds to his brother in the States, knowing full well that the mass attenuation coefficients of soft tissue and carbon are pretty similar, and so are their densities. So their total attenuation coefficients are pretty similar, too. Let's pull those up so we can check it. Carbon graphite. I'm going to add a new tab and bring up soft tissue and we can compare them. Let's see, what's the most similar-- so long as you can see it-- thing to film? What should we call film here? X-ray film? AUDIENCE: I don't even know what X-ray film [INAUDIBLE] MICHAEL SHORT: Photographic emulsion. How about that? AUDIENCE: Kodak. MICHAEL SHORT: Is there something Kodak? OK. Kodak, standard nuclear. I don't think that sounds right. Let's go with polyethylene and plastic. Carbon, plastic. Carbon, plastic. AUDIENCE: [LAUGHTER] MICHAEL SHORT: Carbon, plastic. Basically identical. So this is a way they were able to smuggle wealth out of the country without x-ray contrast tripping off the guards.
And once all of the life savings had been converted into diamonds, he then went to the government and said, I'm penniless. My practice went broke. I want to get out of here. And they said, good riddance. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yeah. So they were able to restart their life in the States with all of the money that they had had in South Africa. And when my cousin wanted to get married and said, I'm thinking of buying a diamond, my uncle just said oh, don't. He's like, what, don't get married? He said, no, don't buy a diamond. Here, let me take you to the diamond drawer. AUDIENCE: [LAUGHTER] MICHAEL SHORT: Yeah. Because there was some leftover. So he said, pick out an extra diamond. And that's the story of my cousin's engagement ring, as well as why part of my family's here in the States. AUDIENCE: [INAUDIBLE] That's like a very cool diamond story. MICHAEL SHORT: Yeah, very, very nuclear diamond story. Diamonds aren't forever, as the Finnish have shown us, but they can get you out of repressive regimes. OK, back to the dose stuff. So today, and for the rest of the course, we get into biological and chemical effects of radiation. As soon as this pops up. And so on the first slide is everything you need to know about dose and units. Don't worry, it's not up there yet. I know that different units of dose are a common point of confusion. So I wanted to put everything on one slide so you can refer back to it like a cheat sheet. So, you've probably heard of the roentgen before. You've definitely seen the roentgen as a unit because you've all looked at those pen or pocket dosimeters that you actually had at the nuclear reactor, when you guys took a tour of it and controlled the control rods.
The roentgen is not really a unit that we use much anymore for very careful calculations, because you have to do some tissue equivalency stuff in order to go from ionizations in air, which is what it actually measures-- it's the amount of charge dissipated or built up in air-- to some damage to soft tissue. And the way you actually calculate roentgens from first principles, linking the physics and the biology part, is remember this equation here? Stopping power, which is some energy transfer, divided by the energy required to make an ion. Each one of those will give one electron unit of charge towards [INAUDIBLE] coulombs. And so this is the direct link between the physics and the chemistry/biology in this course. It's not something that's done that carefully in any of the readings, which is why I'm going to harp on it here in lecture. These two parts of the course are often taught differently. And they're actually totally related and everything's all the same, which is kind of nice. You don't have to just relearn a whole new lingo and field. Then there are the SI units of dose. The ones that, when you do calculations in the homework and the rest of your life, I recommend that you use whenever possible, because they're in units that we're familiar with, in standard units. The one you start with all along is the gray. A gray is the simple measure of absorbed energy in joules per kilogram of whatever. And so calculating it is fairly straightforward, too. For example, if you want to know what sort of dose you would get in gray from absorbing gamma rays, you can use this old equation from the first third of the course. And if you integrate this from, let's say, over the range of whatever object or person you happen to be irradiating, you'll get some fractional difference in intensity.
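For reference, the roentgen-to-absorbed-dose link works out numerically like this. The defining value of 2.58 times 10 to the minus 4 coulombs per kilogram of air, and the W-value of about 33.97 joules per coulomb for air, are standard numbers, not stated in the lecture:

```python
# Roentgen -> absorbed dose in air, linking ionization charge to energy.
# Both constants below are standard reference values (assumed, not from lecture).
R_C_per_kg = 2.58e-4   # 1 roentgen, coulombs of ionization per kg of air
W_air = 33.97          # J/C: average energy expended per coulomb of ions in air

dose_air_Gy = R_C_per_kg * W_air
print(dose_air_Gy)     # ~8.76e-3 Gy in air per roentgen
```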
That, multiplied by the original intensity of gammas, which could be given in maybe gamma rays per centimeter squared per second, maybe times time to get total number of gammas per centimeter squared. All that multiplied by the energy of each gamma, divided by the mass of whatever is doing the absorbing, equals your dose in gray. So you can use all the old stuff from the previous parts of the course directly to calculate dose in gray. And this is the starting point for any calculation. If you don't know what tissue was exposed, what type of radiation provides what biological effectiveness, it doesn't matter. You just start here. Then the other unit you may have seen is a unit, not of energy absorption, but of increased risk for something going wrong in the biological sense. It's called the sievert. For simple things like whole body dose from gamma rays, sieverts equals gray, because sievert is multiplied by this quality factor, or this effectiveness factor. That Q factor is actually a couple of factors. There's a Q for the type of tissue and there's a Q for the type of radiation. And the total quality factor for whatever you're trying to calculate is just the multiplication of these two. And so it's fairly easy, if you're ever dealing with whole body gamma dose, gray equals sieverts. If you're dealing with pretty much anything different, gray does not equal sieverts. There'll be just some factor to add in, which can be looked up from a lookup table but, as I've told you before, I don't like that explanation, look it up on a table. We're going to explore why the lookup tables are constructed the way they are. And then there's the CGS units, the ones that are based in centimeter-gram-second, instead of kilogram-meter-second. The rad is a simple measure. It's, let's say, 100 rad is just 1 gray. Where the rad actually comes from is, it's defined as 100 ergs absorbed energy per gram, where an erg is 10 to the minus 7 joules and a gram is 10 to the minus 3 kilograms. 
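The gray recipe just described (attenuated fraction of the incident gammas, times the energy per gamma, divided by the absorbing mass) can be sketched as follows. Every input number here is invented purely for illustration:

```python
import math

# All of these numbers are made up for illustration; only the recipe
# (attenuated fraction x photon energy / mass) comes from the lecture.
I0 = 1e6            # incident gamma flux, gammas/cm^2/s (assumed)
t = 3600.0          # exposure time, s (assumed)
area = 1.0e4        # exposed area, cm^2 (assumed, roughly 1 m^2)
E_gamma_MeV = 1.0   # energy per gamma, MeV (assumed)
mu = 0.07           # total attenuation coefficient in tissue, 1/cm (assumed)
x = 30.0            # thickness of the absorber, cm (assumed)
mass_kg = 70.0      # mass doing the absorbing, kg (assumed)

MeV_to_J = 1.602e-13
absorbed_fraction = 1.0 - math.exp(-mu * x)  # fractional loss of intensity
n_gammas = I0 * t * area                     # total gammas incident
dose_Gy = n_gammas * absorbed_fraction * E_gamma_MeV * MeV_to_J / mass_kg
print(dose_Gy)      # ~0.07 Gy for these made-up inputs
```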
And so, you can do the mental math there to make sure these all work out. And then a REM, a roentgen equivalent man, is just a hundredth of a sievert. So there's historical basis for this. Back in the day, more folks used CGS units. There's been a push to SI units, which I happen to like because everything works out and you don't have to remember things like 10 to the minus 7 joules or whatever like that. So I'd say, when in doubt, for equal comparisons always use the SI units and always start with the gray, because that's something where you can take a physical calculation of energy per kilogram and go into some increased cancer risk by using the dose quality factors. So let's take a look at how these appear. First of all, gammas, x-rays, electrons, positrons of any LET. LET, as I mentioned before, is linear energy transfer. And to put this mathematically, you've actually seen this before, which would be some change in energy over some change in distance. It's the stopping power. It's not like the stopping power, it is the stopping power. So with the formulas you got in the second half of this class, you can calculate linear energy transfer. Now, why are these things given in, let's say, discrete tables, or now what's currently recommended is these functions? Does anyone have any idea? How many of your Course 7 friends know the formula for stopping power, or could parse it even, or understand it? You don't have to, not everybody has to. So, for the rest of us, there are simpler empirical relations, or relations that get the numbers right that aren't necessarily based on physics. So a simple lookup table, for those who don't have time to take 22.01 or something beyond, is the easiest thing to do. And in most cases it works. It's not exact, but it's probably close enough. Given uncertainties in the amount of radiation that one could absorb, or the weight of a certain organ, or the energy of some x-ray tube, I think these empirical things are pretty much good enough.
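The CGS-to-SI bookkeeping on the slide can be verified in two lines:

```python
# Sanity checks on the CGS and SI dose units from the slide
erg = 1e-7     # joules
gram = 1e-3    # kilograms

rad_in_Gy = 100 * erg / gram   # a rad is defined as 100 ergs absorbed per gram
rem_in_Sv = 0.01               # a rem is a hundredth of a sievert

print(rad_in_Gy)   # 0.01 J/kg -> 100 rad make up 1 gray, as stated
```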
And what this tells you is that there is some effectiveness of different types of radiation and different energies of radiation at imparting energy to the parts of cells and organs that cause damage. To say that in a little smaller sentence, different energies of radiation can have different effects biologically, and these tissue factors account for that. There's also a table I want you to keep in mind, because you're going to have to do a calculation about it: the principal elements in soft tissue of unit density, given as number densities, which you've seen before. If you want to calculate the dose using a stopping power formula to a human, you have to know what this human is made of, and this is a pretty good assumption. And this is something you'll be doing on homework number 8, is finding the dose that you're giving to each other in a particular situation. If anyone's seen the particular situation, check OCW for last year's course and you'll see what that situation will be. Has everyone gotten your whole body counts at the EHS Office? If anyone hasn't, do them in the next week because you'll need that data for the homework. You'll also need this table. Because if you think about it, if you want to say, well, what's the damage by electrons to soft tissue? And you want to calculate this from scratch. You have these four number densities, so we'll keep those in account. And let's put up the formula for ionization stopping power again. Comes out with 4 pi, k naught squared, little z squared, big Z, number density, e to the fourth, over m v squared, log. Let's see, what goes on top again? Oh, yeah, 2 m v squared, over the mean ionization energy. The nonrelativistic form. Which of these terms vary depending on the atom that an electron or whatever would strike? Let's circle them, there's a couple. AUDIENCE: Big Z. MICHAEL SHORT: Big Z, what the electron's hitting. N, the number density. AUDIENCE: Ionization potential.
MICHAEL SHORT: Yes, the mean ionization potential. So these three terms are the only things that change when you're doing a stopping power to dose calculation. So if you want to get the total dose in gray to a human, you have to sum over these four different isotopes. Actually, wait. Let me put all the other junk in front to make it quicker. So the 4 pi k0 squared little z squared e to the fourth over m v squared is just a constant, times the sum over all your isotopes of Z i times the number density of isotope i, which inside there has the element fraction of that, times the log of 2 m v squared over the mean ionization potential of isotope i. And in this case, notice there's no isotopes given. They're just given as elements. Why do we do that? Why don't we care for humans? How many isotopes of hydrogen tend to exist in you? One, pretty much one, to about five significant digits. You all have a little deuterium. Something like 1 in every 20,000 atoms of hydrogen is deuterium. But it's not a lot. I think it's even less than that. Carbon is just carbon-12, except for the tiny amount of carbon-14 you use for radiocarbon dating. Oxygen, it's oxygen-- think what, 16? Nitrogen is nitrogen-14. So as long as you know what isotopes to use, you'll know what Z's and what I-bars to use. And the number density is given here. So this right here is how you calculate dose in gray to a human over some distance. Then all you'd have to do is integrate that over your-- let's say thickness of the human, whatever that happens to be. And you get the total amount of dose imparted to them. So a lot of questions came up in last year's class. How do we actually do these calculations? Well, this is how, right here. First, separate out everything that's a constant. Because you only have to calculate it once, as I hope problem sets 6 and 7 taught you guys: separate out whatever you can first, and don't repeat yourself. Then sum over all the things that are unique to each isotope.
And inside this number density is the fraction of that isotope in every human. So it's all built in. So these calculations aren't that bad. Since you know how to do stopping power, you can take out 2/3 of the terms. I don't know why I still have this. It's not terrible. Can anyone not see through how to do this? Or yeah, have a question? AUDIENCE: Where do we find the ionization term? MICHAEL SHORT: The mean ionization potential can usually be approximated as about 10 electron volts times Z. Except for the very light isotopes like hydrogen, where it's somewhere between 10 and 19 eV. I would say one, you can just look them up. Or two, you can use this empirical relation to get a good approximation. And empiricism definitely enters into the biological world, because uncertainties abound. And it's not always worth being ultra crazy exact, although it can't hurt. Any other questions on how to actually carry out a dose calculation using stopping power? Hopefully it's pretty straightforward. Guess we'll find out on the homework. The other quality factors to mention-- there are some different ideas about these quality factors. Notice the scales are fairly coarse. So again, there's a lot of uncertainty or slop in these values. But notice that for, let's say, X-rays, gammas, betas of all energies and charges, the quality factor is 1. Why do you think that is? Let's go for the case of X-rays or gammas. What tends to be the attenuation coefficient of any photon of considerable energy in soft tissue? Pretty low. And the amount of energy that can be transferred by those photons is variable, anywhere from pretty low to pretty high. And so the resulting electron cascade isn't going to be that damaging. And it might not even be that localized. So let's say if you really want to know how much damage is it going to go do to the DNA of a cell, where it could mutate and cause cancer, not that much.
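A minimal sketch of that per-element sum for electrons in soft tissue. The number densities below are typical tabulated values for unit-density tissue (assumed here, in the spirit of the slide's table, not copied from it), and the mean ionization potentials use the 10 eV times Z rule of thumb from the lecture, with about 19 eV for hydrogen:

```python
import math

# Soft-tissue composition as (Z, number density in atoms/cm^3).
# These densities are typical tabulated values for unit-density tissue
# and are an assumption of this sketch.
tissue = {
    "H": (1, 5.98e22),
    "C": (6, 9.03e21),
    "N": (7, 1.29e21),
    "O": (8, 2.45e22),
}

T_eV = 1.0e5   # electron kinetic energy, 100 keV (roughly nonrelativistic)

total_electron_density = 0.0
bethe_sum = 0.0
for Z, N in tissue.values():
    I_eV = 19.0 if Z == 1 else 10.0 * Z   # lecture's rule of thumb for I-bar
    total_electron_density += N * Z
    # For T = (1/2) m v^2, we have 2 m v^2 = 4T, so the log term is ln(4T/I)
    bethe_sum += N * Z * math.log(4 * T_eV / I_eV)

print(total_electron_density)  # ~3.2e23 e-/cm^3, close to water's 3.34e23
```

The electron-density check against water is a quick way to see the assumed composition is physically sensible; multiplying `bethe_sum` by the constant prefactor out front recovers the full stopping power.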
Most of the gammas pass through you, and the ones that do get absorbed can have rather long energy deposition tracks. Neutrons, however, interact via nuclear stopping power. Or let's say they just interact with the nuclei of you, or whatever they're irradiating. And so when you knock out a nucleus, it can then slam into other atoms, causing a huge and dense cascade of ionization. If that cascade happens to be near the nucleus of a cell, you better believe it's going to cause a lot of damage. And that's why these more energetic neutrons have a much higher quality factor, because they're much better at causing damage, until you reach some sort of threshold. Why do you think it goes down at higher energies? Yeah. AUDIENCE: Well, because there's less of a probability that it would actually interact with [INAUDIBLE]. MICHAEL SHORT: Absolutely. Right around 2 MeV or 1 MeV, these cross sections tend to go down. If we look at the cross-section for neutrons in anything-- let's look at hydrogen. [INAUDIBLE] still got to do the screen cloning thing again. Bear with me. Let's look at any old cross section for, let's say, neutron scattering in, I don't know, oxygen. We've looked at hydrogen enough already. Cool. You can see that. Oxygen-16. We have incident neutrons, elastic scattering, the bouncing off. Let's look at the form of this cross-section. Right around 1 MeV, things start to dip. And so yeah, the probability of any sort of interaction is going to go down. In addition, the atom that's struck has a much higher energy, and therefore a higher range. So chances are, even if the neutron strikes an atom near the nucleus, it will have a higher range and can travel farther before that secondary cascade ends up depositing most of its damage at a lower energy. So if you remember the stopping power for ionization or nuclear stopping power, they both look the same. So I'm not going to label which one.
It's fairly low at high energies, peaks at a rather low energy, and then comes down. So it's at the end of the range of whatever particle, whether it be the neutron or the atoms that it struck, that it does the most damage. So you can kind of think of all forms of radiation-- except for photons-- as armor-piercing bullets. They don't do damage right where they enter. They do damage right where they stop and explode. In the case of armor-piercing bullets, it's a literal explosion. In the case of neutrons, electrons, protons, heavy ions, it's at the end of their range that they have the most stopping power. And that's where the dense cascade's going to be. So chances are, again, if a neutron happens to interact with the nucleus of a cell-- that's the simplest cell I can draw-- if a neutron comes in and strikes another atom, it may move far away before its armor-piercing explosion. Let's see the other ones. For protons, depending on which book you go to, you get a different effectiveness. It's higher than that of gammas and X-rays and electrons, because you get a big cascade at the end. And alpha particles are really, really damaging. Even at energies of a few MeV, they have a very short range. They tend to deposit a ton of energy. And that's why they have a huge effectiveness. This is also why alphas are the-- that's the one cookie you don't want to eat, if you guys remember this, the four cookies problem. Never eat the alpha. It's not going to get in through your skin, but if it gets into your body and incorporated into the material directly surrounding your nucleus, that's how things go really bad. That's why smoking's so bad. Did anyone end up getting into a smoke shop to do that measurement? I did check the one by my house, and they just put up a sign that said no one under 21. So you guys are right about that. Weird law. Whatever. Now let's look at the tissue weighting factors. We've talked about the factors for different types of radiation.
It also matters what tissue it enters. So for things like skin or bone surface, you think of these tissues as not that critical. If you get a scratch and you lose some skin, it's not that bad for the body. Same thing if a little bone chip flakes off. You might get shin splints, but it's not so bad. Not for the same reason. These tissues aren't dividing very fast. The surface of your bone, the hard part, is basically standing still. It's just calcified minerals with some osteocytes trapped in there. Not much happens. What's happening in these tissues constantly-- gonads, bone marrow, colon, lung, stomach? AUDIENCE: [INAUDIBLE] cells. MICHAEL SHORT: Yep. This is where stem cells and fat-- rapidly dividing cells-- tend to be found. So there have been some theories and some papers saying whatever cancer you're going to get, you probably already got it in the first few years of life, when you're just a big rolling sack of stem cells. This is part of why occupational dose limits for infants and pregnant women are much, much lower, because they are giant sacks of stationary stem cells. And you don't want to irradiate something that's dividing really, really fast. And so the older you are, the more OK it is to get more and more radiation dose, because a lot of these effects take a long time to manifest. Because they all start with a single cell. And it takes a long time for that single cell to exponentially grow and divide over a long timescale into a mass that would be considered a tumor. So it's a little worrying to think, OK, probably most of the cancer I'll get I got when I was five. But then again, it means, don't worry about it. What can you do? There's what? AUDIENCE: It's free. MICHAEL SHORT: Yeah, it's free. It's done. Not all of it. You can still limit your dose, because well, one, any acute radiation exposure will have short term health effects. And two, your cells are still dividing.
As we had a seminar speaker say, a biological organism at static equilibrium is not very interesting. It's dead. It's not dividing anymore. So your cells are still dividing. It still means you should minimize radiation exposure. But a lot of what happened already happened. And if you can see here, the pattern is the more rapidly the cells are dividing, the higher its tissue quality factor. Because the more quickly these effects would manifest themselves, and the more likely the cells are to divide with that mutation before the cellular repair mechanisms fix it. We'll get into a lot of that when we talk about biological effects, probably next week. So now, how do you do a calculation of dose in sieverts? First you can take the dose in gray, like we have over here, multiply by these radiation weighting factors, and you get a total amount of dose to that tissue, where that tissue may be a certain organ, a part of your muscle, whole body if it was a broad blast of radiation from a bomb far away or something like that. And you get these single doses to tissue, where each of these radiation equivalent factors can be equated to this average quality factor, where you integrate the quality factor as a function of length times the dose at that length. This is where the stopping power formula comes in: you'll have a stopping power as the energy decreases, as the particle moves through the material. And its stopping power will change. So then you have to integrate that stopping power, times some constants and stuff, over that distance, times the quality at that distance, to get the simple weighting factor. You then take these weighting factors, plug them into your dose to tissue, and sum up all the exposures to different tissues. So each tissue will have its own dose in sieverts. It'll have its own tissue weighting factor. You sum up all the tissues exposed, and you get the total effective dose to the whole body.
This means that some organs incur dose faster than others for the same radiation exposure. So the tissue weighting factors should sum up to 1. And I believe-- I remember doing this calculation, but it may be worth your while to just try it. These numbers should sum up to 1, because all of these individual organs plus the remainder of your body should constitute the whole body. And so this is, in a nutshell, how you do these dose calculations. So let's take an example from the actual reading. Let's say a worker gets 14 milligray of uniform whole body dose-- this would just be from background radiation, cosmic rays, food, whatever-- plus a targeted dose of 8 milligray to the lung from alphas, plus 180 milligray from betas in the thyroid. Anyone know why they chose alphas in the lung and betas in the thyroid? What are the most likely sources of those? AUDIENCE: Iodine and nicotine. MICHAEL SHORT: Iodine and let's say smoking. Yeah. Yeah, exactly. As we saw, a lot of the radon daughter products tend to be alpha emitters, and you tend to inhale those through the lungs. And iodine is a beta decay with about an eight day half life, and that gets preferentially absorbed in the thyroid. So this is a pretty realistic scenario. And we'd say, how much effective dose did this person get? To do this calculation, you first-- well, let's go through those steps. You look at the dose to each tissue times the effectiveness for that type of radiation. So the lung got 8 milligray of alphas. So you multiply by 20 for that quality factor. And the lung gets 160 millisieverts. You do the multiplication for the thyroid, the multiplication for the whole body. And then you do a summation of these single tissue doses here, here, and here, times the tissue weighting factors, which you then look up from that table or calculate from the cell division rate here, here, and here. And you get a total dose of 42 millisieverts. So I want to point something out to you guys. His thyroid got 180 millisieverts. 
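The arithmetic in this worked example is easy to check in a few lines. This is a sketch, not anything from the slides: the tissue weighting factors below (lung 0.12, thyroid 0.05) are the ICRP-60-style table values, which reproduce the roughly 42 millisievert total quoted above.

```python
# Radiation weighting factors from the lecture: alpha = 20, beta = 1, gamma = 1.
W_R = {"alpha": 20, "beta": 1, "gamma": 1}

# (dose in mGy, radiation type, tissue weighting factor). The whole-body
# entry uses w_T = 1 because a uniform exposure hits every tissue, and the
# tissue weighting factors sum to 1.
exposures = {
    "whole body": (14.0,  "gamma", 1.0),
    "lung":       (8.0,   "alpha", 0.12),
    "thyroid":    (180.0, "beta",  0.05),
}

effective_dose = 0.0
for tissue, (d_mgy, rad, w_t) in exposures.items():
    equivalent = d_mgy * W_R[rad]        # single-tissue dose in mSv
    print(f"{tissue}: {equivalent} mSv")
    effective_dose += equivalent * w_t

print(f"effective dose: {effective_dose:.0f} mSv")  # -> 42 mSv
```

The lung line reproduces the 160 millisieverts from the lecture (8 mGy times 20), and the thyroid line the 180 millisieverts, before the tissue weights shrink them into the whole-body total.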
He got 42 millisieverts. Interesting. Sounds a little counterintuitive. But in this case, these doses for each tissue are calculated for that tissue. So if you have a probability of organ failure or mutation, you can now then know how much increased risk he may have of thyroid cancer specifically, or sum it up to an equivalent whole body risk. And it just so happens you're only allowed about 50 millisieverts of exposure. So he'd only be allowed another eight. But we'll get into limits and what this ICRU whatever whatever means. I'll actually show you the document. It's also posted online. This International Committee on Radiation U. I forget what the U is. But this is the whole document that has the basis for these recommendations, the numbers, and the reasons. And that's all online for you guys to check through. So anyone have any questions on an example dose calculation? Cool. Let's look at some of the ways that you'd actually measure dose. One of them we looked at the first day of class, the old Chadwick experiment. Send radiation through a fixed amount of area so you know the flux. If you know the total amount of gammas produced by this X-ray tube, you know the area, then you know the solid angle. You know the flux. Then you have just a free air chamber with a high voltage to suck up those ions before they recombine. And that's how you can calculate things like dose in roentgens. You also have pocket versions of these things, sealed tubes with two electrically insulated electrodes. And that cannot discharge unless ions in the gas allow them to. And this is the basis behind these civil defense air wall chambers, one of which I happen to have right here. So I want to pass this around and let you guys take a look at it. 
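The free-air chamber measurement above can be put into numbers. This is a minimal sketch: one roentgen is defined as 2.58e-4 coulombs of ionization charge liberated per kilogram of air, and the chamber volume below is an arbitrary illustration.

```python
# The free-air chamber turns collected ionization charge into exposure.
# 1 R = 2.58e-4 C of liberated charge per kg of air (ICRU definition).
C_PER_KG_PER_R = 2.58e-4

def exposure_roentgen(collected_charge_c, chamber_volume_m3):
    air_mass_kg = 1.205 * chamber_volume_m3   # air is about 1.205 kg/m^3
    return collected_charge_c / (air_mass_kg * C_PER_KG_PER_R)

# One liter of air receiving exactly 1 R worth of ionization:
q = 1.205e-3 * C_PER_KG_PER_R   # charge liberated in 1.205 g of air at 1 R
print(exposure_roentgen(q, 1e-3))  # -> 1.0
```

The high voltage in the chamber exists precisely so that this charge is all collected before the ions recombine; otherwise the measured charge, and hence the inferred exposure, reads low.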
Like the ones you saw in the reactor, you can look through and see a little needle that will tell you the-- not dose, but the amount of ionization this thing has received in roentgens, which can then be equated with some calculations to dose in soft tissue. And also there's the base for the civil defense dosimeter. I want to tell you guys how it works. It's a battery. That's it. If you open that thing up, there's a couple of big D cells in there and a voltage divider. And you just turn the knob-- which sounds complicated, all you're doing is turning a potentiometer-- to decide at what voltage this wire end inside of the tube will be. And that voltage deflects the wire a little bit by just coulombic attraction or repulsion, bends it, and gives you the dose. And it stays there. Nothing can get in or out of that sealed chamber, except penetrating radiation. When gammas or neutrons get through, they can cause ionizations in the gas. Those gas ions can move to these electrodes, partially neutralizing them, making the needle tilt a little farther and a little farther. So the interesting thing is when that needle reads 0 roentgens, it's fully charged. And as the detector discharges from radiation interactions, the needle moves higher on the roentgen scale. Because you can paint whatever scale you want on it, as long as the physics works out. So it's kind of neat to think that the charge is highest when the dose is lowest. Because all the charge is doing is deflecting that little needle. Quite simple design, and quite robust. These things were designed by civil defense in case the Cold War became a hot war, and they'd have to last for a very long time. So having had this one for about 10 years, I can tell you I've not yet filled the meter, which is probably a good thing. And I took that on flights across the world and hiking in Nepal, where the background dose was considerably higher. 
And the needle moved like half a roentgen for the entire three weeks up at elevation and plane rides. And I got more dose on the plane ride than I did on the hike. Interesting fun fact. That's pretty much exactly what you should see. And so Alex tells me these things are starting to get kind of rare on eBay, and that's a shame. Because this is the best possible teaching dosimeter there is. If you understand 8.02 and a little bit of radiation, you know how these work and can predict what the dose is actually going to be. Now, how do you do things like detect neutrons? If you want to detect neutrons, you want a good moderator. Because well moderated neutrons deposit all their energy in the detector instead of bouncing off, transferring a little energy, and leaving. If you want to know how many and what energy neutrons you have, you can fill a similar chamber that's got a high voltage. It's got a wire on the inside. But instead of air or some other gas in there, you can fill it with things like ethylene or propylene or some sort of very hydrogenous hydrocarbon gas, something full of hydrogen to act as a good moderator. A struck hydrogen atom then recoils as a proton, which then ionizes a lot of other atoms, causing an ionization cascade, and leading to some current pulse, just like any other detector. There's going to be some movement of current, which is picked up as a count. You might then ask also, why is that alpha source there? Anyone have any idea? That's your calibration source. So as the gas in this chamber is struck by neutrons, you will be blasting some of the hydrogen atoms out of the ethylene or propylene or whatever gas you have. The gas will change in effectiveness over time. The energy of the alpha particles will not. So that is your absolutely fixed energy calibration source. So you know exactly-- if, let's say, 3.72 MeV alphas make a current pulse of a certain height, then you can equate that to a 3.72 MeV neutron that would strike a hydrogen atom. 
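The one-point calibration trick described here can be sketched in a few lines. The pulse heights below are in arbitrary digitizer units and are purely illustrative; only the 3.72 MeV alpha energy comes from the lecture.

```python
# The built-in alpha gives a fixed-energy calibration line: its energy never
# changes, even as the fill gas degrades and the gain drifts.
E_ALPHA_MEV = 3.72

def recalibrate(alpha_pulse_height):
    """Return the MeV-per-channel gain from the current alpha pulse height."""
    return E_ALPHA_MEV / alpha_pulse_height

# Fresh gas: the alpha lands at channel 1000.
gain = recalibrate(1000.0)
print(gain * 500.0)   # a channel-500 proton recoil reads as 1.86 MeV

# Degraded gas: the same alpha now only reaches channel 800, so the gain
# is rescaled and the same channel-500 event is reinterpreted.
gain = recalibrate(800.0)
print(round(gain * 500.0, 3))  # -> 2.325 MeV
```

This is why the degrading gas doesn't ruin the detector: the alpha line moves with the gain, so every spectrum can be rescaled against it.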
And because the gas degrades over time, you have to recalibrate with that built-in alpha. And any sort of shutter-- like a little piece of foil-- will block the alpha so that you can use it as a neutron detector. Quite clever design, in my opinion. It accounts for the degradation of the gas and the detector. Anyone seen one of these things before? Then there's the Geiger counter, which is an ionization counter run in avalanche mode. You can use these free air ionization chambers or other counters as an energy proportional counter, where a higher energy particle will impart a bigger ionization cascade. And you can then use that to get energy resolution. Or you crank up the voltage like crazy, so that any count of any energy causes an intense ionization cascade and a huge current pulse. And this is the basis behind cheap Geiger counters, like the ones we build in our department. Those old Soviet SPM 20 tubes have actually survived being stepped on and crushed. As long as the electrodes don't short out, they still work. So there's a few folks in the department that have clearly bent Geiger tubes. One of them looks like it's been chewed, but they still work. Because any sort of anything interacting with the gas will cause an ionization cascade, which is immediately sucked into the electrodes by a high voltage and collected as a current pulse. And that's just a cutaway of what it looks like on the inside. And this is the circuit for a Geiger counter. There is a voltage, a resistor, a capacitor, and a tube. That's all you need. Everything else in the MIT Geiger counter is there to make lights and sound. It's just for fun. But the actual Geiger counter itself can be made incredibly compact. The bigger the tube, the more radiation it will catch, just because of its size. But otherwise, as long as you get the voltage high enough to cause this ionization cascade, it works. It's really, really robust. 
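The voltage-resistor-capacitor-tube circuit can be sketched to first order: the avalanche dumps a slug of charge onto the capacitor, and the resistor bleeds it off with time constant R times C. The component and charge values below are invented for illustration, not taken from any particular counter.

```python
import math

# Minimal sketch of Geiger pulse shaping: peak voltage is Q/C, and the
# pulse decays exponentially with tau = R*C. All values are illustrative.
R = 1.0e6      # ohms
C = 100e-12    # farads
Q = 1.0e-9     # coulombs of avalanche charge

def pulse_voltage(t):
    tau = R * C
    return (Q / C) * math.exp(-t / tau)

print(pulse_voltage(0.0))              # -> 10.0 V peak
print(round(pulse_voltage(R * C), 3))  # one time constant later: 3.679 V
```

Because the avalanche saturates, every count produces roughly this same big pulse regardless of the incoming particle's energy, which is exactly why a Geiger counter counts but cannot do spectroscopy.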
So I want to skip ahead past some of that stuff, and then talk about how you measure dose in humans. Who here has seen one of these TLDs, or Thermoluminescent Dosimeters, before? Our reactor trainees have. And you've worked on nuclear stuff too, right? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: You've got one in the vault. Wait, Kristen, what about you, since you've worked-- AUDIENCE: We had one with a little crystal in it. MICHAEL SHORT: That's exactly what this is. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yeah. And you can shake it. It rattles, right? AUDIENCE: Yep. [INAUDIBLE] MICHAEL SHORT: Exactly. So this is how these work. This stands for a Thermoluminescent Dosimeter. And converting from Latin to English, that means if you heat it, it produces light proportional to the dose that it gets. So these little crystals, aluminum oxide or some sort of a salt or what have you, something that creates permanent ionized defects, they will relax when you heat them, giving off light. And then you use a light counter to tell how much dose this crystal has received. And by putting different filters in the way for different pieces and looking at the light coming from different parts, you can tell different types of dose. So one works for betas, because betas can get through the little hole in the detector, whereas this may help you figure out gamma rays or neutrons of different energies. And who here has seen one of these ring badges? Yeah, so again, the reactor trainees. Great. If you shake them, you hear a little rattling. Try it next time. They're usually pretty loose. It's a cheap plastic crappy casting with a thermoluminescent crystal on the inside. So again, shake it. It should rattle. If it doesn't, the crystal might be missing, and you should probably get a new one. So when you read a TLD, you use this fancy machine. All it does is heat it very carefully, and allow the electrons that are trapped at a higher energy to jump back down, emitting visible light. That's it. 
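To first order, the TLD readout just integrates the light and scales it by a calibration factor obtained from a known exposure. All numbers below are invented to illustrate the bookkeeping, not real reader output.

```python
# A TLD reader heats the crystal and counts the emitted light; the dose is
# (to first order) the integrated light times a calibration factor found by
# exposing an identical crystal to a known dose.
def dose_from_glow_curve(light_counts, cal_mgy_per_count):
    return sum(light_counts) * cal_mgy_per_count

# Calibration: a known 10 mGy exposure produced 50,000 total counts.
cal = 10.0 / 50000.0

# Unknown badge: glow curve binned by readout temperature step.
glow = [1200, 8400, 15300, 9100, 2000]
print(round(dose_from_glow_curve(glow, cal), 2), "mGy")  # -> 7.2 mGy
```

The filters over different pieces of the crystal mentioned above just give you several of these readouts at once, one per radiation type that can reach each piece.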
We've seen this process before. What happens after the photoelectric effect? Electrons fall down an energy level and release light in the form of X-rays or visible light. Same thing. And then let's talk about how you can use dosimeters for dosimetry in medical applications. And let's take the example of proton beam therapy, the new and upcoming replacement for X-ray therapy. It relies on the fact that the stopping power of protons is extremely low, until they reach the end of their range, like we talked about here. So again, we use protons as armor-piercing bullets to get through the person, drilling a little hole in the process, and exploding once they reach the tumor. It's a nice quirk of physics. It's really an elegant use of the stuff in 22.01 to do damage where you want it to. There's all sorts of other methods of cancer treatment. Let's say it's simple, and you can just go with excision, which means cutting it out. Chemotherapy, which is a pretty nasty process, and is usually used as a backup for radiation therapy to catch whatever else might be floating around. X-ray therapy, which is still used a lot, but hopefully will get phased out a bit. Brachytherapy, where you implant a little seed of a beta emitter into an area near where the tumor is. Let's say you have an easy route of entry, like that way. Then you can implant one of these seeds right where the tumor is. But if you can't have an easy route of entry, that's where proton therapy comes in. And as we've talked about with X-ray therapy, if you want to damage the tumor more than the rest of the brain or the rest of the person, you have to come in from different angles so that the sum of all the doses at this target is more than anywhere else. And X-ray therapy tends to do a lot of damage. Well, you hear of X-ray therapy causing hair loss in people. Well, yeah. You're going through the head. It's not good for your hair follicles, either. And then there's proton therapy. 
We've talked about it before, but not in the strict medical sense. And actually I'm going to reveal to you guys an invention that we've got out of our department that might help this go a little better. The way it works is you start with a cyclotron, which I've already explained, something that accelerates charged particles to about 250 MeV. And we have one of these across the river at Mass General Hospital. Send them through bending magnets, and bend them up so that they hit the patient. Then you move the patient on this gurney or table, and the gantry can rotate anywhere, to come in from any entry point, minimize the dose to the rest of the patient, while frying the tumor. AUDIENCE: Is that a scale diagram? MICHAEL SHORT: Yep, quite big. Yeah. It's pretty much to scale, yeah. So time on this instrument runs in the thousands of dollars per hour, if you go through the back door and know the folks that run the thing, and it's not being used for cancer treatments. Proton therapy can run in the hundreds of thousands of dollars. This is one of the millions of reasons we have medical insurance. It's because when you need it, you want it. And these cyclotrons aren't cheap. The way they work is pretty simple. You inject ionized particles through these D magnets, and they go faster and faster and faster every time they cross this electric field. They bend in larger and larger tracks through these magnets as their energy increases. Then they exit out the other side, getting delivered to the patient. And why protons versus X-rays? Well, I made a quick Desmos graph. Let's say you started off with an equivalent dose of protons and X-rays, and you're trying to get to a 40 millimeter deep tumor. This is why. This is the amount of dose that the X-rays would give compared to the protons in this highlighted tumor region. Then you look at the dose that X-rays and protons give to the rest of the person. It should be graphically obvious. 
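The Desmos comparison can be reproduced as a toy model. Both curves below are crude illustrations, normalized to the same entrance dose: X-rays deposit dose roughly as a decaying exponential, while a 1/sqrt(R - x) shape stands in for the proton's stopping power climbing toward its Bragg peak. The attenuation coefficient and range are invented numbers.

```python
import math

MU = 0.05      # 1/mm, illustrative X-ray attenuation coefficient
RANGE = 41.0   # mm, proton range set just past the 40 mm tumor

def xray_dose(x):
    return math.exp(-MU * x)

def proton_dose(x):
    if x >= RANGE:
        return 0.0
    # normalized so the entrance (x = 0) dose is 1
    return math.sqrt(RANGE) / math.sqrt(RANGE - x)

tumor = 40.0   # mm deep
print(round(xray_dose(tumor), 3))    # X-ray dose left at the tumor: 0.135
print(round(proton_dose(tumor), 3))  # proton dose at the tumor: 6.403
print(round(proton_dose(10.0), 3))   # proton dose en route: 1.15
```

The numbers tell the story: by the tumor, the X-ray beam has already dumped most of its dose into healthy tissue, while the proton beam delivers several times its entrance dose right at the target and nothing at all beyond it.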
And you can do some tricks with proton therapy. If you have a lung tumor, you can vary the energy and time at each point, such that you give a uniform dose to the tumor. So you can move that Bragg peak by degrading the proton energy, by putting filters in line with the beam. You don't tend to like to change the energy of the beam. So you can put things in the way of the protons to slow them down. And because when the stopping power is very low, the proton speed's very high, and forward scattering is preferable, putting things in the way of the beam pretty much just slows them down, but doesn't change their direction. That assumption breaks down when the energy gets low. But when the energy gets low, you'd better be in the tumor anyway. And you want them to change direction and explode out and blast everything in sight. The problem with proton therapy is that humans are not biological organisms at static equilibrium. In other words, they're alive. They tend to move. Breathing is something I like to do every few seconds. Swallowing, maybe once every couple of minutes. And most of your organs move around and dance without you controlling them. It's really hard if you're trying to hit a tumor on a moving target. That is the main problem with proton therapy. The solution right now, I like to call it spray and pray. You fire into the person, hope that things don't move. We know that they do, though. Those proton beams are very narrow. And let's say, for abdominal patients, let's say you happen to be digesting something. Your intestines will just go krschlock like that and move lunch to where it's going. This is one of those reasons that they say don't eat anything before these procedures. If you're not actively digesting things, then your abdomen won't be moving as much. Thoracic patients, better known as the lungs or your thorax, if you're from France. My wife likes to-- I like to make fun of my wife. That's right. 
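The "vary the energy and time at each point" trick, stacking range-shifted beams into a spread-out Bragg peak, can be sketched numerically. The single-peak shape here is the same crude 1/sqrt(R - x) stand-in, and the ranges and weights are hand-tuned, invented numbers.

```python
import math

# Toy single Bragg peak: dose blows up near the end of range rng.
def peak(x, rng):
    return 1.0 / math.sqrt(rng - x) if x < rng else 0.0

# Four beams: ranges stepped back across the tumor by degraders, with the
# deepest beam given the most beam time (largest weight).
beams = [(41.0, 1.0), (39.0, 0.55), (37.0, 0.4), (35.0, 0.3)]

def sobp(x):
    """Summed dose of all range-shifted, weighted beams at depth x (mm)."""
    return sum(w * peak(x, rng) for rng, w in beams)

for depth in (30.0, 34.0, 36.0, 38.0):
    print(depth, round(sobp(depth), 2))
```

The printout shows the intended shape: the summed dose is roughly flat (within a few percent) across the 34 to 38 mm tumor region, while the proximal healthy tissue at 30 mm sits well below it.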
Because she still refers to this region as the thorax. And I was like, I am not a giant beetle. But it is the medical term for this. You tend to breathe. And if you actually measure how much you move when you breathe in and out-- let's say you're trying to fry a lung tumor. That's a tricky proposition. So how do you keep the protons on track? Ideally, some dosimeter would be able to determine absolute dose, and it would be able to-- where on this list of things is-- oh, there's even more things I'd want here. So ideally, if you'd want a proton dosimeter, you want it to be able to measure things, provide some data, not be orientation dependent, and things like not be toxic, be cheap to build, but also be able to turn on and off if the tumor moves out of range. And this problem hasn't been solved yet. Existing dosimetry methods include making calculations and hoping, which is what we do now. We do complex Monte Carlo calculations based on scans of the patient, try and map out how much energy is going to be lost and where to go in. There's conventional port films, which means you put a film on the entry of the patient, which gives you an idea of where the beam is, but not necessarily where the organ is. Let me get into some pictures of these things so you know what they look like. Anyone ever put one of these in your mouth before? The sort of electronic dosimeters, the X-ray imagers that you get at the dentist, you bite down on one of these, and you get X-rays of your teeth. That's great if you have a place to put them, but doesn't quite work for proton therapy. There are tissue equivalent gels, where you can cast a person in gel, fire in the proton beam, and see how deep it goes, and then hope that your tissue equivalent gel is equivalent to the tissue. Usually it's pretty good. There's silicon diodes. 
You can implant a tiny little diode or other semiconductor device near or in the patient, and measure the change in band gap, or the voltage required to turn on conduction in the semiconductor. The problem is you can only use them once. Once you irradiate a piece of silicon, it's irradiated. And then you would have to take it out and stick another one in with a big needle to keep going. There's optically stimulated luminescence, which means protons hit stuff. Stuff creates light. Light can be measured in real time. So you could implant a little crystal like this TLD, or Thermoluminescent Dosimeter, attached to a fiber optic cable. And in that way, you can measure the amount of light and preamplify it. PM means-- what is it-- Photo Multiplier tube. And use electronics and software to calculate that light and turn it into dose. Problem is, this scintillation, it's not very strong. There's a lot that can go wrong between where the radiation is deposited and where the light can be collected. There are also implanted MOSFETs, these metal oxide semiconductor field effect transistors. Same problem. You can look at the change in difference-- I'm sorry, the change in band gap as you irradiate these things, or in the MOSFET voltage. But again, you can only use them once. It's not like you can reset them. So the problem with all of these is we don't know what dose the tumor gets. If we know how much we need to fry it, but we don't know if we fried it, the cancer could recur. Or it may not respond to the radiation. These are liability terms to say it didn't work, but you can't sue us, because we don't know why. And you don't know why. Or you may apply too much dose to the surrounding tissue and induce secondary tumors. This is one of those things that's not talked about very much, except in medical circles, and my entire family happens to be in medical circles. So they confirmed, yeah, this is true. 
We don't know how often, when you treat a tumor, you induce another one that will pop up five years later in the same site. All you may think is, OK, it recurred, despite being dead for five years. It might not have recurred. It might have made a new one. You don't know the dose rate versus time. The existing in situ methods haven't worked very well. So we had another idea, that I'll go over in the last negative 1 minutes, which we call the integrating F-Center Feedback Dosimeter. We just got the patent filed on this. Not accepted, but filed with the US Patent Office. It's pretty simple. You send calibrated light into a little crystal of something that creates these color centers, or F-centers, when it's irradiated. Look at the light coming out. See what's absorbed. You know how much dose you've received. And so look at these three parts. A is just an alkali halide salt, better known as table salt, sodium chloride. B is some biocompatible casing so your body doesn't reject it. Calibrated white light source, fiber optic connection cables, and a spectrometer to read the absorption. These can be little compact USB spectrometers. And it relies on what's called F-centers. When you irradiate ionic materials, they change color. These defects produced by, let's say, blasting out a chlorine ion or a sodium or potassium ion are optically active, because you get differing regions of electron density, and you absorb certain wavelengths of light, changing their color. If you then send calibrated light through it, you can tell what happened, how much dose there was. And F-center creation versus radiation is extremely well known. We've actually done some preliminary tests on the Dante accelerator here to show that the amount of dose that you give in a fractionated cancer treatment, on the order of a kilogray or so, does cause the salt to respond very strongly and produce a color. And the best part is, they relax on their own. 
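The readout idea can be sketched with Beer-Lambert absorption. This is purely illustrative: assuming F-center density grows linearly with dose, transmission falls as exp(-k times dose times thickness), and the constant k below is invented; a real device would calibrate it for each salt and wavelength.

```python
import math

K = 0.8       # absorption per (Gy * cm), a made-up calibration constant
L_CM = 0.5    # crystal thickness in cm

def transmission(dose_gy):
    """Fraction of the calibrated light that survives the crystal."""
    return math.exp(-K * dose_gy * L_CM)

def dose_from_transmission(t):
    """Invert Beer-Lambert to recover the accumulated dose."""
    return -math.log(t) / (K * L_CM)

measured = transmission(2.0)            # a crystal that saw 2 Gy
print(round(measured, 4))               # darker crystal, lower transmission
print(round(dose_from_transmission(measured), 6))  # -> 2.0
```

In the actual scheme the spectrometer does this inversion at many wavelengths at once, which is what lets a stack of different salts report dose rate as well as total dose.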
After anywhere from 5 seconds to a few hours, the color just disappears, because the atomic defects relax. So you don't have to implant, remove, reimplant, remove. And you can enable dose rate information by putting multiple salts in a row that have different response levels to these protons. So by looking at the amount of absorption in each wavelength, you can tell not just the dose, but the dose rate, and calibrate your beam current. And I'm going to skip all the way to the end to say how this would work. So let's say you had one of these IF2D dosimeters implanted in your tumor, and your heart was beating, or your lungs were breathing, or you were digesting something. You could then feed back this information to the proton accelerator to shut the beam off when the tumor moves out of range and send in tiny little pulses to say, hey, are you there? Little microsecond pulses, because a microsecond of protons isn't going to do much. But if it senses the IF2D back in range, it then blasts continuously. And as soon as the IF2D says no more dose, it starts just putting those wake up pulses back on. So this would be the first feedback way to apply proton therapy without screwing it up. You could also install IF2D dosimeters near the tumor, outside the tumor, and play the world's first game of radioactive proton Operation. Don't hit the sides. If you wonder if your beam's on target, you then steer it until it doesn't hit any of the IF2Ds, and you know you're right through the gates on the tumor. Very important for certain sensitive tumors, like chordomas, spinal cord tumors, which tend to happen in infants and young children. There's various ways of treating those, other than removal of the neck, which you don't want to do. They're extremely difficult to operate on. You don't want to give radiation therapy. So highly targeted proton therapy like this, making sure that you fry the tumor without frying the surrounding spinal cord and medulla, would probably be the way to go for this. 
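The gating logic described here, probe pulses while the target is out of range, continuous beam while it is in range, beam off once the planned dose is delivered, can be written as a toy state machine. All names and numbers are invented to illustrate the control flow, not taken from the patent.

```python
def gate_beam(in_range_samples, planned_dose, dose_per_sample):
    """Toy IF2D feedback loop, one decision per dosimeter readout."""
    delivered = 0.0
    log = []
    for in_range in in_range_samples:
        if delivered >= planned_dose:
            log.append("off")      # dosimeter says: planned dose reached
        elif in_range:
            log.append("beam")     # target in range: blast continuously
            delivered += dose_per_sample
        else:
            log.append("probe")    # microsecond "are you there?" pulse
    return log, delivered

# Tumor drifts out of range mid-treatment (a breath, say), then returns.
samples = [True, True, False, False, True, True, True]
log, total = gate_beam(samples, planned_dose=4.0, dose_per_sample=1.0)
print(log)     # ['beam', 'beam', 'probe', 'probe', 'beam', 'beam', 'off']
print(total)   # -> 4.0
```

The key property is that the delivered dose ends up at the plan regardless of how the target moved; motion only stretches the treatment time.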
So I'm going to stop there, because it's exactly 10. It's also the perfect stopping point. And we'll pick up with background radiation tomorrow. I also want to let you guys know that we'll be doing our nuclear activation analysis irradiations Friday at the beginning of recitation, and we'll finish up recitation by doing the exam review.
MIT_2201_Introduction_to_Nuclear_Engineering_and_Ionizing_Radiation_Fall_2016 | 3_Nuclear_Mass_and_Stability_Nuclear_Reactions_and_Notation_Introduction_to_Cross_Section.txt

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: So I'd like to do a quick two or three minute review of the stuff we did last time to get you back into where we were. We were talking about different types of technologies that use the stuff you'll be learning in 22.01. Everything ranging from nuclear reactors for producing power, and the Cherenkov radiation that tells you-- well, that the beta particles are moving faster than the speed of light in water. It's a neat thing, too. I've actually been to this reactor at Idaho, to the spent fuel pool. Even the spent fuel, once it comes out of the reactor, is still giving off betas and still giving off Cherenkov radiation. And you can tell how long it's been out of the reactor by how dim the glow gets, which is pretty cool. So you can tell how old a fuel assembly is by the blue glow. So remember, I told you guys, if someone says, oh, you're nuclear, do you glow green? You can be like, no, it's blue. That's the right way of things. We talked about fusion energy and got into some of the nuclear reactions involved in fusion energy. And I'll be teaching you more about these today. Why fission and fusion work, it all has to do with the stability and binding energy of the nuclei involved. And that'll be the main topic for today, is excess mass, binding energy, nuclear stability. 
We looked at medical uses of radiation, from implanting radioactive seeds called brachytherapy seeds in certain places to destroy tumors, to imaging, to X-ray therapy, to proton therapy using accelerators, or cyclotrons, to accelerate protons and send them into people. If you remember, last time we ran the SRIM code, the Stopping and Range of Ions in Matter, and actually showed that protons all stop at a certain distance in tissue, depending on their energy and what you're sending them into. Let's see-- we talked about brachytherapy. We talked about radiotracers, and this is going to be one of the other main topics for today, is these decay diagrams, and figuring out not only what products are made, but what energy levels do the nuclei have, and how do you calculate the energy of the radioactive decay products and the recoil nuclei, which do take away some of the energy. We talked about one way to get rich. If you guys can figure out one of the ways to solve this moly-99 shortage. Right now, it's mostly made in reactors. The future has got to be accelerators or some sort of switchable device where you don't need to construct a reactor to make these medical isotopes for imaging, and tracers, and such. And finally, we got all the way up to space applications, shielding, crazy, different types of shielding, like electromagnetic shielding, to protect from high energy protons, all the way to radiothermal generators, which use alpha decay to produce a constant amount of energy on the order of one to 200 watts for like 100 years. And finally, to a different configuration of nuclear reactors, where you can design them to produce thrust, not necessarily electricity. And that's where we stopped on Friday. 
Oh, seven, eight-- OK, about half of you. For those who haven't, who here has not had a nuclear reactor tour? Oh man. OK. Well, you'll get one when you get to control the thing in early October. So you actually get to go down to the control room and see the rest of what's going on. So make sure to ask them, show us the beam port for silicon ingots. And I think I already told you the story about the poor UROP who held the ingots up to their chest, getting about 10 months of dose, which is not dangerous, but it meant that for 10 months, they could have no radiation exposure, and they had to answer the phone. So that's how we ensure safety around here. There's other ones that-- applications that you're probably carrying around in your pocket. You can use the fact that charged particles have very finite ranges in matter to separate little bits of that matter from other things. So this is actually how single-crystal sapphires can be separated in little slivers for protective phone covers, because sapphire is one of the most-- the hardest, or the most scratch-resistant materials there is. Single-crystal sapphire is exceptionally strong, and optically transparent, and expensive. I know that because on one of our experiments, we use a single-crystal sapphire window to see into reactor conditions at like 150 atmospheres, and 350 degrees Celsius, and pretty corrosive chemistry. So you want to use as little as possible. So you can use a big, expensive accelerator to limit the amount of sapphire that you use. And this is actually done here in Boston. There's a facility not far from here that uses an accelerator. And this is their super detailed diagram of what the radiation looks like-- yeah, whatever. But what they do is they take-- they accelerate protons. They send them through bending magnets to steer that beam path. And then they send them into a large piece of single-crystal sapphire, which is exceptionally expensive to make. 
And they can actually lift off a thin sliver with micron precision. The reason for that is the same reason that we showed with that SRIM code. If you have this exact energy of protons going into well-known matter, you know what its range is going to be with an uncertainty or so of about a micron. So you can have things that come out thin, uniformly thick, and smooth, right away. There's some other really wacky products-- like has anyone heard of these betavoltaic batteries? No? They rely on beta decay or the direct capture and electricity generation from a radiation source like radioactive tritium. So in this little chip is about 2 curies worth of tritium. You guys will learn in about a week, how to go from activity in curies to mass, or something like that. And so this chip actually contains a lot of radioactive tritium that directly creates electricity. So you can hook into that chip and produce nanowatts for years. So it's one of these batteries that lasts-- well, as long as a couple half-lives of the isotope that's inside. Now there's a trade-off here. The shorter the half-life, the more active a given isotope will be for the same number of atoms, but the shorter it will last. So you can have higher power for lower time, or lower power, higher time. It's the classic energy trade-off-- works the same way with irradiation. And so now I wanted to get into some of the more technical stuff, where we'll be talking about nuclear mass and stability. And this is where the nuclear stuff really begins in 22.01. First, I want to make sure that we all agree on notation. So I'll be writing isotopes in this sort of fashion, where we refer to A as the atomic mass, or just the total number of nucleons. This is not the exact mass of a nucleus. It just refers to the sum of the protons and neutrons in the nucleus itself. 
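The activity-to-mass conversion previewed above, "how to go from activity in curies to mass," can be sketched for that 2-curie tritium chip. The only physics needed is N = A divided by lambda, with lambda = ln(2) divided by the half-life; the half-life (about 12.32 years) and atomic mass (about 3.016 g/mol) of tritium are standard tabulated values.

```python
import math

CI_TO_BQ = 3.7e10      # decays per second per curie
AVOGADRO = 6.022e23
YEAR_S = 3.156e7       # seconds per year

activity_bq = 2.0 * CI_TO_BQ                  # the 2 Ci tritium chip
lam = math.log(2) / (12.32 * YEAR_S)          # decay constant, 1/s
atoms = activity_bq / lam                     # N = A / lambda
mass_g = atoms / AVOGADRO * 3.016             # moles times molar mass

print(f"{atoms:.2e} atoms")       # about 4e19 atoms
print(f"{mass_g * 1000:.2f} mg")  # -> about 0.21 mg of tritium
```

A fraction of a milligram of tritium is enough for 2 curies, which illustrates the half-life trade-off in the text: short-lived isotopes pack a lot of activity into very little mass, but burn through it sooner.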
And a lot of what we'll be talking about today is the difference between this nice integer mass number, and the actual mass of the nucleus, and that difference is given by the binding energy, or the excess mass, which are directly related. Z is just referred to as the atomic number, or the number of protons. It's what makes the element what it is, which makes the name kind of redundant. But it's-- humans learn by association. It's easier to remember element names or symbols than which element is which just by the number of protons. So a lot of times we'll use the name, or at least the symbol, just so we know what's going on. And anything up here is some sort of a charge. I do want to warn you guys of the dreaded multiple symbol use or multiple use of symbols. I'll try to stick with this: lowercase q will be charge. And uppercase Q is going to refer to the Q value, or the energy consumed or released by a nuclear reaction. So they're both q's, but we're going to keep one uppercase and one lowercase. And like we mentioned before, let's say we were to write a typical nuclear reaction, like the capture of neutrons by boron to produce lithium-7, helium-4, better known as an alpha particle, and some amount of energy. There's two places where we actually use this reaction. One of them is as control rods. A lot of reactors use boron carbide, or this compound B4C, which is conveniently solid, fairly dense, and contains a whole lot of boron in one place. Specifically, enriched in boron-10, because boron-10 has a high cross section, or probability, for neutron capture. And the other one is in what's called boron neutron capture therapy. Have I discussed this with you guys already, BNCT? Good, because that's what we'll be talking about for a few slides. And to write this whole reaction is the same thing as writing this shorthand nuclear reaction. So this is often how you see them in the reading, and in papers, because it's shorter to do that. But it's the exact same thing.
So I have a couple of questions for you guys then. I have this extra Q here. Where does that Q actually go? So let's say boron absorbs a neutron, and it produces two nuclei with different binding energies. What happens to the excess energy created from the conversion of mass to energy? Yeah, Alex? AUDIENCE: That could be heat. MICHAEL SHORT: Yep. And heat, and more specifically? AUDIENCE: Kinetic energy. MICHAEL SHORT: Kinetic energy of the radiation released. And so that kinetic energy is actually used to our benefit in BNCT, or Boron Neutron Capture Therapy. The way this works-- once I hit play-- is you can either-- you can use any sort of source of neutrons, either a reactor or an accelerator, through a fairly complex chain of events, like this. In this case, an accelerator-- so you don't need a whole reactor-- fires a beam of high-energy protons into a beryllium target. Does that sound fairly familiar? Firing something at beryllium, releasing neutrons, like what Chadwick was doing? Except he was firing alpha particles into it. This releases neutrons. What they don't have labeled here is slowing down stuff, or probably hydrogenous material, so that the neutrons slow down to a lower energy. And their probability of capture increases or their cross section increases. And if you don't know what a cross section is, the definition is two slides away. And the idea here is that these neutrons then enter the brain or wherever the tumor happens to be. And we rely on the fact that tumor cells consume resources much faster than regular cells, especially neurons, which after you're about five, don't tend to grow very much. So it's all downhill by the time you enter kindergarten. And we use that to our advantage so that the neutrons coming in will hit the cancer cells, which will preferentially uptake the borated compounds, leaving most of the normal cells intact. And the difference in dose can be a factor of 5 or a factor of 10.
So that the cancer gets fried while doing as little damage as possible to the remaining brain cells, of which we have fewer and fewer every day. So I say statistically speaking, you guys are probably smarter than me if we go by number of neurons in your brain, because I think I'm the oldest person in the room. And so now we can start to explain, how does BNCT work, and why did we make the choices that we did? For example, they use 30 MeV protons in order to induce these neutrons. So we have a nuclear reaction that looks something like this. We start off with beryllium-9 plus a proton-- let's just call it hydrogen to stick with our normal notation-- and becomes-- well, can someone help me balance this reaction? We know we get a neutron. What else is left? So Monica, what would you say? AUDIENCE: Let's see-- MICHAEL SHORT: Even just say number of protons and neutrons, and we'll figure out the symbol later. AUDIENCE: Number of protons should be-- MICHAEL SHORT: Sorry, oh, that's a 4. AUDIENCE: The number of protons should be five. MICHAEL SHORT: Yep. Five protons, which means it's boron. And number of neutrons in the nucleus? Someone else? Yeah? AUDIENCE: Four. MICHAEL SHORT: Four, for a mass number of nine. So we have boron-9. Not a stable isotope of boron, but it doesn't really matter, because boron-9 almost immediately decays into an alpha, and an alpha, and a hydrogen. But this nuclear reaction right here is what we'll be studying for a little bit. And there'll also be some amount of energy. And this Q can actually be positive or negative. No one said there had to be energy released in a nuclear reaction, because in this case, we actually start off with 30 MeV protons and roughly zero MeV beryllium. If you want to get really exact, it's on the order of about 0.01 eV, which is why we neglect the kinetic energy of beryllium at room temperature. There are other reactions that when you fire a proton into them will produce neutrons, such as proton absorption by lithium.
But can anyone think of why we'd want to use beryllium instead of lithium? Kristin, what do you think? What could be a bad thing about using lithium? You ever throw it in water? AUDIENCE: No. MICHAEL SHORT: OK. Then I should show you what happens when you throw it in water. There's a few bad things about lithium. It does this when you throw it in water. It's one of the alkali metals. It's got an awfully low melting point. It reacts with oxygen to produce an oxide almost instantaneously. So if you ever take a lithium battery apart, which you shouldn't, but if you watch the video of somebody else doing it, you'll see that the lithium foil turns black almost instantly. It also has a pretty poor thermal conductivity and doesn't hold its structural integrity when it melts. So it's not that good of a target to use. Beryllium's pretty cool in that it's the lightest structural material there is. Folks tend to make satellites out of it, because it costs a lot of money to launch things into space. And if you want something that has a high melting point, and is light, and is structural, beryllium's your way to go. It also happens to be a great neutron generator. And then why 30 MeV? In this case, we're going to use a table called JANIS, which I've got open over here. And I just have to clone my screen so you can see it. This is a resource that I think you guys are going to be using quite a lot in this course. We have a link to it on the learning module site. And I'm going to show you how it works right now. So I tend to use the web version because it works on any browser, any computer. And now you can start to pick which nuclear reaction you're looking at. And you can get tabulated cross sections. So I'm going to start by zooming all the way out. We can pick our incident particle. Since in this case, we're looking at the firing of protons into beryllium, I'll pick the incident proton data right here. There's a lot of different databases with sometimes conflicting information.
I tend to go with the most recent one you can find. And click on cross sections. And this is, again, another table of nuclides, anything in green there's data for. Anything in gray, there isn't. So let's go all the way back to the light nuclei, zoom in, go back down to the light nuclei again until we find beryllium-9. Double-click on that, and let's look for the anything cross section. And this is a pretty wide energy scale. So you can actually change your X minimum and maximum. So let's change it to a minimum and maximum-- I don't know-- a maximum of 50 MeV. We don't have to see all of that other stuff going on. 50 MeV and maybe a minimum of 10. If you notice-- actually, I'll go back to 1. And I want to point something out. Let's see-- you can actually get a good yield of neutrons by firing protons at beryllium at lower energies. But I notice there's this interesting feature right around there. The cross section's flatter. And so if you want to ensure that you get the right dose, you might want to deal with a flatter cross section or a flatter probability region, so that you have something more predictable instead of being in a really high-slope region. But some of these nuclear reactions actually take extra energy in order to move forward. And we'll show you another example pretty quickly. Let's go back to our slides here. Then another question is, how does the boron only get into the cancer cells? Like we mentioned before, cancer cells are actively growing, which means they need a very large and active blood supply. And so it's one way for things to, let's say, not quite cross the blood-brain barrier. If the cancer cells are growing and your neurons aren't, then your cancer cells are going to use more energy, take in more sugar, which might be doped with boron, or some other compound doped with boron, and that's how you can get the boron into the cells that you want.
And then why was boron selected for the therapy? Let's think about that. What happens after the neutron is created? And let's write the next stage of the reaction. In boron neutron capture therapy, we rely on doping the patient with boron-10 to release an alpha particle, and lithium-7 and a gamma ray. So now what we can start doing is look at the table of nuclides, which I'm going to teach you how to read now, to figure out-- let's say that this neutron had an energy of about zero eV and the boron nucleus had an energy of about zero eV. And in the end, all this stuff here has gained or lost some sort of energy, Q. And today we're going to teach you how to calculate this Q. So I want to skip ahead to how to read the table of nuclides. So there's all-- this is like the poster you'll see in every nuclear building. It's kind of what makes us, us. What you'll notice is that there's a whole lot of nuclei at the lower left. They are the light ones. At the upper right, they are the heavier ones. And they're colored by half-life. In general, the blue ones will be stable, and the further away you get from blue, the less stable they get. So right away, without even delving deeper, what patterns do you guys notice here? Yeah, Alex? AUDIENCE: As they get bigger, heavier, they're more unstable. MICHAEL SHORT: Yeah. There's a whole section where there's no more blue. There are no stable elements. So stability drops off after a certain point. And what about in the region of stable isotopes? Does anybody notice any repeating patterns here? Take a look at every other row. There's a bunch of blues and then one, and then a bunch and then one, and then more and then none. That must be technetium, because that's the only element around there that doesn't have any. And then a bunch of blues. So every other row-- and in this case, it's increasing number of protons-- has more or fewer stable isotopes.
It turns out that the even-numbered isotopes have a lot more stable ones, for reasons that we'll get into pretty soon. If you zoom in a little bit, you can see all the different isotopes so you can select which ones you want. And again, if you look really closely, that's-- let's say, neon right here has got a few stable ones. Sodium has one. Magnesium has three. Aluminum has one. And this pattern repeats all the way up to the point where you don't really get any more stable isotopes. If you double-click on one of them, you get all the information that you'll need for the next three or so weeks of the course. So in this case, I picked on sulfur-32, one of the stable isotopes of sulfur. So if you notice it doesn't have any decay mechanisms here, but it does say its atomic abundance. So you can know how-- what percentage is normally found in nature. And then there are a few other quantities that are going to be the topic of what's going on here. Let's start with the atomic mass. If you notice, the atomic mass is slightly less than 32, 32 being the mass number, or the total number of protons plus neutrons in the nucleus. The actual mass is a little lower by that amount right there, the excess mass. It might be a little funny because I've given you a mass in AMU, and a binding energy in kiloelectron volts. I want to remind you that these are the same thing. The conversion factor you'll be using over and over again throughout this course, especially on the next p sets, is one atomic mass unit is 931.49 MeV per c squared. Yeah-- I'm sorry. Yeah, never mind, put that there. So then, again, one, don't round-- because we've had times when folks said, ah, this is about 931. And when you're off by half an MeV, you could be at a totally different decay level or get a positive Q when it should be negative, or vice versa. And let's take a quick look here to say, if this atomic mass is 31.9720707 AMU-- this is why I brought a calculator.
Normally I do mental math, but since I told you guys don't round, I can't do eight significant digits in my head. So I'm going to get that in there-- 0707. If any of you guys want to follow along, I encourage you to. And say minus 32, which is the mass number. So in this case we're taking the actual atomic mass minus the actual-- the mass number. In this case, it's 32. In this case it's 31.9720707. And we end up with-- I'm going to put all the digits here-- minus 0.0279293 AMU. If we convert this to MeV-- times 931.49, we get minus 26.0159 MeV. See this number anywhere on the KAERI table? Right there-- that's the excess mass. And in this case, we usually give this the symbol delta for the excess mass. And these are how these quantities are directly related. The excess mass-- well, actually, what does the excess mass really mean? It's the difference between the actual mass and a fairly poor approximation of the mass. So the excess mass doesn't really have that much of a physical connotation. But it is nice, because if you know very well the tabulated atomic number-- I'm sorry, the-- yeah, the mass number and the excess mass, you can figure out-- let's see-- yeah, you can figure out what the real atomic mass is. And I want to switch now to the actual table of nuclides and show you one example. If you want to very quickly jump between isotopes, you can type them in right up here. And does anyone know what the gold standard for atomic mass is? And I'll give you a hint, it's not gold. Yep? AUDIENCE: Carbon-12. MICHAEL SHORT: Carbon-12. What do you think the excess mass of carbon-12 is going to be without doing any calculations? AUDIENCE: Zero. MICHAEL SHORT: Exactly. Zero. So if we go to carbon-12, because that is set as the standard, the way atomic masses were done was carbon-12 weighs exactly 12 AMU. The excess mass here is zero. And that's why the atomic mass is 12.0 to as many decimals as we care to note. So is everyone clear on what excess mass is? Yep?
AUDIENCE: What's the point of c squared for that conversion? MICHAEL SHORT: So mass does not actually equal-- oh right, and it's actually on the-- where did my chalk go? It's actually down below. The point is that energy is related to mass by c squared. So they're not the same units, but they're directly convertible. AUDIENCE: OK. MICHAEL SHORT: Yep. And so this way, you have an E over a c squared. You get an m, and there we go. I had the units upside down. However, carbon-12 does not have a zero binding energy. Yeah, Luke? AUDIENCE: How come when you did that calculation, you didn't use the c squared? So like, it seems like then that would be 26.0159-- MICHAEL SHORT: MeV per c squared. Yeah. AUDIENCE: But they don't say that up there-- or it didn't say that on the table. MICHAEL SHORT: Yeah, that's true. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: So it is funny, right? The binding energy is given in keV, and that's correct. An energy is an energy. An excess mass, it really should say keV per c squared, because if we're talking in units of mass, it's got to be in m. Or in this way, you could say m is an energy per c squared. So this, to me, is a semantic inconsistency in the table. But you guys will know that a mass is always going to be in AMU, or kilograms, or MeV per c squared. And energies will be in MeV, keV, some sort of eV, usually, in this course. The binding energy, though, that's correct. That's in keV, because that's an actual energy. Now then the question is, what does the binding energy actually represent? Does anyone remember from Friday or Thursday? I can refresh your memory, because that's what I'm here to do. The binding energy is as if-- let's say we're assembling carbon-12 from its constituent nucleons. There's going to be 12 of them. Let's say we had six protons and six neutrons. We can calculate the total mass energy of this ensemble of nucleons when they're infinitely far apart from each other. And forgive the little-- it's not to scale.
But they are infinitely far apart from each other. And we can say that-- let's say there is Z number of protons. So we'll say the binding energy is the number of protons times the mass of a proton plus the number of neutrons-- A minus Z-- times the mass of a neutron minus the energy of the assembled carbon-12 nucleus. So there's actually a measurable difference in mass between six protons and six neutrons, and the actual mass of a nucleus with atomic number A and-- I'm sorry, with atomic number Z and mass number A c squared. So is everyone clear on how we arrived at this formula? It's effectively the energy released when you take the individual nucleons, assemble the nucleus. You don't have as much mass as when you started. Or in some cases, you might have a little more mass than when you started if things are particularly unstable. And you can use the excess mass and binding energies in relative amounts to see, is a nucleus going to be stable? For example, let's look at iron-55. I'm going to jump here, make it a little bigger so the important stuff is easier to see. And if you notice, the binding energy of iron-55-- there's quite a bit of it. It's very well-bound. In fact, this is one of the most well-bound nuclei in the whole chart of nuclides. Let's look at something that we know to be particularly unstable. Someone have any idea? Let's just add like 20 neutrons to iron; let's see if it even exists. No-- doesn't happen. Let's try adding 10 neutrons to iron-- or go even crazier. What about 70? Too small-- all right, let's meet somewhere in the middle-- 68. Still a pretty high binding energy, but you can look at it as a difference in binding energy per nucleon. So in this case, the binding energy per nucleon-- if you take the binding energy and divide by the total number of nucleons, will give you a relative measure of how tightly bound that nucleus is. Now these are not absolute things.
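As a quick numerical sketch of that binding-energy formula: the masses below are the standard tabulated values, and helium-4's atomic mass (4.002602 AMU) is an assumed reference number used here as a check, not one given in lecture.

```python
# Binding energy from the board formula:
# BE = Z*m_H + (A - Z)*m_n - m_atomic, converted at 931.49 MeV per AMU.
M_H = 1.007825    # AMU, hydrogen-1 atom (proton plus electron)
M_N = 1.008664    # AMU, free neutron
AMU_TO_MEV = 931.49

def binding_energy_mev(z, a, atomic_mass_amu):
    """Total binding energy of a nuclide, in MeV, from its atomic mass."""
    return (z * M_H + (a - z) * M_N - atomic_mass_amu) * AMU_TO_MEV

be_he4 = binding_energy_mev(2, 4, 4.002602)
print(f"He-4: {be_he4:.2f} MeV total, {be_he4 / 4:.2f} MeV per nucleon")
```

Dividing by the mass number gives the binding energy per nucleon, the relative measure of tightness of binding described above.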
You can't just say, certain binding energy leads to certain stability, but they do give you pretty good trends to follow. And we're actually going to be coming up with-- probably on Thursday-- a semi-empirical formula to get the rough binding energy of any particular assembly of protons and neutrons. And it follows experimental measurements pretty well-- surprisingly so. I want to jump back to here, because I've mentioned cross sections, and I want to actually define what a cross section is, because this is a quantity that you're going to be using everywhere. Let's say that we fired a beam of particles-- it doesn't matter what it is-- at a target of other particles. Let's say, the beam particles are atom A, and the target particles are atom B. And once these A particles pass through the target B, a little bit fewer of them come out the other side unreacted, or unscathed, or unscattered. And some of them are absorbed, or scattered, or bounced off, or scattered backwards, or what have you. We can write the sort of proportionality constant between the change in intensity of our A beam and the thickness of our slab. And we give that proportionality constant this symbol, little sigma. We'll get something going up here. Little sigma, which we call the microscopic cross section. It's in effect, a constant of proportionality that relates the probability of absorbing an atom from this beam I-- or from this beam of atoms A through a slab of B. And then if you take this formula, you divide by that delta X-- so I'm going to take what's on there and say delta I over delta X equals minus cross section ABn-- which refers here to the number density. So I'll keep our table of symbols altogether so it's a little easier to follow. n is our number density, which means the number of atoms per unit volume. Usually, in nuclear quantities, we use centimeters because these are things that are actually fairly measurable, and cross sections are actually in units of centimeters squared.
And let me finish that expression. We had the number density of our target B. We had our initial intensity, and that's it. Anyone know how to solve this differential equation? If we take the limit of small deltas, it should start to look like a differential equation. The final answer is up there on the screen. Does anyone remember the method to actually solve this differential equation? This is the easy one-- separate the variables. So in this case, we can divide each side by I of X, multiply each side by dX. I'm going to bring this up so I'm not bending down. So we have dI over I equals minus sigma ab n of B times dX. Integrate both sides and we get log of I equals minus sigma ab n of B X, and some integration constant. You can apply an initial boundary condition to say at X equals zero, the intensity of the beam was some intensity I naught. Whatever intensity of the beam that we initially fired at the target. And by combining these two, you end up with the expression you get right there, which is that the intensity of the beam coming out is the initial intensity times e to the minus sigma ab nbx. And we've kind of derived the idea of exponential attenuation. For those who haven't seen that word before, attenuation is the gradual removal of the beam of incident particles by whatever the target happens to be. This quantity right here, we actually have another symbol for it, which we give big sigma. And in this case, big sigma we call the macroscopic cross section. I'll draw a box around these so we know these are our symbols that we're keeping defined here. And so you may see that the microscopic cross section just depends on single reactions between the incoming atoms A and the target atoms B. The macroscopic cross section depends on how much B is there. So if you want to get per atom probabilities of absorption, scattering, whatever thing you're looking at, you use the microscopic cross section.
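The attenuation law just derived can be sketched in a few lines. The 1-barn cross section (a barn is the conventional unit, 1e-24 cm squared) and the number density below are made-up illustrative values, not numbers from the lecture.

```python
# Exponential attenuation: I(x) = I0 * exp(-sigma * n * x), where
# sigma * n is the macroscopic cross section (big Sigma), in 1/cm.
import math

BARN_TO_CM2 = 1e-24  # 1 barn = 1e-24 cm^2

def transmitted_fraction(sigma_barns, n_per_cm3, thickness_cm):
    """Surviving fraction of the beam, I/I0, through a slab of thickness x."""
    big_sigma = sigma_barns * BARN_TO_CM2 * n_per_cm3  # macroscopic, 1/cm
    return math.exp(-big_sigma * thickness_cm)

# Assumed example: 1-barn cross section, ~5e22 atoms/cm^3 target, 10 cm slab
frac = transmitted_fraction(1.0, 5e22, 10.0)
print(f"About {frac:.1%} of the beam gets through")
```

Note that only the product sigma-times-n-times-x matters, which is why shielding problems later reduce to picking a thick enough slab of a dense enough material.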
And if you have a finite amount of stuff there, and you know the number density of your substance B, you can use the macroscopic cross section to get actual total probabilities of beam attenuation-- or to calculate exponential attenuation. We're going to see this again in another form when we talk about designing shielding, and how much shielding do you need to remove how much of the beam? Well, this quantity right here, there's actually tabulated values for a lot of this stuff at the-- on the NIST website. And I have links to that as well on the Stellar website, so you can-- instead of having to look these all up on JANIS and multiply number densities, there are some easier graphical functions you can just find the value for. But we'll get back to that in a few days. So anyway, on reading the KAERI table, there's a few quantities right there. We've already defined what the excess mass and the binding energy is. And I want to note right here, if you want to actually calculate binding energies by hand, which I'm going to ask you to do a bit on problem set 2, you'll need to know what the mass of the proton, and the neutron, and the electron are to, again, usually like six or seven digits is the idea behind this course. Notice that they're not exactly one atomic mass unit, because one atomic mass unit, again, was set with that carbon-12 standard. I'm not going to use the word gold standard because that's a misnomer in this field. And so like I said, what does excess mass really mean, physically? Not much, because it's the comparison to an arbitrary standard or a rather poor approximation of the mass. The binding energy actually does represent the conversion of mass to energy when you assemble a nucleus like Voltron-style from its constituent nucleons. So let's try a few examples in class right here. I'd like you guys to follow around and try and calculate the binding energy of each of these three nuclei of sulfur. 
Let me get a better blank board so we can follow along. And there's a few different ways of calculating that binding energy. You can do it by the excess mass. You can do it by-- let's go back to the table of nuclides so I can show you how I would do it. Let's start with sulfur-32. And we'll write up the quantities that we're-- that we know. Let's say the excess mass is the actual mass minus the mass number. The binding energy is Z times mass of hydrogen plus A minus Z mass of a neutron minus the actual mass of that nucleus with AZ c squared. And then what we can do is rearrange this excess mass, isolating the mass term right here, and make a substitution. So we can say the mass is actually the excess mass plus A. Stick that in right there, and now we can calculate and confirm the binding energies that we see right here from tabulated excess mass values, atomic number, mass number, and the masses of a hydrogen atom and a neutron, which, for reference, I'll write up here as well. So the mass of a hydrogen is the mass of a proton plus an electron. So 1.007276 plus-- make that a little easier to read-- 0.00054858 AMU. Mass of a neutron, surprisingly close to Chadwick's prediction: 1.008664 AMU. So now I'll head back to the table of nuclides. And let's see if you guys can follow along. What we want to do is try to confirm this binding energy using the atomic mass, the excess mass, or if we don't even know the atomic mass, we can use the excess mass plus A right there. So let's see-- Z, in this case, for sulfur, is 16 times the mass of hydrogen. This is definitely a calculator moment, because like I said, I don't know about you guys, but I can't do eight significant digits in my head. 16 times 1.007825-- probably enough digits-- plus 16, because there's-- mass number here is 32. The atomic number is 16.
That leaves us with 16 neutrons times the mass of a neutron, 1.008664-- minus the excess mass, which in this case is 26.015 MeV-- 015 MeV per c squared. So thanks for that-- thank Jared for that question because, indeed, the excess mass, if you want to write it in terms of a mass, should be in MeV or keV per c squared minus A, which is 32 times c squared. So let's do all this out-- shouldn't take too long. 16 times 1.007825 plus 16 times 1.008664 minus 32 minus 26.015 divided by c squared. It's basically nothing. Gives us on the order of-- let's see-- times c squared. What did we get right here? AUDIENCE: Is the 26 negative? MICHAEL SHORT: Ah, let's see. I believe it is, because we have to subtract the mass, and we're substituting in this delta-- AUDIENCE: Isn't the delta negative? MICHAEL SHORT: Oh. Good point. There is a negative there. So that's minus negative that. And A is 32. Thank you. Yeah, good point. Let me try this again. Ah, I know what I'm doing wrong. This part right here, we want to convert to AMU. So we can take our minus-- thank you-- 26.015 MeV per c squared and divide by our conversion factor, 931.49-- let's see-- MeV per c squared per AMU. What does that give us? 26.015 over that. Negative 0.027928 AMU. Let's put that in and see how we do. So plus 0.027928 minus 32. And then we get 271.764 MeV. I think six digits is enough. The actual binding energy, 271.780 MeV, so we're off by 16 keV-- close enough. Also note that I used a five-digit accurate conversion factor. That might be part of the source of it. Does someone have a question? AUDIENCE: Yeah. In the equation on top, you did the atomic number times the mass of the proton, but in the one on the bottom, you used atomic number times the mass of the hydrogen including the electron. Is there-- MICHAEL SHORT: Oh yeah. I actually added the two. So the mass of an electron, since it's got that extra zero, makes so much of a-- so the mass of-- oh-- yeah.
The mass of hydrogen would be the proton plus the electron right there. AUDIENCE: Right. But why do hydrogen, though, if [INAUDIBLE] just the proton? MICHAEL SHORT: Oh, because there's an electron there, too. Now this can usually be neglected because it's such a small fraction compared to everything else. So now we're talking about-- what-- the fifth or sixth decimal place. But just for exactness, I stuck it on. Yeah, in your calculations, you can try with and without, and I think you'll find that it doesn't matter that much, because in the end we get the binding energy that we see on the table to within 16 keV for a total of-- yeah-- 271 MeV. That's pretty accurate. Yeah. AUDIENCE: If you wanted to calculate the like energy released from a reaction, would you do the binding energy for [INAUDIBLE] reactants that's trapped products for the reactants? MICHAEL SHORT: That's the next slide. We'll get right there. Yeah, so you're catching on to where we're going. So once you can calculate either the excess mass, or the binding energy, or the total mass of any nucleus, you can start to put them together into nuclear reactions. So since you asked, let's take a quick look at them. Where is our nuclear reaction board? Anyone mind if I hide this board for now, so we can go back to our original? OK. Let's take a look at this reaction right here, the actual boron neutron capture therapy reaction. And now we can get towards calculating this Q, what the difference is between the-- the total energies of the products and the reactants, and where does that go? So now we can either look up or calculate the binding energy of each of these nuclei, subtract off the energy of the gamma, which I've looked up already, is about 0.478 MeV. And we can figure out what the total Q of this reaction is. So in this case-- I'll skip ahead to the slide where I've got it because that way I won't write anything wrong on the board-- got everything right up here.
We assume that both boron and the neutron have roughly zero kinetic energy. And at the end, they come out with some other kinetic energies as well as this gamma ray. The sum of these energy differences, we refer to as Q. And we can actually confirm this total Q with a few different methods. In this case, as always, conserve something. That's the whole theme of this course: you can conserve total masses, you can conserve total kinetic energies. We may not know those, but tabulated in the KAERI table are the binding energies of each of these nuclei. So let's try that out right now. So let's look at the binding energies of each of these nuclei and see what the difference is, the total energy released. First of all, what's the binding energy of a lone neutron? Anyone have any idea? I see a lot of these-- zero. Yep. You haven't assembled a nucleus out of a lone neutron, so we'll go with the neutron has a binding energy of zero MeV. Boron, not quite the case, but we can go back to the table of nuclides and punch that in-- boron-10. We can look up its binding energy, which is about 64.7507-- I keep saying about, which is exactly 64.7507 MeV. And then our other two nuclei, helium-4-- so you can punch in helium-4 here. It's got a binding energy of exactly 28.295673 MeV. And finally, lithium-7, let's punch that in. I think you guys are going to get very familiar with this table. There's a few versions out there. There's a new slick Java version that I found a little hard to use. So I like the text-only version, because it's just as simple and fast as it gets-- 39.244526 MeV. So any sort of increase in total amount of binding energy between the reactants and the products is going to release or absorb energy. Now because boron does capture a thermal neutron, or a neutron with approximately zero eV of kinetic energy, does anyone have any idea whether this would release or consume energy? In other words, do you think this is an exothermic or endothermic reaction? Yeah, Alex?
AUDIENCE: I'm guessing that heat would be released through the material-- the capture material would be heated up. MICHAEL SHORT: OK. Indeed. If the total Q value is greater than zero, we refer to this as exothermic-- kind of like in chemistry. And if Q is less than zero, we refer to this as endothermic. So let's do our binding energy subtraction now. We want to figure out how much excess binding energy is released. So I'm going to take the reactants-- I'm sorry-- I'm going to take the products. So helium-- 28.295673-- add lithium-- 39.244526-- subtract boron-- 64.7507-- subtract the neutron, which is zero, and we're left with 2.79 MeV. And because it's positive, this is an exothermic reaction, which is what we'd expect, because this reaction actually happens. If this was an endothermic reaction, what could you do to make it occur? Yeah? AUDIENCE: Heat up the reactants. MICHAEL SHORT: Like with temperature, or what do you mean? AUDIENCE: Make them have higher kinetic energy or-- MICHAEL SHORT: There you go. So actually-- yeah-- you kind of said the same thing twice. Heating things up does give them higher kinetic energy. If you rely on temperature, you'll be imparting eV worth of kinetic energy. But if you accelerate them, or get them from a different nuclear reaction, and you get them up to the MeV level, where whatever this Q value could be might be negative, then you can get the reaction to occur. For example, what is the Q of that reaction? AUDIENCE: Negative 2.79. MICHAEL SHORT: Negative that. So in this case, if you want lithium to absorb an alpha particle, and make boron and a neutron, you would have to accelerate the alphas to that same amount of energy in order to get this to occur. So nuclear reactions do go both ways, just not as easily. Kind of like chemical reactions, you can drive them in different directions by changing the temperature or changing the concentration of the reactants. Here the concentration doesn't matter.
But the kinetic energy, related directly to the temperature, definitely does. And so in this case, it's 2.79 MeV. If I tell you the gamma ray takes off 0.478 MeV of that, we're left with 2.31 MeV between the lithium nucleus and the helium nucleus. Now my next question-- my last question for you today-- oh man-- is what's the split? I think I don't want to keep you longer, because it's one minute of 10:00. So this is the question that we're going to pick up with on Thursday, which is how much of the energy is taken off by helium? And how much is taken off by lithium? Sorry, I should have kept better track of the time.
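The Q-value arithmetic just walked through is easy to check in a few lines of Python. This is only a sketch of the bookkeeping, not anything from the lecture slides: the binding energies are the KAERI-table values quoted above, and the function and dictionary names are mine.

```python
# Q = (sum of binding energies of products) - (sum for reactants),
# for n + B-10 -> He-4 + Li-7 (+ a 0.478 MeV gamma).

BE_MEV = {          # binding energies quoted in lecture, MeV
    "n":    0.0,    # a lone neutron has nothing bound to it
    "B-10": 64.7507,
    "He-4": 28.295673,
    "Li-7": 39.244526,
}

def q_value(reactants, products):
    """Q > 0 means exothermic; Q < 0 means endothermic."""
    return sum(BE_MEV[p] for p in products) - sum(BE_MEV[r] for r in reactants)

q = q_value(["n", "B-10"], ["He-4", "Li-7"])
print(f"Q = {q:.2f} MeV")                           # Q = 2.79 MeV, exothermic

# the gamma carries off 0.478 MeV; the rest is kinetic energy
# split between the lithium and the helium nuclei
print(f"KE of Li-7 + He-4 = {q - 0.478:.2f} MeV")   # 2.31 MeV
```

Running the reaction backwards just flips the sign, which is the negative 2.79 MeV mentioned for the lithium-plus-alpha direction.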
MIT 22.01 Introduction to Nuclear Engineering and Ionizing Radiation, Fall 2016. Lecture 11: Radioactivity and Series Radioactive Decays. The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: So since I know series decay is a difficult topic to jump into, I wanted to quickly re-go over the derivation today and then specifically go over the case of nuclear activation analysis, which reminds me, did you guys bring in your skin flakes and food pieces? We have time. So if you didn't remember, start thinking about what you want to bring in, what you got. AUDIENCE: Aluminum foil. MICHAEL SHORT: OK, so you've got aluminum foil. You want to see what in it is not aluminum-- excellent. Well, what else did folks bring in? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: OK, rubber stopper-- sounds perfect. Anyone else bring something in? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: OK, so tell you what, when you bring stuff in, bring it in a little plastic baggie. I can supply those if you don't have them, with your name on them, just so we know whose samples are what, because that's going to be the basis for another one of your homeworks, where you are going to use the stuff that we're learning today to determine which impurities, and how much, are in whatever thing that you looked at. And, of course, you're not going to get all the impurities because in order to do that, we'd have to do a long nuclear activation analysis, irradiate for days, and count for a longer time. So you'll just be responsible for the isotopes on the shortlist, which we've posted on the learning module site.
So again, bring in your whatever, as long as it's not hair, because apparently that's a pain to deal with, or salty, because the sodium activates like crazy, or fissionable, which you shouldn't have anyway. I hope none of you have fissionable material at home. So let's get back into series decay. We very quickly went over the definition of activity, which is just the decay constant times the amount of stuff that there is. The decay constant's in units of 1 over second. The amount of stuff-- let's call it a number density-- could be in, like, atoms per centimeter cubed, for example. So the activity would give you the amount of, let's say, decays per centimeter cubed per second. If you wanted to do this for an absolute amount of a substance, like you knew how much of the substance there was, you just ditch the volume. And you end up with the activity in decays per second. That unit is better known as becquerels or Bq, named after Henri Becquerel, though I don't know if I'm saying that right. But my wife's probably going to yell at me when she sees this video. But so becquerel is simple. It's simply 1 decay per second, and there's another unit called the curie, which is just a whole lot more decays per second. It's a more manageable unit because the activity of many things in becquerels tends to be in the millions or billions or trillions or much, much more for something that's really radioactive. And it gets annoying writing all the zeros or all the scientific notation. And so last time we looked at a simple situation-- let's say you have some isotope N1 which decays with decay constant lambda 1 to isotope N2, which decays with decay constant lambda 2 to N3. And we decided to set up our equations in the form of change. Every one is just: change equals production minus destruction, for all cases. So let's forget the activation part. For now, we're just going to assume that we have some amount of isotope, N1.
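Those definitions — A = λN, with λ = ln 2 / t½, and 1 Ci = 3.7 × 10^10 Bq — can be sketched in a few lines. The sample size and half-life below are made-up illustrative numbers, not anything from the lecture.

```python
import math

CURIE_BQ = 3.7e10    # 1 curie = 3.7e10 decays per second

def decay_constant(half_life_s):
    """lambda = ln(2) / t_half, in 1/s."""
    return math.log(2) / half_life_s

def activity_bq(lam, n_atoms):
    """A = lambda * N: decays per second, i.e. becquerels."""
    return lam * n_atoms

# hypothetical sample: 1e15 atoms of something with a 1-hour half-life
lam = decay_constant(3600.0)
a = activity_bq(lam, 1e15)
print(f"{a:.3e} Bq  =  {a / CURIE_BQ:.2f} Ci")
```

Even this modest sample comes out near 10^11 Bq, which is exactly why the curie is the more manageable unit for anything really radioactive.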
We'll say we have N10 at t equals 0. And it decays to N2, which decays to N3. So what are the differential equations describing the rate of change of each of these isotopes? So how about N1? Is there any method of production of isotope N1 in this scenario? No, we just started off with some N1, but we do have destruction of N1 via radioactive decay. And so the amount of change is going to be equal to negative the activity. So for every decay of N1, we lose an N1 atom. So we just put minus lambda 1 N1. For every N1 atom that decays, it produces an N2. So N2 has an equal but different-sign production term, and has a similar-looking destruction term. Meanwhile, since N2 becomes N3, then we just have this simple term right there, and these are the differential equations which we want to solve. We knew from last time that the solution to this equation is pretty simple. I'm not going to re-go through the derivation there since I think that's kind of an easy one. And N3, we know, is pretty simple. We used the conservation equation to say that the total amount of all atoms in the system has to be equal to N10. So we know we have N10 equals N1 plus N2 plus N3 for all time. So we don't really have to solve for N3 because we can just deal with it later. The last thing that we need to derive is what is the solution to N2. And I want to correct a mistake that I made, because I'm going to chalk that up to exhaustion: assuming that integration constant was zero. It's not zero, so I want to show you why it's not now. So how do we go about solving this? What method did we use? We chose the integrating factor method because it's a nice clean one. So we rewrite this equation as, let's just say, N2 prime plus lambda 2 N2 minus lambda 1 N1 equals zero. And we don't necessarily want to have an N1 in there because we want to have one variable only. So instead of N1, we can substitute this whole thing in there. So it's N2 prime plus lambda 2 N2 minus lambda 1 N10 e to the minus lambda 1 t equals zero.
And let's just draw a little thing around here to help visually separate. We know how to solve this type of differential equation because we can define some integrating factor mu equals e to the integral of whatever is in front of the N2-- that's not too hard-- just e to the integral of lambda 2 dt, which is just equal to e to the lambda 2 t. And we multiply every term in this equation by mu, because we're going to make sure that the stuff here-- after we multiply by mu and mu and mu for completeness-- that stuff in here should be something that looks like the end of a product rule. So if we multiply that through, we get N2 prime e to the lambda 2 t, plus lambda 2 e to the lambda 2 t times N2, minus e to the lambda 2 t lambda 1 N10 e to the minus lambda 1 t, equals zero. And indeed, we've got right here what looks like the end result of the product rule, where we have one function times the derivative of another, plus the derivative of that function times the original other function. So to compact that up, we can call that, let's say, N2 e to the lambda 2 t, prime, minus-- and I'm going to combine these two exponents right here-- so we'll have minus lambda 1 N10 e to the lambda 2 minus lambda 1 t, equals zero. Just going to take this term to the other side of the equals sign, so I'll just do that, integrate both sides. And we get N2 e to the lambda 2 t equals lambda 1 N10 over lambda 2 minus lambda 1, times all that stuff. I'm going to divide each side of the equation by-- I'll use a different color for that intermediate step-- e to the lambda 2 t. And that cancels these out. That cancels these out. And I forgot that integration constant again, didn't I? Yeah, so there's going to be a plus C somewhere here. And we're just going to absorb this e to the lambda 2 t into this integrating constant, because it's an integrating constant. We haven't defined it yet.
Did someone have a question? I thought I saw one. OK, and so now this is where I went wrong last time, because I think I was exhausted and commuted in from Columbus. I just assumed right away that C equals zero, but it's not the case. So if we plug in the condition at t equals 0-- N2 should equal 0-- let's see what we get. That would become a zero. That t would be a zero, which means that we just end up with the equation lambda 1 N10 over lambda 2 minus lambda 1 plus C equals zero. So obviously the integration constant is not zero like we thought it was. So then C equals negative that stuff. That makes more sense. So you guys see why the integration constant's not zero. So in the end-- I'm going to skip ahead a little of the math because I want to get into nuclear activation analysis-- we end up with N2 should equal lambda 1 N10 over lambda 2 minus lambda 1, times e to the minus lambda 1 t minus e to the minus lambda 2 t. And so since we know N1, and we've now found N2, we know N3 from this conservation equation. We've now fully determined what is the concentration of every isotope in this system for all time. And because the solution to this is not that intuitive-- like, I can't picture what the function looks like in my head. I don't know about you guys. Anyone? No? OK, I can't either. I coded them up in this handy graphing calculator where you can play around with the initial concentration N10, which is just a multiplier for everything, and the relative half-lives via lambda 1 and lambda 2. And I'll share this with you guys, so you can actually see generally how this works. So let's start looking at a couple of cases-- move this a little over so we can see the axes. Let's say don't worry about anything before t equals zero. That's kind of an invalid part of the solution. So I'll just shrink us over there. And so I've coded up all three of these equations.
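The closed-form solutions above drop straight into code. A minimal sketch (function name mine), with N3 obtained from the conservation equation N10 = N1 + N2 + N3 rather than from its own integral:

```python
import math

def series_decay(n10, lam1, lam2, t):
    """N1 -> N2 -> N3 with N1(0) = n10, N2(0) = N3(0) = 0.
    Requires lam1 != lam2 -- the formula blows up when they are
    exactly equal, which is why the demo later uses 1.001 vs 1."""
    n1 = n10 * math.exp(-lam1 * t)
    n2 = lam1 * n10 / (lam2 - lam1) * (math.exp(-lam1 * t)
                                       - math.exp(-lam2 * t))
    n3 = n10 - n1 - n2          # conservation of total atoms
    return n1, n2, n3

print(series_decay(1000.0, 0.5, 0.1, t=2.0))
```

At t = 0 this reproduces the limiting behavior discussed next: N2 starts at zero, but with slope lambda 1 times N10.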
There is the solution to N1 highlighted right there. That's, as you'd expect, simple exponential decay. All N1 knows is that it's decaying according to its own half-life, or exponential decay equation. And N2, here in the blue, which expands, of course, looks a little more complicated. So what we notice here is that N2 is tied directly to the slope of N1. That should follow pretty intuitively from the differential equations, because if you look at the slope of N2, well, it depends directly on the value of N1. For very, very short times, this is the sort of limiting behavior-- and the graphical guidance I want to give you to solve questions like what's on the exam, or how to do a nuclear activation analysis. Is everyone comfortable with me hiding this board right here? OK. So let's say at time approximately zero, we know that N1 is going to equal about N10. What's the value of N2 going to be very, very close to t equals 0? AUDIENCE: Zero. MICHAEL SHORT: Zero. So N2 is going to equal 0. But what's the slope of N2 going to be? This is how we can get started solving these graphically without even knowing what the real forms are. So we've already said that at a very short time, N2 is approximately 0. So if that's zero, then that whole term is zero, which means that the slope of N2 is approximately lambda 1 N1, just the activity of N1. And hopefully that follows intuitively, because it says for really short times, before you get any buildup, the value of N1 determines the slope of N2. So if we were to start graphing these-- let's just start looking at some limiting behavior-- that's t, and we're going to need some colors for this. Let's stick with the ones on the board. Oh, hey, awesome. Make N1 red, N2 blue, and N3 green. So let's start drawing some limiting behavior. So we know that N1 starts here at N10. And we know it's going to start decaying exponentially.
So the slope here is just going to be minus lambda 1 N1, which is going to be the negative of the slope of N2-- looks pretty similar, doesn't it? So we know N2 for very short times is going to start growing at the same rate that N1 is shrinking. So we already know what sort of direction these curves are starting to go in. How about N3? What's the value of N3 for very short times? Anyone call it out. Well, we've got kind of a solution right here. If we know that N2 is about zero for very short times, what would the value of N3 have to be? Also zero. And what about the slope of N3? Also about zero. If there's no N2 built up, then there's nothing to create N3. So we know that our N3 curve is going to start out pretty flat. Now how do we find some other limiting behavior? Let's now take the case-- let's see, I want to rewrite that a little closer here, so we have some room. So that's at t equals about zero. And at t equals infinity, what sort of limiting behavior do you think we'll have? What's the value of N1 going to be at infinite time? Zero-- it will have all decayed away; it will equal zero at t equals infinity. How about N2? AUDIENCE: Zero. MICHAEL SHORT: Zero. How about N3? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: N10-- correct. Because of that conservation equation right here. So we know for limiting cases, N1 is going to be 0. N2 is going to be 0. And N3 is going to be N10. So we've now filled in all the four corners of the graph just intuitively, without solving the differential equations. Now let's start to fill in some middle parts. What other sorts of things can we determine-- like, for example, where N2 has a maximum? That shouldn't be too hard. So let's make another separation here. So what if we want to find out when does dN2 dt equal 0? What do we do there? Anyone have an idea? Using the equations we have up here. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yeah, well, we can just take this equation right here.
We can figure that out in terms of N1. So if dN2 dt equals zero, then we know that lambda 1 N1 is going to equal lambda 2 N2. What this says intuitively is that the rate of production of N2 by decaying N1 equals the rate of destruction of N2 by its own decay. So at some point, the N2 is going to have to level off. When that point is depends on the relative differences between those half-lives. So we already know, if we were to just kind of fill in smoothly what's going to happen, N2 is probably going to follow something roughly looking like this. We already know the solution to N1. I think we can figure that out graphically. It's simply exponential decay. The only trick now is how does N3 shape up? What do you guys think? How would we go about graphically plotting these solutions without solving them? I don't think I've given you the full form yet. It's kind of ugly, and I doubt that if you looked at it you'd be able to tell me exactly what it would do. So this is just the mathematical expression of N10 minus N1 minus N2. So how do we figure out all the stuff about N3? Yeah. AUDIENCE: You could just draw a curve so that you get all three curves, and they always add to the same number. MICHAEL SHORT: Yeah. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Absolutely, that's totally correct. Yeah, if you just take N10 minus N1 minus N2, that gives you the value of N3. That's completely correct. So you could do that sort of one point at a time and say, well, maybe around there, maybe around there. It might take a little while, though. So I want to think, what's another intuitive way? What would the value or the slope of N3 track-- what other variable in the system? Or in other words, how are they directly related? Yeah. AUDIENCE: The slope of N3 and the value of N2-- are they equal? MICHAEL SHORT: Yeah. The slope of N3 depends directly on the value of N2 and nothing else. So initially, you can see the value of N2 is almost 0, so the slope of N3 is almost 0.
As the value of N2 picks up, so should the slope of N3, until we reach here. What happens at that point? AUDIENCE: N2 decreases. MICHAEL SHORT: Yep. The rate of production of N2 decreases because the val-- I'm sorry-- yeah, the rate of production of N3 decreases, because the rate of production of N3 is just dependent directly on the value of N2. So the maximum slope of N3 has to be right there, at which point it has to start leveling off and eventually reaching 0. You're going to see this kind of problem on the homework. You're going to see this kind of problem on the exam. I guarantee you. But it's not going to have this exact form. But what I'll want you to be able to do is follow this example. Let's say I pose you a small set of these first-order differential equations. Can you use any method that you want-- intuitive, graphical, mathematical-- to predict what the values and slopes of these isotopes are going to be as a function of time? So in order to get nuclear activation analysis right, you need to be able to do this. In nuclear activation analysis, there's just one twist. I'm going to move this over to add the twist. You're also producing isotope N1 with a reaction rate, by some other isotope, N0, that you put in the reactor. So if you want to know what your impurities N0 were, you undergo what's called nuclear activation analysis, and then you can figure out, depending on which one you count, what they could be. So this right here. Let's look at the units of this versus the units of this. First of all, if we're adding them together they'd better be in the same units, right? So we already talked about the units of this decay equation. It's like number of decays per second. So this reaction right here better give us a number of atoms produced per second, or we're kind of messed up in the units. So anyone remember what is the units of-- I'll make a little extra piece right here-- what's the units of microscopic cross sections, or barns?
What is that in some sort of SI unit? AUDIENCE: Centimeter squared. MICHAEL SHORT: Yep. It's like a centimeter squared. And what about flux? This one you may not know, but it definitely depends on the number of neutrons or the number of particles that are there. AUDIENCE: Is it barns [INAUDIBLE] MICHAEL SHORT: Almost. So the flux describes how many particles pass through a surface in a given time. So we have how many particles per unit surface, per unit time. Ends up being neutrons per centimeter squared per second. Just like the flux of photons through a space, or the flux of any particle through anywhere, it describes how many particles go through a space in a certain time. And then there's the number of particles that are there. If we're going with atoms, it's just atoms. These are all multiplied together. The centimeters squared cancel. And we end up with some sort of atoms-per-second produced. We can put in a little hidden unit in the cross section. If there is a reaction going on where in goes a neutron and out goes an atom or something, that should cancel all things out. Let's not get into that now. The whole point is we have the same sort of unit going on here, which is some number of atoms produced per second. Same thing as number of atoms decayed per second. So it's the production-destruction equivalent of each other. So, in that way, we can have a reaction rate that we impose, something artificial, by sticking something in the reactor and controlling its power level. And then follow the decay process, which is a natural radioactivity event. And this is one of the simplest governing equations for nuclear activation analysis. Now, I might give this to you on an exam and say, OK, now draw the curves for nuclear activation analysis. And maybe calculate what's the impurity level if you measure this many counts of something. Then you just work backward through the math. But I want to get you guys thinking conceptually right now.
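That unit bookkeeping — [cm²] × [neutrons/cm²/s] × [atoms] → [atoms/s] — can be sanity-checked with numbers. The values below are purely illustrative (a 1-barn cross section, a 10^13 n/cm²/s flux, 10^20 impurity atoms), not data from the lecture:

```python
BARN_CM2 = 1e-24        # 1 barn = 1e-24 cm^2

sigma = 1.0 * BARN_CM2  # microscopic cross section, cm^2
phi = 1e13              # neutron flux, neutrons / cm^2 / s
n0 = 1e20               # number of precursor (impurity) atoms

# cm^2 * (n / cm^2 / s) * atoms  ->  atoms produced per second
rate = sigma * phi * n0
print(f"production rate = {rate:.1e} atoms/s")
```

The centimeters squared cancel exactly as described, leaving a production rate that can sit in the same equation as the decay term lambda N.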
What are the real equations for nuclear activation analysis? Let's just do these in terms of N1, N2, N3. That's dN1 dt, dN2 dt, and dN3 dt. We'll start with the stuff that's up there: minus lambda 1 N1, plus some cross section times the flux times some other atom, N0. What other things are we missing? Are there any other methods of production or destruction of isotope N1 that we need to consider? Well, we've got isotope N1 in a reactor. It can decay, or it can absorb one of the neutrons nearby. So how do we write that term-- that destruction term? Yep. AUDIENCE: Flux times the absorption cross section times n0. MICHAEL SHORT: Yeah. So let's say that's the absorption of atom 1 times n-- what did you say? AUDIENCE: Naught. MICHAEL SHORT: And would it be N0, or would it be N1? If you want to know how quick is N1 being destroyed-- AUDIENCE: OK. MICHAEL SHORT: --By absorbing neutrons. So then let's call this absorption of N1. And is it a plus or a minus? If it's a destruction rate. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: It's a minus. Yep. So what's really going on here is you've got some precursor isotope, whatever impurity you want to measure, N0, producing N1. And you're looking at N1's decay signature, like its activity, to determine how much was there. But you also have to account for the fact that isotope N1 can be burned in the reactor. So this is like producing. This is decaying. And this is, we'll call it, being burned. This isn't burning in the sense of creating energy by burning fuel, but we will refer to this sort of colloquially as burning, because we're then absorbing neutrons in N1 and removing that from the available decay signature. How about N2? How do we modify our equation to account correctly for the production and destruction of N2? And by the way, this is not in the book, so I don't expect you to know it off the top of your head. AUDIENCE: It's the same type of thing, flux. MICHAEL SHORT: Yep. So let's first take every term that we have up there.
We have lambda 1 N1 minus lambda 2 N2. And what else do we have to account for? Yep. AUDIENCE: N2 also being burnt. MICHAEL SHORT: That's right. So N2 is also being burned, so we'll have a minus, a flux times the absorption cross section for N2, times the amount of N2. How about N3? We'll start with what we had there: lambda 2 N2. And, just like before, we've got to account for the burning of N3. So then we'll have minus flux times the absorption cross section of 3 times N3. And these equations hold true only for the time that your material is in the reactor. What happens when you take the material out of the reactor? AUDIENCE: You go right back to zero readings. MICHAEL SHORT: You do. Yep. When you come out of the reactor, all of the fluxes go to zero. And that's the end of that. Yeah. AUDIENCE: Why don't you account for the production of n2 and n3 in the burn rate? MICHAEL SHORT: Ah, so the question was, why don't we account for the production of N2 and N3 by the burning. Right? AUDIENCE: Yeah. MICHAEL SHORT: Did we specify that absorbing a neutron is the way to make N2? AUDIENCE: No. MICHAEL SHORT: Oftentimes it's not. So if you burn N1 by absorbing a neutron, then you will make another isotope that has the same proton number and one more neutron. And it may decay by some other crazy way, or it may be stable. Who knows. But by decay-- this could be by beta, positron, alpha, spontaneous fission-- not gamma, because then you wouldn't have a different isotope. But oftentimes the burning process won't produce the same isotopes as the decay. So the situation we looked at on Friday, when we said let's escalate things-- that was a purely hypothetical situation where isotope N2 could be burned to make N0. I'm not saying it can't happen, but it's not likely. But still, we can model it. We can model anything. That just wasn't a realistic situation. This is.
This is what you guys are going to have to look at to understand how much impurities there are in each of your materials. So this, I would say, is the complete description of nuclear activation analysis in the reactor. At which point you then have to account for what happens when you turn the reactor off. So what actually-- what physically happens when you turn the reactor off? Yep. Oh yeah, you've answered a lot. So Chris, yeah. CHRIS: Well, you try to-- you put your control rods all the way in and try to stop as many neutrons as you can to stop the chain reaction. MICHAEL SHORT: Yep. So normally to shut down the reactor you'd put the control rods in and shut down the reactor. Or the easier thing is just pull the rabbit out. Remember those little polyethylene tubes I showed you? This way we can keep the reactor on and remove your samples without changing anything. So it makes the reactor folks-- angry would be an understatement-- to constantly change the power level of the reactor. Reactors, especially power reactors and research reactors, they're kind of like Mack trucks. If they're moving they want to stay moving, and if they're not moving they don't want to be moving. And it takes an awful lot of effort to change that. It also happens to screw up experiments. If you are irradiating something, like I was a couple months ago for 30 days, you want to have a constant flux so that your calculations are easy. You don't want 15 students to come in and turn the knobs all up and down, and then you have to account for that in your data. Which has happened. So you guys are going to be manipulating the reactor power when the experiments are out, and it's at low power. So you won't be infuriating anyone else on campus like we did last year, when we didn't account for that-- but they still let us in. So it's not too bad. Whatever. [LAUGHING] Yeah.
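The full in-reactor picture — production by sigma phi N0, decay, and burning, with every flux-driven term switching off the moment the sample comes out — can be integrated numerically. This is only a forward-Euler sketch with entirely made-up rate constants, and it assumes the precursor N0 is not noticeably depleted, so the production rate R = sigma phi N0 stays constant while the sample is in the core:

```python
def activation(t_irr, t_end, dt, prod_rate, lam1, lam2, burn1, burn2):
    """Integrate  dN1/dt = R - lam1*N1 - burn1*N1
                  dN2/dt = lam1*N1 - lam2*N2 - burn2*N2
    where R and the burn terms exist only while t < t_irr
    (sample in the reactor); afterwards only decay remains."""
    n1 = n2 = 0.0
    t = 0.0
    while t < t_end:
        in_core = t < t_irr
        r = prod_rate if in_core else 0.0
        b1 = burn1 if in_core else 0.0
        b2 = burn2 if in_core else 0.0
        dn1 = r - (lam1 + b1) * n1
        dn2 = lam1 * n1 - (lam2 + b2) * n2
        n1 += dn1 * dt
        n2 += dn2 * dt
        t += dt
    return n1, n2

# irradiate for an hour, then let it decay for another hour
print(activation(3600, 7200, 0.1, 1e9, 1e-3, 1e-4, 1e-7, 1e-7))
```

During irradiation N1 climbs toward a saturation value of roughly R over (lambda 1 plus the burn rate); after the rabbit comes out, it just decays exponentially, which is why the counting has to start quickly.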
So after you either shut down the reactor or pull the rabbit out of the reactor, then the production and destruction by neutrons is over, but the decay keeps going. Which means if you wait too long-- like for some of those short-lived isotopes, if you wait more than a day or so-- you'll have so little activity left that you won't be able to measure it. So what we're going to be doing is sticking your samples into the reactor for maybe an hour or so, pulling them out, and immediately running them over to the detector, so that we get the most signal per unit time. Because the things are going to be the hottest when they come right out of the reactor, and every second you lose from there, you lose signal. Which means you have to count for longer to get the same amount of information with the same certainty. This is a nice segue to what we'll be talking about Thursday, which is statistics, certainty, and precision. How long do you have to count something to be confident, within some interval, that you've got the correct activity? For background counts-- who here has made an NSC Geiger counter? Hopefully almost all of you. Maybe you guys remember how long you had to count to be 95% sure that your background rate was accurate. It ends up being about 67 minutes, or over an hour, and the reason is because the count rate is very low. So I'll do a little flash forward to Thursday, since we're talking about it. When you count something with a very low count rate, you have to count for longer to be as confident that your number is correct. So let's say you want to be 95% confident, or within plus or minus 2 standard deviations, or 2 sigma. You have to count for longer and longer. For something that's really radioactive, you can be sure, or 95% sure, that the count rate you measured is accurate for a shorter counting time. So everything in this class seems to come up in trade-offs. Right? You trade off stability for a half-life. You trade off decay constant for half-life.
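That flash-forward can be made concrete. For Poisson counting statistics, the standard deviation of N counts is the square root of N, so demanding a 2-sigma (roughly 95% confidence) relative uncertainty of 5% means N must be at least (2 / 0.05)^2 = 1600 counts, and how long that takes depends only on the count rate. The 0.4 counts-per-second background below is a made-up rate, chosen to reproduce the roughly-67-minute figure:

```python
import math

def counts_needed(rel_precision, n_sigma=2.0):
    """Smallest N with n_sigma * sqrt(N) / N <= rel_precision,
    assuming Poisson statistics (sigma = sqrt(N))."""
    return math.ceil((n_sigma / rel_precision) ** 2)

def counting_time_s(rate_cps, rel_precision, n_sigma=2.0):
    """Seconds of counting needed at a given count rate."""
    return counts_needed(rel_precision, n_sigma) / rate_cps

print(counts_needed(0.05))                   # 1600 counts
print(counting_time_s(0.4, 0.05) / 60.0)     # low background: ~67 minutes
print(counting_time_s(1000.0, 0.05))         # hot sample: 1.6 seconds
```

Same target confidence, three-and-a-half orders of magnitude difference in counting time — which is the whole argument for running samples to the detector immediately.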
You trade off binding energy for excess mass. You trade off counting time and precision. You trade off exposure and dose, which you're going to get into later. We'll see if anyone wants to use a cell phone or eat irradiated food afterwards. And I do all the time, so that should tell you the answer. So in the last seven minutes or so, I want to walk you through playing around with what happens when you change the values of lambda 1 and lambda 2. So what do they look like when the half-lives are roughly equal, and when one is much larger than the other one? So let's set them to be about equal. These are just unitless. So let's set them equal to one. I think the system explodes when we set them exactly equal, because of that term right there. So let's say that's 1.001. It's about as close as we can get, and let's confirm that we get the same sort of behavior. So isotope N1 just follows exponential decay. There's nothing that changes that. Isotope 2, its slope tracks the value of isotope 1 for a little while, until you build up enough N2 that it starts to decay. You can find when that point is: when lambda 1 N1 equals lambda 2 N2. There's one little step that we didn't fill in if you want to find the value of N2. So then you can just rearrange this a little bit, and you'll say N2 would have to equal lambda 1 over lambda 2 times N1, which is N10 e to the minus lambda 1 t. So if you want to find that point right there in time, you can solve this. Then let's look at N3. So N3-- when N2 is almost 0, N3's slope is almost 0. It's a little hard to see because-- I'll tell you what-- let's make all the half-lives longer, which kind of expands the graph. Wrong way. Let's make it-- ah, we'll just move that decimal point, 0.1. There we go. That's like expanding the graph. Right? So when N2 is almost 0, the slope of N3 is almost 0. And when N2 reaches a maximum, so does the slope of N3. Just like we predicted using our graphical method right here.
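That lambda 1 N1 = lambda 2 N2 condition can actually be solved for the peak time in closed form: t_max = ln(lambda 2 / lambda 1) / (lambda 2 − lambda 1). A quick numerical check, with arbitrary decay constants (the same formula works whichever lambda is larger):

```python
import math

def n2_peak_time(lam1, lam2):
    """Time where N2 is maximal: set dN2/dt = 0, i.e.
    lam1 * N1(t) = lam2 * N2(t), and solve for t."""
    return math.log(lam2 / lam1) / (lam2 - lam1)

lam1, lam2, n10 = 0.5, 0.1, 1000.0
t = n2_peak_time(lam1, lam2)
n1 = n10 * math.exp(-lam1 * t)
n2 = lam1 * n10 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
print(t, lam1 * n1, lam2 * n2)   # the last two agree at the peak
```

At that instant the production rate of N2 equals its destruction rate, which is exactly the leveling-off point sketched on the board.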
And then over longer times-- let's put the half-lives back to the way they were. Over long times, N3 trends to N10. Don't let that little piece fool you. Again, t less than 0 is not a valid time for this, so we're not accounting for that. And N3 tracks right here to the value of N10, and N1 and N2 turn to 0. So for this case, where you have the half-lives roughly equal to each other, you can expect a pretty big bump in N2. What's going to happen when lambda 1 is extraordinarily big, meaning the half-life of N1 is extraordinarily short? What do you guys think will happen? Not mathematically, but physically. If N1 just kind of goes ba-boom and instantly decays away. AUDIENCE: There would be a lot of n2-- right then. MICHAEL SHORT: Yep. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Your N1 is just going to turn into N2 right away, and N2 is going to take its sweet time decaying to N3. So let's see what that looks like. If lambda 1 is much bigger than lambda 2, let's make the maxima a little different. Change our slider value a bit. So if l1 is big and l2 is small-- well, let me change the actual axes to make this a little easier to see. There we go. You can see that, much, much more quickly than we have it in this graph right here, l1 just decays away right away. L-- I'm sorry-- N1 decays right away. N2 builds up to a much higher relative value, because it's produced faster than it's destroyed for short amounts of time. So you can end up with a great spike in N2, which slowly decays away to N3. How about the opposite effect? What if lambda 1 is really, really small, indicating a very long half-life, and lambda 2 is really, really large, indicating a small half-life? Yeah. AUDIENCE: It would basically go from n1 to n3. As soon as it goes to n2, it's going to decay to n3. MICHAEL SHORT: That's right. In this case you've got N2, as soon as it's created, self-destructs. So let's see what that looks like.
So we can just slide l2 to be big, slide l1 to be small, and you can actually graphically see n2 just shrink towards the x-axis. And it's almost like you only have two equations. It's like you just have n1 and n3, and n2 basically doesn't exist. Where the slope of-- where the slope of n3, except at extremely short times, just tracks the value of n1. And I know in the book they're called secular or transient equilibria. I'm not going to require that you memorize those terms. It's more important to me that I can give you a real physical situation. Say here's these three isotopes, or four isotopes, or six-- doesn't matter, because we can solve these pretty quickly. Tell me what's going to happen based on the relative half-lives, as long as they decay in a nice linear chain. I'm not going to give you something where n1 can beget n1 or n4 or n6. Because at that point you can construct the equations, but I don't expect you to be able to graphically solve them. And I may also throw you curveballs like nuclear activation analysis, to see what happens when you turn on or turn off a reactor. I've got an example of that too. We're right at this point here, I guess, t equals 50. Yeah. I've set it up such that you turn off the reactor and n3 is stable right there, but n1 and n2 continue to decay. So it's not hard to code these sorts of things up. I'll share the links with these equations so you guys can play with them yourselves. Add to them yourselves, and try just getting an intuitive feel for how series radioactive decay happens. So I want to know, now that we spent a couple of days on it, would you guys be comfortable setting up sets of differential equations like this? Say yes, no, maybe? I see a lot of up and down shaking heads. That's a promising sign. If not, I'm willing to spend a little more time on it on Thursday if folks would like a bit of review. And if you're afraid to tell me, just send me an email, anonymous or not. Yeah?
AUDIENCE: Do you think maybe Thursday we could do a like a real example? MICHAEL SHORT: Yeah. AUDIENCE: Of a like a series. MICHAEL SHORT: I think so. Yeah, a real example with numbers and everything. AUDIENCE: Yeah. MICHAEL SHORT: Sure. OK. Well, we can make one of those up for Thursday. Cool. And what about the graphical solution method? I don't know whether they teach that in the GIRs, but what I do want you to be able to do is look at the limiting cases. In other words, fill in the four corners of the graph. At t equals 0, what are things actually doing? n1 is just decaying at its half-life. There is no n2 yet. So these slopes are equal and opposite. And there's no n3 yet, so there is no slope of n3. So I would like you guys to try reproducing this. And I will-- again, I'll provide pictures of these blackboards so you guys can see, but it would be very helpful for you guys to try to reproduce these graphs as we saw them. Then you can check them here on the graphical calculator, and then play around with the amount of n0, or-- I think I just broke it. Let's just call that one. There we go. And play around with sliders or values of n1, n2, or n3. That's an interesting solution. So since it's about four or five of, I want to open it up to any questions you guys may have. Yeah. AUDIENCE: I have a question from the [INAUDIBLE] First time do this. Did you have-- do you know what integrated video because like there are endless possibilities if you just [INAUDIBLE] add up to the right mass number? MICHAEL SHORT: Oh yeah. The question is, for a spontaneous fission, what fission products do you choose? I'll say they're all good. As long as you pick something with roughly equalish masses, so you don't pick, like, it fissions into hydrogen and something quite smaller, which would be better known as just proton emission, you're going to get roughly the same answer. Yep. AUDIENCE: Would that be something we just, like, set up? Like the top number is 80.
We just look at whatever-- 40, 42, 38-- pick a number. MICHAEL SHORT: Roll a d80. You'll get basically the same result. Roll an 80-sided die. Hopefully at MIT you could find one, or write a program to make a random number between about 10 and 80, or 10 and 70. Let's go to the actual problem set to see what you guys mean. I want to make sure I'm answering the correct question. Problem statement. Yep. Yep. So allowable-- this is, for this one, allowable nuclear reactions. Yep. So for a spontaneous fission, just pick one you think would be likely. You can also look up what sort of isotopes are created when elements fission. It's not straight down the middle, so, like, uranium won't often split into two equally sized fission products. They'll have roughly different masses, but which ones you pick? You're still going to get the same general solution. Yep. AUDIENCE: I'm so confused on how to find one. Like a situation where it is unlikely possible, because is it spontaneous fission, or is it generally possible for heavier elements like transuranic elements? MICHAEL SHORT: So the question is, when is spontaneous fission possible? Is it only for heavy elements? There is a difference between energetically possible and observed. That's part of the trick to this problem. If you do out the Q equation to find out for the fission products that you picked, you may be surprised at the result. However, you're right. You don't tend to see spontaneous fission happen until you get to really heavy things like uranium. So there's more to will something spontaneously fission than does the Q value allow it to happen. So I don't want to give away any more, but I will say if you're surprised at your result, you might be right. Yep. AUDIENCE: On this question for electron capture. In the equation you gave us, you're calculating Q in an electron capture. It's the mass of the parent minus the daughter minus, I think, the binding energy of the electrons, is what you wanted.
When I try to find out what the binding energy of an electron is, it says it depends on the shell that it's in. MICHAEL SHORT: Yep. AUDIENCE: So how do we know which electron the nucleus is after? Do we assume it's from the innermost shell? MICHAEL SHORT: Yep. So the question was, if you're doing electron capture, where you have some parent nucleus and you've got a lot of electron shells, the binding energy of every electron is different. Which one goes in? One, you find the data on the NIST tables. Two, chances are it will be the closest one. So roughly 80% of the time these things happen from the K shell, with decreasing probabilities from the outer shells. So you can pick either the K or the L shell, like both things may happen. But I would say for simplicity's sake, assume it's an innermost shell electron. And you can look up the binding energy on the NIST tables on the learning module site. So any other questions? Yeah, Luke. LUKE: On graphing the spectrum, the satellite intensity vs. the energy for 4 2. MICHAEL SHORT: For 4 2. Ah yes. So graphing the-- this would be like if you had an electron detector. Is that what I asked for? 4 2, write the full nuclear reactions and draw the energy spectrum you expect from each released form of radiation, including secondary ejections of particles or photons. So by a spectrum, I mean, yep, energy versus intensity. LUKE: OK. MICHAEL SHORT: So there you'll have to account for the spectrum, like the various range of the betas that can be released, any ejected electrons, any Auger electrons, any photons from X-ray emission from electrons falling down in energy levels. Yeah, Alex, you had a question? ALEX: Yeah. What are the Auger electrons? MICHAEL SHORT: The Auger electron is that funny case where, in our mental model, a gamma ray hits an inner shell electron-- and it's usually an inner shell electron-- shooting it out. Then another electron will fall down to fill that hole, emitting an X-ray.
And the Auger process can be thought of as: that second X-ray hits an electron on the way out and fires out the electron. So this here would be the Auger electron. They tend to be particularly low energy. Yeah, Luke. LUKE: If you have that cascade of electrons during an electron capture, are they still Auger [INAUDIBLE] radiation? MICHAEL SHORT: Yep. As long as you have a higher-level shell coming down to a lower shell and the ejection of an outer shell electron, that's an Auger electron emission process. Regardless of whether it started with gamma or started with electron capture. Yep. AUDIENCE: How do we know if that happens? MICHAEL SHORT: You can actually sense or detect the energy of those Auger electrons with a very sensitive Auger electron detector. These are in the sort of hundreds or thousands of eV range. AUDIENCE: But for, like, the context of this question, how do we know-- where would we go to find them? MICHAEL SHORT: Oh, to get the Auger electron data? AUDIENCE: Yeah. MICHAEL SHORT: For that, you can actually look up the binding energies of an outer shell electron, and you can do that energy balance, where it would be E2 minus E1-- whatever the energy of that X-ray is-- minus the binding energy of the emitted electron. And because there's infinite possibilities-- I mean, you could eject any electron-- just pick one. And say here's an Auger electron, or draw a couple of lines in places. I don't want you to get every single line. If we asked you to do this for uranium, there's, like, you know, 92 electrons and a lot of different transitions. That's not what we're going for. I want to make sure you know the physics. Not that you can draw 92 lines accurately with a fine-toothed pencil. Did you have a question on two? AUDIENCE: Yeah.
So for 2 1, if we write two possible nuclear reactions for 239 on [INAUDIBLE] the right was only off the case of any decision and that it's a state which decay processes and competing processes may be possible for each general type of reaction. What exactly does that-- I find the answer for number two is alpha. MICHAEL SHORT: Maybe that's the answer. AUDIENCE: Oh, OK. MICHAEL SHORT: And does anything compete with alpha decay? Does anything compete with spontaneous fission? OK. Cool. [INTERPOSING VOICES] AUDIENCE: Is that an indication of [INAUDIBLE] MICHAEL SHORT: Beta decay. Well, is there-- well, for that you can look up the table of nuclides, which I've got up here. So let's take a look at plutonium 239, and it proceeds by alpha or spontaneous fission. So every year I switch up the isotope and make sure that there's at least a couple of decay modes, and therefore the answers are going to change every year. But the general question doesn't. So this year I happened to pick an interesting one. Yeah, it's kind of a mind game, right? What are you missing? AUDIENCE: Nothing. MICHAEL SHORT: Nothing. Yeah. [LAUGHING] Yeah, go with your physical intuition. Any other questions? Maybe time for one more. AUDIENCE: For 3 2, I could only find one nuclear reaction. The action played after the nuclear reaction. MICHAEL SHORT: Ah yeah. AUDIENCE: That's very curious you're really [INAUDIBLE] decay. MICHAEL SHORT: Yeah. AUDIENCE: And, so I was wondering where the molybdenum [INAUDIBLE] MICHAEL SHORT: Yeah, so the question is, I specifically wrote which nuclear reactions could make 99 molybdenum, despite there being only one natural one. So what could you induce artificially? And if you can do that profitably, I'll guarantee you there's a startup in it for you. So what are all the different particles that something could absorb to create molybdenum 99, and which of those are allowable nuclear reactions?
And if they're not allowable, how much energy do you have to put in an accelerator to make that reaction happen? And is the price of electricity in the accelerator worth the moly-99 that you create? There are actually quite a few startups working on this problem right now. So the answer to this question is, be creative. Think about all the different particles you know of, and how they could create moly-99. And figure out, are any of those processes allowed? And if they're not allowed, how energetic do you have to make the incoming particles to allow them? Ah, good question. There's some creativity hiding in these problems. So it's 10:02. I want to cut it off here, and we'll start off Thursday with a numerical example of this stuff. Nuclear activation analysis.
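The energy bookkeeping behind these problem-set questions can be collected in a few lines. This is a sketch: the 235U fission-product pair below is just one allowed choice, per the discussion above (masses rounded from the standard atomic mass evaluation; look up precise values on the NIST/AME tables), and the threshold formula is the usual nonrelativistic lab-frame result.

```python
U_TO_MEV = 931.494  # 1 atomic mass unit in MeV/c^2

def q_value(m_in_u, m_out_u):
    """Q = (total mass in - total mass out) * c^2, atomic masses in u."""
    return (sum(m_in_u) - sum(m_out_u)) * U_TO_MEV

def threshold_energy(q_mev, m_proj, m_target):
    """Minimum lab-frame projectile kinetic energy for an endothermic
    (Q < 0) reaction; zero if the reaction is already exothermic."""
    if q_mev >= 0.0:
        return 0.0
    return -q_mev * (m_proj + m_target) / m_target

# One allowed spontaneous-fission split: 235U -> 141Ba + 92Kr + 2n
q_fission = q_value([235.043930],
                    [140.914411, 91.926156, 1.008665, 1.008665])
```

q_fission comes out positive, well over 150 MeV, so the split is energetically allowed even though 235U is almost never observed to fission spontaneously: the Q value alone doesn't decide what actually happens, which is the trick in the problem.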
MIT_2201_Introduction_to_Nuclear_Engineering_and_Ionizing_Radiation_Fall_2016
34_Radiation_Hormesis.txt

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: All right, guys. So today I'm not going to be doing most of the talking. You actually are, because, like I've said, we've been teaching you all sorts of crazy physics and radiation biology. We've taught you how to smell bullshit, taught you a little bit about how to read papers and what to look for. And we're going to spend the second half of today's class actually doing that. Well, we're going to have a mini debate on whether or not hormesis is real. And you guys are going to spend some time finding evidence for or against it, instead of just me telling you this is what hormesis is or isn't. So just to finish up the multicellular effects from last time, we started talking about what's called the bystander effect, which says, if a cell is irradiated, and it dies or something happens to it, the other cells nearby notice. And they speed up their metabolism, their oxidative metabolism, which can generate some of the same chemical byproducts as radiolysis does, causing additional cell damage and mutation. And there was an interesting-- yeah, I think I left-- we left off here at this study, where they actually talked about how most of the types of mutations found in the bystander cells were of different types. But there were mutations found, in this case, as a result of what's called oxidative-based damage. This is oxidative cell metabolism ramping up and producing more of those metabolic byproducts that can damage DNA as well. What we didn't get into is the statistics.
What do the statistics look like for large sample sizes of people who have been exposed to small amounts of radiation? I'm going to show you a couple of them. One of them is the folks within 3 kilometers of the Hiroshima blast. So I want you to notice a couple of things. Here is the dose in gray, maxing out at about two gray. And in this case, this ERR is what's called Excess Relative Risk. It's a little different than odds ratio, where here an excess relative risk of 0 means it's like nothing happened. So anything above 0 means extra excess relative risk. So what are some of the features you notice about this data? What's rather striking about it in your opinion? Yeah? Charlie? AUDIENCE: [INAUDIBLE] so in the [INAUDIBLE] timeline from [INAUDIBLE] timeline here. MICHAEL SHORT: This one? AUDIENCE: Yeah. MICHAEL SHORT: Oh, yeah, these are the errors. Yep. What does it say here? Is it-- more than one standard error. Yeah. AUDIENCE: There's a lot of variability? MICHAEL SHORT: Yeah, I mean, look at the confidence in this data at high doses. And then while you may say, OK, the amount of relative risk per amount of radiation increases with decreasing dose, which is the opposite of what you might think, our confidence in that number goes out the window. Now what do you think of the total number of people that led to each of these data points? How many folks do you think were exposed to gray versus milligray of radiation? AUDIENCE: A lot less for gray than [INAUDIBLE]. MICHAEL SHORT: That's right, the sample size. I thought it was cold and loud in here. The sample size for the folks in gray is much smaller. And yet the error bars are much smaller too. That's not usually the way it goes, is it? Usually, you think larger sample size, smaller error bars, unless the effects themselves and confounding variables are hard to tease out from each other. If you then look at another set of people, all of the survivors-- oh, yeah, Charlie?
AUDIENCE: How did they determine the-- the doses [INAUDIBLE]? MICHAEL SHORT: This would have to be from some estimate. This would be from models. It's not like folks had dosimeters everywhere in Japan in the 1940s. But this-- these would be estimates depending on where you lived, let's say in an urban, suburban, or rural area; let's see, things like milk intake right after the bomb, or anything that would have given you an unusually high amount of radiation; distance; where the winds were going. This is the best you could do with that data. And now look at all of the bomb survivors, including the ones outside the 3-kilometer region, but who still got some dose. What's changed? AUDIENCE: It seems like they're less likely to get more risk for less dose. MICHAEL SHORT: Yeah, the conclusion is almost flipped for the low dose cases. If you put them side by side, depending on the folks living within 3 kilometers of the epicenter of Hiroshima versus anyone exposed, all the bomb survivors, you get an almost opposite conclusion for low doses, despite the numbers being almost, you know, within each other's confidence intervals for high doses. So what this tells us is that the effects of high dose are relatively easy to understand and quite obvious even with low sample sizes. What is different between these two data sets? Well, it's the only difference that's actually listed here. Distance from the epicenter, right? So before I tell you what's different, I want you guys to try to think about what could be different about the folks living within 3 kilometers of the epicenter of Hiroshima versus anyone else in the city or the countryside? Yeah? AUDIENCE: Would it be like [INAUDIBLE]? It seems like, the closer, like, it would be a lot more instances where you get a higher dose. So they're underestimating [INAUDIBLE]. MICHAEL SHORT: Could be, yeah. It might be harder to figure out exactly how much dose folks had without necessarily measuring it, right?
But what other major factors or confounding variables are confusing the data here? Yeah? AUDIENCE: Wouldn't a lot of people who lived closer, like, not just the radiation, like, the actual shockwave and heat from the bomb [INAUDIBLE]? MICHAEL SHORT: So in this case, these are for bomb survivors. So, yes, that's true. If you're closer, you get the gamma blast. You get the pressure wave. AUDIENCE: But, like, even if you survive that, it still, like, would affect them in addition to radiation. Is it counting for people who got injured from that too? MICHAEL SHORT: It should just account for all survivors, yeah. AUDIENCE: So if they were injured, that could change how they reacted to the radiation exposure. MICHAEL SHORT: Sure. Absolutely. And then the other big one is, actually, someone's kind of mentioned it, but in passing: urban or rural. The environment that you live in-- it depends on how quickly, let's say, the ecosystem replenishes or not if you live in a city, or what sort of other toxins or concentrated sources of radiation you may be exposed to by living in a city that's endured a nuclear attack or something else. It could also depend on the amount of health care that you're able to receive. If you show some symptoms of something, if you live way out in the countryside, and there weren't a lot of roads, then maybe you can't get to the best hospital, or you go to a clinic that we don't know as much about. The point is, there's a lot of confounding variables. There's a lot more people. But anything from, like, lifestyle, to diet, to relative exposure-- think about the differences in how folks in the city and out in the countryside may have been exposed to the same dose, because, again, dose is given in gray, not in sieverts. That's the best we can estimate. But would it matter if you were exposed to, let's say, alpha-particle-containing fallout that you would then ingest, versus exposed to a lot of gamma rays or delayed betas? It absolutely would.
So the type of radiation, the route of exposure, and the organs that were affected are not accounted for in the study because, again, the data is in gray. It's just an estimated joules per kilogram of radiation exposure, not taking into account the quality factors for tissue, the quality factors for type of radiation, the relative exposure, the dose rate, which we've already talked about. How much you got as a function of time actually does matter. So all these things are quite important. And for all these sorts of studies, you have to consider the statistics. So let's now look at a-- I won't say, OK, a cellphone-like study, where one might draw a conclusion if the error bars weren't drawn. So based on this, can you say that very low doses of radiation in this area actually give you some increased risk of, what do they say, female breast cancer? No. You can't be bold enough to draw a conclusion from the very low dose region-- from, let's say, the-- the 1s to 10s of milligray, that whole region right there that people are afraid of getting. We don't actually know if it hurts, or does nothing, or if it helps. That's a kind of weird thing to think about. So the question is, what do we do next? These are the actual recommendations from the ICRP. And I've highlighted the parts that are important, in my opinion, for everyone to read. And the most important one: probably, we'll have to come to terms with some uncertainty in the amount of damage that little amounts of dose do. So this is the ICRP saying to the general public, you guys should chill out. There's not much we can do about tiny amounts of exposure. They happen all the time.
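As an aside, the two risk measures used on the plots in this lecture, excess relative risk and odds ratio, are simple to compute. This sketch uses the standard epidemiological definitions, not anything specific to the studies shown:

```python
def excess_relative_risk(rate_exposed, rate_baseline):
    """ERR = (rate in exposed group / baseline rate) - 1.
    ERR = 0 means the exposed group looks exactly like the baseline."""
    return rate_exposed / rate_baseline - 1.0

def odds_ratio(exp_cases, exp_controls, unexp_cases, unexp_controls):
    """Odds ratio from a 2x2 table; OR = 1 means no association."""
    return (exp_cases * unexp_controls) / (exp_controls * unexp_cases)
```

The two measures are close for rare outcomes but are not interchangeable, which is why the Hiroshima plots label their axis ERR explicitly.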
You can either worry about it, and get your heart rate up, and elevate your own blood pressure, and have a higher chance of dying on your own, or you can just chill out, because there is not enough evidence to say whether a tiny little amount of radiation-- and we're talking in the milligray or below-- helps, or hurts, or does nothing. Which leads me into the last set of slides for this entire course-- they're not that long, because I want you guys to actually do a lot of the work here-- radiation hormesis, real or not? There are plenty of studies pointing one way or the other. And I want to show you a few of them with some other examples. The whole idea here is that a little bit of a bad thing can be a good thing, much like vitamins, or, let's say, vitamin A in seal livers: a little bit of it you need. It's a vital micronutrient. A whole lot of it can do a whole lot of damage. You don't usually think of that being the case for radiation. But some studies may have you believe otherwise, with surprisingly high sample sizes. So the idea here is that if you've got anything, not just an element in your diet, but anything that happens to you, there's going to be some optimum level, where you could die or have some ill effects if exposed to too much or too little. We all know that this happens with high amounts of radiation. The question is, is that actually what happens? So let's look at some of the data. In this case, I mentioned selenium, and actually have a fair bit of data that shows some, let's say, contradictory results. In this case, a whole lot of different people were exposed to a certain amount of selenium accidentally. I don't think these were any intentional studies. But some folks received massive doses of selenium, and folks tried to figure out, well, what-- how-- oh, yeah, if you want to see how much they got. Remember that you want about 5 micrograms per day on average.
That's a pretty crazy amount of selenium that ended up killing this person in four hours. But let's look at a sort of medium dose, something way higher than you would normally get. Two different studies published in peer-reviewed places-- this one says, "taking mega doses of selenium," so enormous doses, "may have acute toxic effects and showed no decreased incidence of prostate cancer and increased prostate cancer rates. 35,000 people. The same supplements greatly reduced secondary prostate cancer evolution in another study." Kind of hard to wrap your head around that, right? Both these studies were done with, I'd say, enough people, and came to absolutely opposite conclusions, showing that there's definitely other confounding variables at work here. So there's kind of two solutions to this problem: increase your sample size to try to get the most representative set of the population, or control for other confounding variables. And then the question is, how do you model how much is a bad thing? Let's go over what these models mean. The one that's described right now in the public is called the linear no-threshold model. This means that if this axis right here is badness, and this axis right here is amount, any amount of radiation is bad for you. What I think might be a little bit more accurate is called the linear threshold model. If you remember from two classes ago, the ICRP recommends that, I think, 0.01 microsieverts is considered nothing officially. That would mean there is a threshold below which we absolutely don't care. And if there are any ill effects, they're statistically inseparable from anything else that would happen. And that would suggest here this linear threshold model, where this control line right here would be the incidence of whatever bad happens in the control population not exposed to the radiation, the selenium, the whatever.
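The two models described so far are one-liners. A minimal sketch, where dose is in whatever unit the study uses and the slope is illustrative, not fitted to any data:

```python
def lnt(dose, slope=1.0):
    """Linear no-threshold: every increment of dose adds the same excess risk."""
    return slope * dose

def linear_threshold(dose, threshold, slope=1.0):
    """Linear threshold: no excess risk below the threshold, linear above it."""
    return max(0.0, slope * (dose - threshold))
```

Both return excess risk relative to the control line, so 0 means the exposed group looks like the unexposed population.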
There's also a couple of other ones, like the hormesis model, which says that if you get no radiation, you get the same amount of ill effects as the control group. If you get a little radiation, you actually get less ill effects. In this case, this would be like saying getting a little bit of radiation to the lungs could decrease your incidence of lung cancer. Does anyone believe that idea? Getting a little bit of dose to your lungs could decrease lung cancer? OK. And then you reach some crossover point where, yeah, a lot of this thing becomes bad. And the question is, is radiation hormetic? Does this region where things get better actually lead all the way to x equals 0 as a function of dose? And I want to skip ahead a little bit to some of the studies. No, I don't want to skip ahead. There are some non-hormetic models that have been proposed in the literature. It's easy to wrap your head around a linear model, right? It's just a line. More is worse. But the question is, how much? So folks have proposed things like linear quadratic, where a little bit of dose is bad, and then a lot more dose is more bad as a function of dose. That's actually kind of what we saw in the Hiroshima data. And I'll show you again in a sec. So the history of this LNT, or Linear No-Threshold model, states the following four things: radiation exposure is harmful. Well, does anyone disagree with that statement? I think we all know that even large-- you know, at least large amounts of radiation exposure are bad. It's harmful at all exposure levels. That's the one you have to wonder about. Each increment of exposure adds to the overall risk, saying that it's an always increasing function. And the rate of accumulation of exposure has no bearing on risk. The first one's easy. We know this is true, because you expose people to a lot of radiation, bad things tend to happen, deterministically. The second one, we already know is false.
If you look at large sample sets of data, like the data we showed before, there's definitely a non-linear sort of relationship going on. The idea that each incremental amount of exposure has the same amount of incremental risk-- we know from a lot of studies that's not typically true. Then the question is, what about these two? So now we're going to find and show you some fairly interesting studies. In this case, leukemia as a function of radiation dose. What do you guys think about this data set, before I seed any ideas into your heads? So here is dose in sieverts, not gray. And here is odds ratio, relative risk of contracting leukemia. If you were to look at the data points alone, what would you say? AUDIENCE: A little bit of dose is good for you. MICHAEL SHORT: Yeah, you might think that. But look at all the different types of models you can draw through the error bars. You could draw anything going, let's say, down and then up. You could draw a linear no-threshold model, as long as it got through this line right here, or a linear quadratic model. So a study like this doesn't quite give you any sort of measurable conclusion. A study like this might, especially considering the number of people involved. In this case, this is the activity of radon in air as related to the incidence of lung cancer per 10,000 people. Notice the sample size here: 200,000 people from 1,600 counties that comprise 90% of the population. Chances are you've then spanned the urban-rural divide. You've spanned every region of the country. So by including such a gigantic sample size, you do mostly eliminate the confounding variables. So location, you know, house construction, urban versus rural, age, anything else are pretty much smeared out in the enormous sample size. And what do you see here? AUDIENCE: Looks pretty good for low dose. MICHAEL SHORT: Yeah, you see a fairly statistically-significant hormesis effect, where, you know, the route of exposure is very well-known.
Everything else seems to be controlled for by-- I mean, we've included something like almost 0.1% of the US population. That's not bad. Other ones for people that get a more specific, targeted dose: in this case, women who received multiple x-rays to monitor lung collapse during tuberculosis treatment, a group of people that can be tightly controlled and followed very well. These are numbers with one standard deviation. And that, right there, so you can see, is centigray. So this dose right here is one gray worth of dose. That's a pretty toasty amount of radiation. But below that, again, statistically significant-looking data. I don't know how many people were in the study, because I didn't extract that information. But it's something you might be doing in the next half an hour. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Oh, it does. It says deaths per 10,000 women. But how many people were in the study? The question is, what is your sample size? So, like, in the last study, it was just 200,000 people in the samples. That gives you some pretty good confidence that you've eliminated confounding results. So I don't know how many folks get tuberculosis these days in the US, or whether this was even a US study. Chances are the sample size is smaller. So then, even if the data support your idea of hormesis, you have to call into question, is this a large enough, and a representative enough, sample size to draw any real conclusion? So then let's keep going. More data needed. Evidence for a threshold model. This is probably the most boring-looking graph that actually gives you some idea of, should there be a threshold for how much radiation is a bad thing? In this case, it's very careful data. It's a very carefully-controlled data set: lung cancer death from radon in miners. And folks that are going down underground probably have a higher incidence of lung cancer overall from all the horrible stuff they're exposed to, whether it's coal or, you know, if you're mining gypsum.
Oh, there's lots of nasty stuff down there. But there is an additional amount of deaths attributable to radon. Here's your relative risk level of 1, and up to 10 picocuries per liter, which was around the maximum of the last study. It's as boring as it gets, which helps refute the idea of a linear no-threshold model, because if there was a linear no-threshold model, this dose versus risk would be reliably and significantly going up. So there's data out there to support this. And even-- even better ones: lung cancer deaths from radon in homes. The study was careful to look at location. If you look at the legend here, these are different cities ranging from Shenyang in China, to Winnipeg in Canada, to New Jersey, which is apparently a city, to places in Finland, Sweden, and Stockholm, which are somehow different places. Yeah. So when you see a study like this, where they actually control and check to make sure they're not getting any single locality as an unrepresentative measurement, and the data just look like a crowd-- a cloud-- along relative risk equals 1, this either refutes the idea that there is no threshold, or supports the idea that there's got to be some threshold lying beyond 10 picocuries per liter. So, again, to me, it supports the ICRP's recommendation of: chill out. You're going to have a little bit of radon in your basement. But pretty big studies, and quite a lot of them, show that a little bit isn't going to add any risk to you. So if you're worried about risk, there statistically is none, based on quite a few of these studies. And in order to enable you to find these studies on your own, I wanted to go through five minutes of where to look. And the answer is not Google, because Google is not very good at finding every study. It also picks up a whole lot of garbage that's not peer reviewed, because it just crawls the internet, you know? That's what it does really well.
Instead, I want us to take the next half hour, split into teams for and against hormesis, and try and find studies that confirm or refute the idea that radiation hormesis is an actual effect. So how many of you have some sort of computer device with you here? Good. Enough so that there is equal amount in each group. I'd like to switch now to my own browser. And I want to show you guys the Web of Science. Web of-- yeah, [INAUDIBLE] I use Pine on my phone. It's much better science. So if you just Google search Web of Science, and you're at MIT, it will recognize your certificates and send you into the actual best scientific paper indexing thing out there. AUDIENCE: Better than Google Scholar? MICHAEL SHORT: Oh, my god, it's better than Google Scholar. Yeah. If you think you've found everything by looking at Google Scholar, you're only fooling yourself. You're not fooling anybody else. It's getting better. But it doesn't find everything. And Google Scholar is really good at finding things that aren't peer reviewed, self-published stuff, things on arXiv, things that you can't trust because they haven't passed muster with the scientific community. So instead, let's say you would just do a simple search for radiation hormesis. You can all do this. Don't worry. I'm not showing you how to search. I'm showing you some of the other features of Web of Science. And you end up with 534 papers. You can, let's say, sort by number of times cited, which may or may not be a factor in how trustworthy the data is. It might just correlate with the age of the paper. It might also be controversial. So if people cite it as an example of what to do wrong, it might be highly cited. You know, people have made tenure cases and like careers on papers that ended up being wrong. And all you see is 10,000 citations saying this person is an idiot. If the committee val-- you know, judging you for a promotion doesn't read that far into it, they're like, oh, my god, 10,000 citations, right? Boom!
Tenure, that's all you have to do. I think I have it a little tougher. The important part is while with a title like that, oh, man, the more-- the real fun part though is you can see who has cited this paper. So if you want to then go see, why has this paper been cited 260 times, you can instantly see all the titles, and years, and number of additional citations of the papers that have cited it. So this is how you get started with a real research, research. Yeah, that's what I meant to say, is starting from a paper and a tool like Web of Science, you can go forward and backward in citation time, backward in time to see what evidence this paper used to make their claims, forward in time to see what other people thought about it. So who wants to be for hormesis? All right, everyone, all you guys on one side of the room, all you guys, other guys on the other side of the room. And I'd like you guys to try to find the most convincing studies that you can to prove the other side wrong. I suggest using Web of Science, not Google Scholar. It's pretty easy to figure out how to use. And let's see what conclusion we come to. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yep, hormesis by the wall-- yeah, anti-hormesis by the window. There we go. And I'm going to hide this because I don't want to give anyone an unfair advantage. AUDIENCE: So [INAUDIBLE]. SARAH: So this is a graph showing the immune response in the cells of mice showing that after they were given doses from 0 to 2 gray, or 0 to 7 on the right, the response of the immune system. So at the lower doses below like 0.5 gray, which is in the range that we're looking at, well, the immune system in the mice had a stronger response at low doses of radiation and then very quickly tapered off, supporting the claim the low doses are good for mice. [LAUGHTER] MICHAEL SHORT: [INAUDIBLE] SARAH: I have another graph too. MICHAEL SHORT: So this percentage change in response, I'm assuming 100 here is no dose. OK. SARAH: Yes.
So at higher doses, the response of the immune system was suppressed, which follows with what all the other studies show about giving doses in excess of like 1 gray to cells. MICHAEL SHORT: Cool. So anti-hormesis group. SARAH: Oh, I have another graph, but-- MICHAEL SHORT: Oh, you do? SARAH: Yeah. MICHAEL SHORT: Oh, I wasn't going to call them out. I was going to have them criticize what's up here. SARAH: Oh, no. I have another graph. MICHAEL SHORT: [INAUDIBLE] next. SARAH: I have two of the same ones. No, I have another one somewhere. I'll find it in a sec. This one. All right, so this one is incidences of lung cancer based on mean radon level and corrected for smoking. So you can't say that it was just from people smoking. So for radon levels up to 7 picocuries per liter, the incidence of fatal lung cancer actually decreased as you had more radon. MICHAEL SHORT: Oh. AUDIENCE: [INAUDIBLE] SARAH: Yes. MICHAEL SHORT: Anything else you guys want to show before we let the anti-hormesis folks poke at it? SARAH: That's what I got. MICHAEL SHORT: OK. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: What are your thoughts? AUDIENCE: OK, could you go back to the last one. SARAH: I will try, yes. AUDIENCE: Do you have any other [INAUDIBLE].. AUDIENCE: [INAUDIBLE] response. AUDIENCE: So-- so a mouse is twice-- almost twice as effective at fending off disease? OK, I-- I am not a mouse biologist, but the smell test makes me think that-- that perplexed me. And I guess you didn't do studies [INAUDIBLE].. SARAH: I am not personally offended by this. So you're good. AUDIENCE: Enormous-- enormous change. And if radiation hormesis has such a strong effect on these mice, then why isn't it something everywhere as a thing now. Like, if radiation-- if hormesis is responsible for 80% [? movement ?] in mice, [INAUDIBLE] like where-- SARAH: I don't know that it was improvement. I think it was just in the amount of response they saw.
I don't know if that means it's-- well, that doesn't always mean it was effective at doing something. Right. MICHAEL SHORT: [INAUDIBLE] you guys have comments too? AUDIENCE: Additionally, that's like an extremely small a dose for such a massive response in like a field that is so based on probability. Like, how can something like the dose range that small have that much of an impact on mice? SARAH: Well, from 0 to half a gray is pretty significant. AUDIENCE: But [INAUDIBLE] SARAH: [INAUDIBLE] AUDIENCE: --before you get to the 0.6 gray. AUDIENCE: You're also only looking at the cells from [INAUDIBLE] it seems like. And it like looked varied depending on the kind of tissue. So you can't do it for overall. MICHAEL SHORT: OK, I want to hear from the pro-hormesis team. What makes your-- what makes your legs a little shaky trying to stand and hold this up? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Aha. SARAH: Didn't read the study. [LAUGHTER] MICHAEL SHORT: I like this-- I like this idea that, yeah, you're only looking at one type of cell, which may or may not respond differently to different types of radiation. There are no error bars. SARAH: No, not even a whole mouse either. AUDIENCE: [INAUDIBLE] in the mouse. MICHAEL SHORT: Oh, oh to trigger an immune response. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: It's like-- there are-- there's other cells nearby. But they're like, oh, you're not my cell. I'm going to [INAUDIBLE]. AUDIENCE: [INAUDIBLE] mice. MICHAEL SHORT: Yeah. So that's-- that's a valid point. But, yeah, did it say in the study how many? SARAH: Again, did not read the study. [LAUGHTER] Read the conclusion. MICHAEL SHORT: The data alone, just taken at face value, make it look like hormesis is a definite thing. Yeah, Kristin? AUDIENCE: I'm saying if there is [INAUDIBLE].. MICHAEL SHORT: Yeah. SARAH: True. Nine mice cell samples. MICHAEL SHORT: Let's go to the other study. SARAH: All right, the-- the lung one?
MICHAEL SHORT: Yeah, it seems to be more controlled and more legit. SARAH: Yeah. This one has error bars. MICHAEL SHORT: Yeah, one, it has error bars; two, it's corrected for smoking. So let's see what the caption says. Lung cancer fatality rates compared with mean radon levels in the US. SARAH: And for multiple counties because it talks about counties plural. So-- MICHAEL SHORT: So multiple counties helped control for single localities, or-- AUDIENCE: So the 0 level there is theoretical. So the data that you have down here, like, we don't know what actually happens [INAUDIBLE].. SARAH: Past what? AUDIENCE: Like-- like below 1, the mean radon levels because everyone is exposed to radon. SARAH: Well, it says average residential level of 1.7. So I think that means maybe some people have less, maybe some people have more. I don't know what the minimum radon level is. MICHAEL SHORT: It's not going to be 0. SARAH: It's not 0. MICHAEL SHORT: Yeah, no one gets 0 unless you live in a vacuum chamber. SARAH: I don't know what kind of scale that's on. AUDIENCE: Me too. MICHAEL SHORT: Yeah. Cool, yeah. So this-- this is fairly convincing. The point here was to show, here is the theory of linear no threshold, and here's what the actual data with error bars shows. It does a pretty good job in saying, the theory is not right, in this case. Can you say that in all cases? It's hard to tell. The first study you found was on the cellular level. Maybe the multicellular level-- multicellular level, certainly not the organism level, like we said, how many mice. This is just parts of mice. Just-- SARAH: It could be the same mouse. MICHAEL SHORT: Some cells-- yeah. This one is definitely at the organism level. It's for-- for gross amounts of exposure, how many of them resulted in increased incidence of lung cancer? The answer is pretty much none. They all showed a statistically-significant decrease, which is pretty interesting. So thanks a lot, Sarah. And the whole team.
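The sample-size and error-bar point running through this discussion can be sketched numerically. This is an illustrative normal-approximation confidence interval; the death counts and cohort sizes are made up for the example, not taken from any of the studies on screen:

```python
import math

def rate_ci(deaths, n, z=1.96):
    """Approximate 95% confidence interval for a death rate,
    expressed per 10,000 people, using the normal approximation
    to the binomial. Illustrative only."""
    p = deaths / n
    se = math.sqrt(p * (1 - p) / n)
    return ((p - z * se) * 10_000, (p + z * se) * 10_000)

# Same underlying rate (50 deaths per 10,000) at two sample sizes:
big = rate_ci(deaths=1000, n=200_000)  # large cohort study
small = rate_ci(deaths=10, n=2_000)    # small cohort study
print(big)    # narrow interval
print(small)  # much wider interval, so weaker conclusions
```

The smaller cohort produces an interval several times wider for the same underlying rate, which is exactly why a 200,000-person study supports stronger conclusions than a small tuberculosis-patient cohort.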
Now one of you guys come up and find [INAUDIBLE].. SARAH: Carrying the team. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: So who wants to come up? Or does no one [INAUDIBLE]? SARAH: Let's throw down, right? Fixing to scrap. MICHAEL SHORT: OK, you can just pull it out. SARAH: OK, Are you sure? MICHAEL SHORT: Yeah. SARAH: OK. I don't want to break things. MICHAEL SHORT: No, pulling it out's fine. If you jam it in, you can bend the pins. And that's happened here before. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yeah, if you want to take a minute to send each other the links, go ahead. No, I like this, though, is you can-- you can find a graph that supports something. And you can cite it in a paper. And you can get that paper published. But looking more carefully at the data does sometimes call things into question. AUDIENCE: Just like [INAUDIBLE]. MICHAEL SHORT: Like, I think you guys found a good example of that mouse cell study that looks like it supports hormesis, but you can't say so for sure. Make sure no one's waiting for their room. No one's kicking us out. AUDIENCE: Have we got a paper that I found here but we can't open up on there. MICHAEL SHORT: Interesting. Can you send me the link? AUDIENCE: [INAUDIBLE] AUDIENCE: Wait, that wasn't an option. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yeah. I mean, we can continue this. There's-- we're not-- since we're not going to the reactor since that valve was broken, let's keep it up. AUDIENCE: Hey, [INAUDIBLE] workbook and [INAUDIBLE] put it in the log book. AUDIENCE: That's your fault. AUDIENCE: [INAUDIBLE] AUDIENCE: I wasn't even [INAUDIBLE].. AUDIENCE: [INAUDIBLE] Email us by name. AUDIENCE: [INAUDIBLE] AUDIENCE: It's not over yet. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yeah, actually, I like this. This will be a good-- quite a good use of recitation. I'll keep my email open in case folks want to send things to present. AUDIENCE: That's the whole title. 
GUEST SPEAKER: So one-- one of the main problems that we had with the hormesis effect was that all of the studies that we've seen seem to cover a large scope of like tissues, different effects, and all sorts of things, like, yeah, there's a lot of studies. There's a lot of trends. But, like, the things in particular that they're studying are all over the place. And a lot of the-- a lot of the research done, like these studies here, are not actually meant to study hormesis. It's kind of like recycled data that's used from some other study. And they're kind of like pulling from multiple sources, which increases the uncertainty. Then, additionally, we have conflicting epidemiological evidence of low dosages. So where, in one instance, you may see a reduction in breast cancer mortality, you'll see excess thyroid cancer in children, other, which is-- MICHAEL SHORT: That's the same study that was just shown, the Cohen 1995 residential radon study. GUEST SPEAKER: Yeah. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: [INAUDIBLE] [LAUGHTER] GUEST SPEAKER: And so I think-- we're not-- I don't think we're trying to disqualify hormesis as, like, completely wrong. I think one of the biggest issues that we're taking with it is that it's a small effect, if anything. It's something that we really don't know about. It's hard to quantify. And it's, at the end of the day, really just not worth it, not worth looking into because of all of the variable-- variables that go into it. And the effects that, like, we just don't know about. We don't understand it. So, yeah, fire away. MICHAEL SHORT: That's a great viewpoint, actually. Yeah, Monica? AUDIENCE: [INAUDIBLE] OK, so it says support for radiation hormesis [INAUDIBLE] cell in animal studies, OK? And then it cites an example. Can you tell me how that, like, you know, supports what you're saying? AUDIENCE: Can you just highlight the part? MICHAEL SHORT: Oh, right-- right up here. AUDIENCE: OK. GUEST SPEAKER: We haven't seen it in humans.
AUDIENCE: Well, often, biological studies are done on rats because they have similar effects to humans. But it's a lifespan of, like, 1/10 of a human's lifespan. So, biologically, that's accepted. GUEST SPEAKER: Medicine also is not accepted until it works on humans, not on animals. AUDIENCE: [INAUDIBLE] GUEST SPEAKER: So we can cure cancer in rats all day. But, like, if it doesn't work in like the human body, then it just-- we still don't use it, like, it needs to clear the hurdle of human usefulness before we actually use it. MICHAEL SHORT: Let's actually look at this paragraph. They relate to carcinogenesis in different tissues and the dose-response relationships [INAUDIBLE].. AUDIENCE: So there's a line that says the evidence for hormesis in these studies is not compelling since the data may also be reasonably interpreted to support no radiogenic effect in the low dose range. MICHAEL SHORT: Oh, that's interesting. Now, how would one interpret-- because you showed the Cohen data. So how would one interpret that to mean no effect? I'm trying now to determine in this-- are the claims of this paper that you've been [INAUDIBLE]? And this brings up, actually, another point. They do agree that there's been hundreds of cell and animal studies. They cite three human studies. So since we have the time, you guys may want to look for more than three human studies, done at the time of this writing. It's not fair to take ones that were done afterwards. AUDIENCE: [INAUDIBLE] GUEST SPEAKER: What? Let's find out. AUDIENCE: After 2000. MICHAEL SHORT: It might say at the bottom of the first page. AUDIENCE: Oh, wait, in the-- in the [INAUDIBLE].. MICHAEL SHORT: 2000, yep. Yeah. So if you want to refute that point, you may want to find more human studies pre 2000. It wouldn't be fair to do otherwise. But, actually, I liked what you said. So what you're proposing-- if there's a mostly blank board, is that most people should adopt the model that looks something like this.
This is the axis of how much bad, where that's 0. And this is dose in gray. And whether your model does this, or this, or this, it sounds to me like you are defining a-- like you're defining a kill zone. [INAUDIBLE] maybe the-- GUEST SPEAKER: Yes. MICHAEL SHORT: The point isn't whether or not hormesis exists. The effect may be so small that who cares. But the bigger discussion is how much is that, not is a little bit good. Is that what you're getting at? GUEST SPEAKER: Yeah, the like, maybe it does look like this. But the dip is small, really not that different from the linear threshold model, we noticed. MICHAEL SHORT: Oh, so in addition to being a basic science question, could the issue of hormesis almost be a sidetrack in getting proper radiation policy through? That's a point I hadn't heard made before, but I quite like it. Because it's not like you're going to recommend everyone smokes three cigarettes a day or, you know, everyone gets blasted by a little bit of radiation once a year as part of a treatment. I don't think anyone would buy that. Even if it did help, I don't think anybody would emotionally buy that. But by focusing on-- you know, there's a nice expression: the most important thing is to make the most important thing the most important thing. It means don't lose sight of the overall goal, which is if you're making policy on how much radiation exposure you're allowed, do you focus on saying, a little bit is actually good, or do you focus on saying, here's the amount that's bad? And anything below that, we shouldn't be regulating or overregulating because there's no evidence to say whether it's good or bad outside the kill zone. I quite like that point, actually. It means that the supporters of radiation should chill out as well. Cool, all right, so any other studies you want to point out? GUEST SPEAKER: We had a couple of abstracts. MICHAEL SHORT: Yeah, let's see. GUEST SPEAKER: But I don't-- I'm not sure. AUDIENCE: [INAUDIBLE] GUEST SPEAKER: OK.
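The curves sketched on the board can be written down as toy functions. The shapes and constants below are arbitrary illustrations of the three competing dose-risk models (linear no-threshold, threshold, and hormetic), not fits to any of the data discussed:

```python
def lnt(dose, k=1.0):
    """Linear no-threshold: every increment of dose adds excess risk."""
    return k * dose

def threshold(dose, d0=0.1, k=1.0):
    """No excess risk below a threshold dose d0 (in gray)."""
    return 0.0 if dose <= d0 else k * (dose - d0)

def hormetic(dose, d0=0.1, k=1.0, b=0.5):
    """Toy hormetic curve: a small beneficial dip below d0
    (negative = net benefit), then excess risk above it."""
    return -b * dose * (d0 - dose) if dose <= d0 else k * (dose - d0)

# The three models only disagree below d0; they coincide above it.
for d in (0.0, 0.05, 0.1, 0.5):
    print(d, lnt(d), threshold(d), hormetic(d))
```

All three curves agree at high dose (the "kill zone"); they disagree only below the threshold, which is exactly where the policy argument above says the effect may be too small to matter.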
AUDIENCE: Some of the other ones don't compare hormetic models. But they look at-- they say [INAUDIBLE]. It's like-- GUEST SPEAKER: Do you want to come up? AUDIENCE: Yeah, this one says [INAUDIBLE].. GUEST SPEAKER: All right. AUDIENCE: [INAUDIBLE] AUDIENCE: [INAUDIBLE] AUDIENCE: It basically compares threshold models with no-threshold models in [INAUDIBLE].. AUDIENCE: [INAUDIBLE] So perhaps hormetic is still better for you, but they-- the [INAUDIBLE] was good enough with [INAUDIBLE].. MICHAEL SHORT: So what they're saying is the-- the choice of model really doesn't matter, as long as it fits through the data that we've got. And it seems to be, again, what happens in the low-dose regime is less important, right? AUDIENCE: And it will-- they were satisfied when it fell from the [INAUDIBLE]. MICHAEL SHORT: So they're saying the best estimate of this-- interesting. AUDIENCE: They prefer no threshold [INAUDIBLE].. MICHAEL SHORT: That's funny. "If a risk model with a threshold is assumed, the best estimate is below 0 sieverts. But then how is their confidence interval from-- oh, less than 0 to 0.13. They don't quantify how much lower it goes because a negative dose doesn't make sense. No. So, yeah, it's a strong conclusion. But it looks-- looks fairly well supported to say that we can't say with those confidence intervals that they give if there is or isn't a threshold. Interesting. What do you guys think of this? So what would you delve into the study to try to agree with or refute this claim? AUDIENCE: They use a linear quadratic model only, it looks like. So they're not considering any of the other proposed models, which is a little-- maybe not sketchy, but it just seems like it'd be very easy to consider other models and why didn't they do that. MICHAEL SHORT: Sure. You know, what no study has gotten into yet is, what's the mechanism of, let's say, ill effect acceleration. 
This is something that, at least at the grad school level, we try to hammer to everyone constantly is not just what's the data, but what's the mechanism. What's the reason for an acceleration of ill effects? So if you guys had to think with increasing radiation exposure, let's say we wanted this linear quadratic model idea, what could be some reasons or mechanisms for an increased amount of risk per unit dose as the dose gets higher? Yeah? AUDIENCE: Well, your body [INAUDIBLE].. But then-- so at some-- you get more dose-- you get more dosing [INAUDIBLE]. It just keep fixing itself. And once you get past a certain point, then it can't [? fix itself ?] [? fast enough. ?] The additional damage keeps snowballing events. And they're giving it more damage to curb more radiation because you would run out of-- of various [INAUDIBLE]. MICHAEL SHORT: Sure. Works for me. Yeah, I like that-- the idea there was that you've got some capacity to deal with damage from radiation. And then once you exceed that capacity, you don't also-- with a higher dose, you don't also ramp up your capacity to deal with that dose. So in the linear region, let's say, you're somewhat absorbing the additional ill effects of dose by capacity to repair DNA or repair cells. Then once you exceed that threshold, you're beyond that point. So that could be a plausible mechanism for why there could be a linear quadratic model that could be tested, certainly with single cell or multi cell studies, like these-- these radiation microbeams or, you know, injecting something that would be absorbed by one cell [INAUDIBLE] irradiated, and seeing what the ones nearby do. So you could count that as number of mutations, number of cell deaths, anything, something that could be quantitatively tested. So that's pretty cool. I actually quite like this study. It's awfully hard to poke a hole in-- in the logic used here. The claims aren't outrageous. They're saying, this is what the data is saying. 
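For reference, the linear quadratic model discussed here has a standard form in radiobiology for cell survival, S = exp(-(alpha*D + beta*D^2)); the quadratic term makes each additional gray more damaging, consistent with the repair-saturation mechanism proposed above. The alpha and beta values below are arbitrary placeholders, not fitted constants:

```python
import math

def surviving_fraction(dose_gy, alpha=0.15, beta=0.05):
    """Linear-quadratic cell survival: S = exp(-(alpha*D + beta*D^2)).
    alpha (1/Gy) and beta (1/Gy^2) are illustrative values only."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

# The effect per unit dose grows with dose because of the quadratic
# term, one plausible shape for exceeding the repair capacity:
for d in (0.5, 1.0, 2.0, 4.0):
    print(d, surviving_fraction(d))
```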
If you change the model, you can or not have a threshold and still get an acceptable fit. Can we actually look in the study itself? One thing I want to know is, what sort of-- did they do meta-analysis, or did they-- yeah, so this was on the Japanese atomic bomb survivors. So did they analyze previous data, or did they get their own? And then if so, what was the sample size? Somewhere it'll be, like, yeah, [INAUDIBLE].. So where [INAUDIBLE]. GUEST SPEAKER: Where am I-- where should I be looking for this-- MICHAEL SHORT: Probably further down in any sort of methodology section-- materials and methods, here we go. OK, here it is, 86,500 something survivors. Oh, yes, with lots of follow up. AUDIENCE: But how are you able to determine the dose? Like-- MICHAEL SHORT: That is a good question. AUDIENCE: Because especially for-- if we're looking like low dose, and you're estimating, it's very easy to, like, estimate wrong, or, like, because then-- then it calls into question you have-- [INAUDIBLE] modeling they're using. MICHAEL SHORT: Mhm. So that's a great question is, how do they know what dose those people got? So how would we go about trying to trace that? This is when you dig back in time. They reference this, the data, Pierce et al, whatever, whatever. So if you can go to Web of Science, pull up this Pierce et al paper. Look at cited references. Yeah, right there. And look for that 1996 Pierce study. Let's see if it has it. You can just like control F for Pierce, and we'll find it. Pierce and [INAUDIBLE]. Yeah, 1996, that's the one. GUEST SPEAKER: Where? Which one? This one? MICHAEL SHORT: [INAUDIBLE]. This is the 1996 one. Yep. So let's see if we can trace this back and find out how they estimated the dose of these folks. GUEST SPEAKER: So I just go to full text? MICHAEL SHORT: Yeah. AUDIENCE: How [INAUDIBLE]. MICHAEL SHORT: OK. So interesting, this LSS cohort.
So there was some life span study, which was also referred to actually in the lecture notes as one of the original studies, says, who met certain conditions concerning adequate follow up. Although estimates of the-- OK, I want to see the next page. Although we estimate-- that might be what we're looking for. Number of survivors, let's see. AUDIENCE: It's 92%. MICHAEL SHORT: OK, here we go, materials and methods. The portion of the LSS cohort used here includes the same number of survivors for whom dose estimates are currently available, et cetera, with estimated doses greater than 5 millisieverts is [INAUDIBLE]. Table 1 summarizes the exposure distribution. So let's go find table 1 and see where the data came from. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: So it turns out that this is specifically-- DS-86 weighted colon dose in sieverts. Interesting. AUDIENCE: It [INAUDIBLE]. So how did they get that? MICHAEL SHORT: I don't know. But it sounds like we need to find this LSS, this Just LSS. So let's look at the things that this paper cites. Find this LSS. So I'm walking-- what I'm doing here is walking you through how to do your own research. And if someone comes to you with some internet emotional argument of, this and that about radiation is wrong, instead of yelling back louder, which means you lost the argument, you hit the books. And this is how you do the research. AUDIENCE: LSS-85, does that mean it was [INAUDIBLE].. MICHAEL SHORT: Probably. Version of-- title not available. I hope it's not that one. Can you search for LSS? Nothing? So let's go back to the paper and find what citation that was. If you go up a little bit, I think there was like a sup-- a superscript up to the last page, I'm sorry. There was a superscript on LSS stuff. AUDIENCE: So general documentation of the selection of LSS cohorts [INAUDIBLE].. MICHAEL SHORT: Thank you. All right, let's find references 9 and 10 in the-- yeah, [INAUDIBLE]. AUDIENCE: Can you click one of the References tab? 
MICHAEL SHORT: Oh, yeah, up there, References. Awesome! 9 and 10, OK. Let's find them. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: So let me show you quickly how to use Web of Science to get what you're looking for, if I could jump on. GUEST SPEAKER: [INAUDIBLE] up here? MICHAEL SHORT: You don't have to, yeah. But thank you for being up here for so long and running this. So we're looking for-- where was-- the article was here. Went into references. I guess that was like the last-- I don't want to close all your tabs. Here we go. So GW, is that Beebe and Usagawa. So we'll go to Web of Science, look for authors, any paper with those authors. So you can do a more advanced search. This is where things get really interesting and specific. So ditch the topic. Search by Beebe and add a field, Usagawa. And then anything with these two folks in the author field that is indexed by Web of Science will pop up. Nothing. Did I spell anything wrong? Usagawa, of course. That's unfortunate. Last thing to try is add wild cards. Interesting. This is actually one place where I would use Google to find a specific report. Because you're not looking to survey a field that's out there, but you're looking for any document that you can confirm is that document. Let's head there. Oh, it looks like Stanford's got it. That's something that references it. So at this point, we've hit the maximum that we can do on the computer. But if you finally want to trace back to see how were the Hiroshima data acquired, take these citations, bring it to one of the MIT librarians. Chris Sherratt is our nuclear librarian. AUDIENCE: He's a nuclear librarian? MICHAEL SHORT: And we have a nuclear librarian, yeah. MIT libraries is pretty awesome. So when you're looking for anything here in terms of research or whatever, there's actually someone whose job it is to help you find nuclear documents. And chances are, this is a pretty big one. So I wouldn't be surprised if we have a physical or electronic copy.
So we're now like one degree of separation away from finding the original Hiroshima data, where we can find out how did they estimate that dose. So I think this is fairly-- hopefully, this is fairly instructive to show you how do you go about getting the facts to prove or disprove something, knowing the-- not just the physics that you know, but how to go out and find that stuff. Now, I did see a bunch of sources from the pro hormesis team. You still want me to show them? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: OK. Thanks. All right, you just want to hold this up while your-- let's go to your sources. OK, here we go. AUDIENCE: All right. MICHAEL SHORT: So walk us through what you found. GUEST SPEAKER: I just need to open them up. AUDIENCE: Go through them all, or-- MICHAEL SHORT: Yeah, let's do them all. GUEST SPEAKER: There's not too much. Kind of-- OK, so, I unfortunately was not able to find like too many pretty graphs, or data, or anything of the sort. But if you look up, what did I search for this? I think I just looked up radiation hormesis. And this is one of the articles that turned up. And it seems to be pretty well cited. You can see it's been cited 184 times. And kind of the quick look through the citations, from what I saw, seemed to be in support of it. And if you actually look at the abstract itself, where is it? AUDIENCE: [INAUDIBLE] GUEST SPEAKER: Yeah, well-- the last sentence is pretty excellent. "This is consistent with data both from animal studies and human epidemiological observations on low-dose induced cancer. The linear no-threshold hypothesis should be abandoned and should-- and be replaced by a hypothesis that is scientifically justified and causes less unreasonable fear and unnecessary expenditure." MICHAEL SHORT: You know what? I want to see what are the human epidemiological observations that they cite. GUEST SPEAKER: Yeah, so unfortunately, the MIT libraries does not have an electronic copy of this article. And I wasn't able to find one. 
But going through some of the citations for it-- MICHAEL SHORT: Before you do, could you go back to the article? GUEST SPEAKER: Sure. MICHAEL SHORT: I want to point something out. GUEST SPEAKER: Yes. MICHAEL SHORT: Can you tell if this was peer reviewed? GUEST SPEAKER: I do not know how to do that. MICHAEL SHORT: It appears to be a conference. GUEST SPEAKER: OK. MICHAEL SHORT: Not all conferences require peer review in order to present the papers. So while conference proceedings will typically be published as a record of what happened at the conference, we don't know if this one was peer reviewed and checked for facts by an independent party. Could you go up a little bit, and maybe there'll be some information on that? Oh, it did go in the British Journal of Radiology. OK, that's a good sign. So conference proceedings, you don't know. But in order to publish something in a journal, you do because then in order to get in the journal, things have to be peer reviewed to meet the journal standards, regardless of whether they came from a conference or just a regular submission. So, OK, that's good to see. So, now, what else you got? GUEST SPEAKER: And then one of the key sentences that I found right here, adaptive protection causes DNA damage prevention, and repair, and immune system or immune stimulation. It develops with a delay of hours, may last for days to months, decreases steadily at doses above about 100 milligray to 200 milligray and is not observed anymore after acute exposures of more than about 500 milligray. That's all pretty interesting. Like I said, unfortunately, I couldn't find the actual paper. So you can't really delve into some of those claims. But I tried to look at some of the citations that delved into them. And this is where my presentation gets a little bit shakier because I'm not particularly good at parsing some of this complex stuff very quickly. MICHAEL SHORT: Let's do it together. GUEST SPEAKER: All right. 
[INAUDIBLE] MICHAEL SHORT: If you could click Download Full Text in PDF, it'll just be bigger. GUEST SPEAKER: OK. MICHAEL SHORT: There we go. GUEST SPEAKER: So it seemed to me this one was more looking through the statistics of various studies. I'm not entirely sure. But I think the conclusion-- [INAUDIBLE] There we go. So the very last paragraph, "the present practice assumes linearity in assessing risk from even the lowest dose exposure of complex tissue to ionizing radiation. By applying this type of risk assessment to radiation protection of exposed workers and the public alike, society may gain a questionable benefit at unavoidably substantial cost. Research on the p values given above may eventually reveal the true risk, which appears to be inaccessible by epidemiological studies alone. MICHAEL SHORT: So what are they going on claiming [INAUDIBLE] versus not being willing to claim it? GUEST SPEAKER: So it seems like they're saying that at the current, there's not really a problem-- a statistically valid assertion of the linear no-threshold model and that the benefits to society gained from that are not worth the cost to society from that assumption. MICHAEL SHORT: So what sort of costs do you think society incurs by adapting a linear no-threshold dose risk model? GUEST SPEAKER: I mean, it could pose unnecessary regulations on like nuclear power, which could be arguably better for society. MICHAEL SHORT: Sure. Nuclear power plants emit radiation, fact, to use the old cell phone methodology. There's always going to be some very small amount of tritium released. The question is, does it matter? And if legislation is made to say absolutely no tritium release is allowed, well, you're not going be allowed to run a nuclear plant. That's not the question we should be asking. The question we should be asking is, how much is harmful? So I think that's what this study is really getting at is I'm glad to see someone say, you may have a benefit. 
But the cost is not worth the benefit. Like I-- I had multiple of the same arguments with different people when they were complaining, well, how dare you expose me to any amount of radiation at any risk that I can't control. I used to protest outside Draper Labs for 30 years protesting nuclear power. I was like, OK, how did you get there? They were like, oh, I drove. What? In a car? Do you even know the risks per mile of getting on the road, let alone in Cambridge specifically? No? Well, I was like, you should really consider where you put your effort. It's-- again, it's emotions versus numbers. I'm going to go with numbers because I tend to make bad decisions when I follow my emotions, as do most people because most decisions are more complex than fight or flight nowadays. Yeah? AUDIENCE: So a lot of the discussion just seems to be around like expanding [INAUDIBLE]. But a lot of the arguments don't seem to like really [INAUDIBLE]. But, yeah, like there's a certain extent, like, oh, you will see [INAUDIBLE]. MICHAEL SHORT: Yeah. AUDIENCE: [INAUDIBLE] are doing the same. MICHAEL SHORT: You make a great point. That's why I like your-- your chosen idea so much is, well, you didn't say chosen. That's what I-- yeah. Yeah, the question we should be asking ourselves is not what is the dose-risk relationship, but when should we actually care. It's like both sets of studies have kind of come to the conclusion that, nah, right? AUDIENCE: [INAUDIBLE] dose doesn't really matter. GUEST SPEAKER: Yeah, and then I found this last one is a little bit more assertive. It's kind of just hitting the same nail on kind of the elimination of the linear no-threshold model. But then it does go on to make some more powerful claim right here. "These data are examined within the context of low-dose radiation induction of cellular signaling that may stimulate cellular protection systems over hours to weeks against accumulation of DNA damage."
MICHAEL SHORT: Was this the paper cited in the other one that actually said hours to weeks? GUEST SPEAKER: I believe so, yeah. MICHAEL SHORT: OK, cool. GUEST SPEAKER: And then we can actually-- MICHAEL SHORT: [INAUDIBLE] this one? GUEST SPEAKER: Yes. We can look up the full text on Google Scholar. MICHAEL SHORT: That's OK. When you know what you're looking for, you can verify it. That's-- that's a useful thing for Google is like to find known content. But if you're trying to survey a field in Google, no. GUEST SPEAKER: That's not what I wanted. MICHAEL SHORT: Not yet. I'm sure-- I'm sure they're working on it. But they're not Web of Science yet. GUEST SPEAKER: All right. AUDIENCE: [INAUDIBLE] GUEST SPEAKER: Does anybody see a Get The Full Paper button? Oh, wait, right here, right? MICHAEL SHORT: Yep. That's it. GUEST SPEAKER: OK. Sign in? MICHAEL SHORT: Sounds like we don't subscribe to this. GUEST SPEAKER: Oh, I was able to get to it somehow. Well, yeah. AUDIENCE: I have another article supporting this claim, though. MICHAEL SHORT: OK. GUEST SPEAKER: But this one-- AUDIENCE: Submit it, or bring yours up, or whatever. GUEST SPEAKER: And then this one-- this one just had some nice data. If I'm going to summarize, it had-- it was looking at the amount of DNA damage instances, comparing normal background dose to, like, very, very low dose. And the very, very low dose was significantly less than the normal background dose. So that just kind of shows that like very low levels of radiation are like no worse for you than just background dose, which is interesting. MICHAEL SHORT: Cool. GUEST SPEAKER: Yeah. MICHAEL SHORT: I also want to make sure, do you guys have more articles you want to show? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: If you want to send it to me, I'll put it up here. GUEST SPEAKER: All right, I minimized because I didn't just want to leave your email. MICHAEL SHORT: Oh, I don't care. There's nothing-- GUEST SPEAKER: OK. MICHAEL SHORT: I'll bring it back up.
So that's all the ones you sent? Cool. Actually, this one-- this debate is turning out a whole lot more interesting than previously because, well, because you're thinking. It's actually really nice to see this. And this is the-- AUDIENCE: [INAUDIBLE] MICHAEL SHORT: I'm not surprised. Don't worry. It's just pleasant to have a debate about something controversial with a whole group of people who are thinking and researching rather than shouting and like throwing plates. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Oh, no, if you want to throw a chair, but I might throw one back. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: I wonder if anyone's gone out recently and has come up with all of the pro- and anti-hormesis studies and actually written a paper that says, that's not the point, because, really, what we're getting-- huh? AUDIENCE: You could write that. MICHAEL SHORT: No, I think you could write that paper now. AUDIENCE: Well, oh. MICHAEL SHORT: It would make for a pretty cool undergrad thesis, actually. Yeah? Maybe I can tell you a little bit about what an undergrad thesis actually entails because the seniors are all asking. But it's good for you to know ahead of time. So the main requirement for an undergrad thesis is it's got to be your work. That doesn't mean you have to have collected the data yourself, like done an experiment. But it has to be some original thought, or idea, or accumulation of yours. So trying to settle this debate and trying to figure out what would be a proposed chill region to say, forget the linear threshold or no threshold. That's for the basic scientists. If you are a government and want to legislate something that actually captures should people be afraid or not, defining that region would be a pretty cool study to do in the meta-analysis of lots of other studies, tracing back how worthy-- I mean, a lot of people refer to the Hiroshima data set because that's about the biggest one we have.
In addition to folks with radon or folks that smoke, they were all exposed to the same thing in relatively the same area. So it's a good control group of people. But how was-- how were those doses estimated? You have to dig that up. And the act of digging that up and then recasting all of these new studies on the basis of everything we've learned since would make for a pretty cool undergrad thesis topic. So as undergrad chair, I wouldn't say no to that. Threshold and other departures from linear quadratic curvature in the same data set appears to-- is it the LSS data set? Let's try to get the full text. Awesome! I think it's looking good. Great! Now I've seen that name before. Interesting. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Interesting. They propose another model called a power of dose, a power law. And they say, depending on this-- there's little evidence that it's statistically different from one, which is-- what do they call it-- the linear threshold, quadratic threshold, or linear quadratic threshold, OK? So, again, it seems to be yet another paper saying, I don't think it matters. Statistics says it doesn't matter. You could fit any model to this data. Let's get to the methods. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Interesting. So dose response for all non-cancer mortality in the atomic bomb survivors. So, also, in this case, it's mortalities not caused by cancer. AUDIENCE: Like, caused by radiation disease? Or is that caused by [INAUDIBLE]? MICHAEL SHORT: So this would be-- I think what they're getting at is, is there a response, or is there a change in the amount of mortality not due to cancer and the-- the-- AUDIENCE: Health benefits other than decreasing risk of cancer. MICHAEL SHORT: Or in this case, health detriments, right? Because in this-- you know, it never goes negative. You can't really tell in some cases. Let's see. Yeah, quite hard to tell, especially considering. And so at the low doses, what would you guys say for the low dose data?
AUDIENCE: That doesn't matter. MICHAEL SHORT: I see a pretty well-defined chill zone right there, right? AUDIENCE: Chill zone? MICHAEL SHORT: We're definitely still in the chill zone at 0.4 sieverts of colon dose. And that's a pretty hefty amount of dose. You know, we're talking eight or nine times the allowed amount that you're able to get in a year from occupational safety limits. Once the doses get higher, things seem to get a little more deterministic or statistically significant. But, yeah, look at all the different models. The linear threshold, quadratic threshold, linear quadratic threshold, power of dose all go straight through not just like in the error bars, but almost straight through most of the data points, except for the really far away ones. So this is a pretty neat study, showing, like, hey, the relationship does not appear to matter for doses of consequence. I would call 2 sieverts a dose of consequence based on our earlier discussion of biological effects. Luckily, it doesn't go much farther than that. You don't want a lot of people to have received doses beyond 10 gray. But this is pretty compelling to me to say, like, we can argue about what the real model is and what the underlying mechanism is, but is this a question we really should be asking ourselves when the total risk-- let's say, when the total risk to an organism reaches about 100%, once you reach a dose where it doesn't even matter, then is this a question that we should really be debating in the public sphere? I love the outcome of this particular debate. Lots of statistics, don't have time to parse. Is there anything else, Chris, that you wanted to highlight in this study? AUDIENCE: This appears to [INAUDIBLE] comments on Professor Donald Pierce on [INAUDIBLE]. MICHAEL SHORT: Oh, OK, well-- AUDIENCE: Do you think it could be the same Pierce? MICHAEL SHORT: Maybe. It was a UK Pierce, I think. That's pretty cool.
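The point about the models all going through the same low-dose data can be made concrete with a minimal numerical sketch. All parameter values below are invented for illustration (not taken from any study); they are chosen so the three model shapes agree at 2 sieverts, which makes it visible how little they differ down in the "chill zone":

```python
# Illustrative only: three candidate dose-risk shapes, with made-up
# parameters tuned so they agree at ~2 Sv. Doses in sieverts; "risk"
# is a dimensionless excess relative risk for illustration.

def lnt(d, slope=0.05):
    # linear no-threshold: excess risk proportional to dose
    return slope * d

def linear_quadratic(d, a=0.03, b=0.01):
    # linear-quadratic: linear term plus a quadratic term
    return a * d + b * d ** 2

def threshold(d, d0=0.2, slope=0.0556):
    # threshold model: zero excess risk below a threshold dose d0
    return 0.0 if d <= d0 else slope * (d - d0)

# At 2 Sv the three curves nearly coincide; at 0.1 Sv they differ by
# less than one percentage point of excess risk, well inside typical
# epidemiological error bars.
for d in [0.05, 0.1, 0.4, 2.0]:
    print(d, lnt(d), linear_quadratic(d), threshold(d))
```

This is the statistical point the paper is making: with realistic error bars, data in the low-dose region cannot discriminate between these shapes.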
So anyone else have any other papers they want to show for or against or for our sort of collective new conclusion? Which is that we should just relax. Cool. Well, that went-- yeah? Charlie? AUDIENCE: I just had a question, like, what would be like a proposed use of radiation hormesis [INAUDIBLE]? [INAUDIBLE] MICHAEL SHORT: So let's say you could prove beyond a shadow of a doubt that a little bit of radiation exposure was a good thing. You might then prescribe radiation treatments in order to reap the benefits. I don't think there's been a single study that shows that there's like deterministic benefits from irradiating people. Some of the studies show that folks that have gotten exposed via various routes do show a lower incidence of cancer. So you could almost think of it like a vitamin, not an injectable vitamin. But-- so back-- there are lots of pictures online and stories from way up in the north, in Russia and other northern countries, where they expose you to ultraviolet radiation to stimulate the production of vitamin D in your skin cells because in the absence of an ingestible source of vitamin D, you make it naturally, but not when there's eternal darkness. So they'd actually have kids stand in front of a UV lamp, which does have ill effects. That can also cause skin cancers, but the benefits to the organism of generating vitamin D that you need for health are greater. So that might be an example. These-- these sorts of ideas are not that far-fetched. If you put little kids in front of UV lamps, which you know can do bad things, but also does more good things, then who's to say it shouldn't happen for radiation? Well, no one's to say yet because we have no real conclusive proof that it is helpful. But that was the-- yeah? AUDIENCE: Have there been any mechanisms that [INAUDIBLE]? MICHAEL SHORT: You mean in-- for radiation or for something else? AUDIENCE: For radiation. MICHAEL SHORT: The mechanisms of-- so that one study that Chris showed that-- what was the idea?
That-- [INAUDIBLE]. The first one that you showed, the mouse one, and then the one that Chris mentioned where a little bit of radiation dose stimulated the immune system. That might be a potential good thing, where the damage or death of a few cells may stimulate the nearby ones to ramp up an immune response, thus snuffing out any other infection or problem that's coming up. That could be a use. But it would have to be proved with much more confidence than anything I've seen today. So that's a good question. Yeah, like how would you use it? Use it like a vitamin, like a UV lamp, like a SAD lamp. Although, I don't think SAD lamps do anything bad, the Seasonal Affective Disorder, the most unfortunate acronym in the world. Yeah. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yes. I don't know if that would be easy to swallow. Yeah. Cool. All right, any other thoughts from this exercise? I think I'll do more interactive classes like this. It's good to hear you guys talk for a change. Cool. OK.
MIT 22.01 Introduction to Nuclear Engineering and Ionizing Radiation, Fall 2016. Lecture 5: Mass Parabolas Continued, Stability, and Half-Life.

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, guys. We're actually slightly ahead of where I thought we'd be at this point, so I'm only going to spend about half of today's lecture finishing up some new material on mass parabolas and stability. I also got a comment in through the anonymous box that said please leave a little bit of time after class for questions. So you can get them out right away, because I'm usually running off to teach some other class at, like, the IDC or some other building. So from now on, I'll try and leave about five minutes at the end of class for questions on today's material, and we'll make up with the second half-hour or 25 minutes of this class being for all the questions on the material so far in the first two weeks. But first I wanted to give a quick review of where we were Wednesday and launch back into mass parabolas, which are ways of looking at nuclear stability in relative numbers and even or oddness of nuclei. So you saw last time we intuitively derived the semi-empirical mass formula as a sum of volume, surface, Coulomb, asymmetry, and pairing, or whether things are even-even or odd-even terms with the coefficients in MeV gleaned from data, and the forms of the-- and the exponents right here gleaned from intuition. Here we assume that the nucleus can be thought of like a big drop of liquid with some charged particles in it. And so the droplet should become more stable the more nuclei there are-- or the more nucleons there are. But then you have some more outside on the surface that aren't bonded to the others.
All of the protons are repelling each other over linear length scales, because the radius of this liquid drop would scale like A to the 1/3. There's an asymmetry term, which means if the neutrons and protons are out of balance, there's going to be somewhat less binding energy. And then there's this extra part that tells you whether the nuclei are even-even or odd-odd. And this works pretty well. If you remember, we looked at theory versus experiment where all the red points here are theoretical predictions, and all the black points are experimental measurements. And for the most part, they look spot-on. It generates the classic binding energy per nucleon curve that you see in the textbook and can predict from the semi-empirical mass formula. Zooming in and correcting for, let's say, just getting absolute values of errors, you can see that, except for the very small nuclei and a few peaks, which we explained by looking even closer, the formula, well, it predicts nuclear stability quite well on average. Again, this line right here, if there's a dot that lies on this blue line, it means that theory and experiment agree. And a deviation by a few MeV here and there, not too bad. But we started also looking at different nuclear stability trends, and we noticed that for odd mass number nuclei, there's usually only one or sometimes no stable isotopes per Z, whereas for even ones, there's quite a few more. And we're going to be now linking up the stability of nuclei versus what mode of decay they will take in order to find a more stable configuration. We looked quickly at the number of stable nuclei with even and odd Z and noted that these places right here where there are no stable nuclei correspond to technetium and promethium. There's no periodic table on the back of this wall, but behind my back on the other wall, there's that periodic table where you can see the two elements that are fairly light with no stable isotopes. That's what those correspond to.
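The five terms being reviewed here can be coded up directly. This is a sketch using one common set of fitted coefficients in MeV (different textbooks fit slightly different numbers, so treat the exact values as an assumption):

```python
def semf_binding_energy(A, Z):
    """Semi-empirical mass formula binding energy in MeV.

    Coefficients are one common textbook fit (MeV); other sources
    quote slightly different values.
    """
    aV, aS, aC, aA, aP = 15.8, 18.3, 0.714, 23.2, 12.0
    volume = aV * A                               # bulk attraction
    surface = -aS * A ** (2 / 3)                  # unbonded surface nucleons
    coulomb = -aC * Z * (Z - 1) / A ** (1 / 3)    # proton-proton repulsion
    asymmetry = -aA * (A - 2 * Z) ** 2 / A        # neutron/proton imbalance
    # pairing: even-even nuclei gain, odd-odd nuclei lose, odd A gets nothing
    if A % 2 == 1:
        pairing = 0.0
    elif Z % 2 == 0:      # even Z and even A means N is even too: even-even
        pairing = aP / A ** 0.5
    else:                 # odd Z with even A: odd-odd
        pairing = -aP / A ** 0.5
    return volume + surface + coulomb + asymmetry + pairing

# Iron-56 should land near the famous peak of the binding energy
# per nucleon curve, around 8.8 MeV per nucleon.
print(semf_binding_energy(56, 26) / 56)
```

The same function reproduces the overall shape of the curve mentioned above: rising steeply for light nuclei, peaking near iron, then sloping gently down for heavy nuclei.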
And the peaks correspond to what we call magic numbers or numbers of protons or neutrons where all available states at some energy are pretty much filled. And this goes for both protons and neutrons. So something with a magic number for both N, number of neutrons, and Z, number of protons, is going to be exceptionally stable. And we'll see how that's used as a tool to synthesize the super heavy elements that we believe should exist. And finally we got into these mass parabolas. I found this to be a particularly difficult concept to just get mathematically. If you remember, we wrote out the semi-empirical mass formula and said if you take the derivative with respect to Z, as we did it, you would get the most stable Z for a given A. And we started graphing for A equals 93 where niobium is stable. That was just the one I had on the brain from some failures in lab earlier this week. We started plotting where those nuclei-- what is it-- the relative masses are for a fixed A. So let's regenerate that one right now, because we were a little fast at the end of last lecture. Then I want to generate one for A equals 40. And you'll see something kind of curious. I'm going to leave this up here for a sec. If you notice, for odd A nuclei, there's only one parabola, whereas for even A, there are two. Why that is, we're going to see when we look at the table of nuclides. But notice this nucleus right here can decay by either positron emission or beta emission to get to a more stable form. And there are many real examples, and I'm going to show you how to find them. So let's start off by going back to the table of nuclides, finding niobium-93. Just go up one more chunk. And there we are. Niobium-93 is a stable isotope. And if you want to see where it came from, you can scroll down a little bit and see its possible parent nuclides right here. So let's say that niobium-- we'll draw it right there-- is stable. We'll put it at the bottom of this parabola. And let's work down in Z.
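Setting that derivative with respect to Z to zero gives a closed form for the bottom of the parabola. A sketch, using the same assumed coefficients as the textbook fit above (aC = 0.714 MeV, aA = 23.2 MeV) and the neutron-hydrogen atomic mass difference of about 0.782 MeV:

```python
def most_stable_Z(A):
    """Z that minimizes the mass parabola for a fixed mass number A.

    Derived by setting dM/dZ = 0 for the semi-empirical mass formula
    written with atomic masses. Coefficients in MeV are assumed from
    a common textbook fit; (m_n - m_H)c^2 is about 0.782 MeV.
    """
    aC, aA = 0.714, 23.2
    dm = 0.782
    numerator = dm + aC / A ** (1 / 3) + 4 * aA
    denominator = 2 * aC / A ** (1 / 3) + 8 * aA / A
    return numerator / denominator

# For A = 93 this lands near Z = 40.6, which rounds to Z = 41:
# niobium, the stable isobar picked out in lecture.
print(most_stable_Z(93))
```

Note that the result is a real number; the physical nucleus sits at the nearest integer Z, which is why the bottom of the drawn parabola usually falls between two isobars.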
So we'll move to zirconium. Zirconium-93 ostensibly has a very similar atomic mass. But if you remember that 93 AMU is a rather poor approximation for the actual mass of all nuclei with A equals 93. In fact, if you look very closely at the atomic masses, zirconium-93 is 92.90. Niobium-93, well, it looks like we have to go all the way to another digit there. 92.906. You have to go to, like, even more digits 92.906375, 92.906475. So we go down in Z, and we've actually gone up in mass by looks like the sixth or seventh digit in AMU. If we go up in mass, we go down in binding energy. That tells us that there's something that's less stable. And if you notice, we went down by a very small-- or we went up by a very small amount of mass. Notice also that its beta decay energy is really, really small, 91 kiloelectron volts. So why don't we put zirconium just above? And we note that that decay will happen by beta, where the beta, let's say if we have an isotope with mass number A, protons Z, and let's just call it symbol question mark. In beta decay, we have the same A. We'll have a different Z. We're going to have to give these symbols. Let's call this parent, and we'll call that daughter. Plus a beta, plus an electron antineutrino. And what has to happen to that Z in order for everything to be conserved? It's the same reaction that we've got here for-- I'm sorry-- for zirconium. So you'd have to have one fewer proton to release one electron. And so that becomes same A but different Z. And this is the beta decay reaction. Let's go back a little farther. We'll look at the possible parent nuclide for-- did anyone have a question? AUDIENCE: Don't you need one more proton [INAUDIBLE]?? PROFESSOR: Let's see. Which direction are we going in Z here? That goes to yttrium. It actually looks like it's going down. Oh, yeah, for beta decay. I'm thinking-- I have the reaction backwards. Sorry. I need one more proton to account for the extra negative charge. You're right. OK. 
Yep, I was thinking backwards, because we're now climbing up the decay chain in reverse order. So this could have come from yttrium-93 with a much higher energy of three MeV. So let's put yttrium right here. That gives a beta decay. And we'll just go one more back to strontium-93, which has an even higher beta decay energy. So let's put that up here. And let's take a look at its mass real quick. The mass of strontium-93, 92.914 AMU. If we go back to niobium-93, now it's noticeably different to, like, four significant digits instead of six. 92.914 versus 92.906. And so that shows you that a tiny bit of mass in AMU corresponds to a pretty significant change in binding energy by that same conversion factor that we've been using everywhere. 931.49 MeV per c squared per AMU. Let's see. Yeah. OK. So let's go now in the other direction, in the positron direction. Niobium can also be made by electron capture from molybdenum. So let's put molybdenum right here. Let's say that around half an-- what did we have here? It was like half an MeV. Like that. And let's see. Molybdenum-93 could have been made by electron capture from technetium-93 with an energy of 3.201 MeV, even more extreme. We'll go back one more, because there's a trend that I want you guys to be able to see. And this could have come from electron capture from ruthenium. I think I may have said rubidium last time, but Ru is ruthenium-93. And that 6.3 MeV, something like that. And this is where we got to yesterday. Now I'd like us to take a closer look at the decay diagrams, which tell us what possible decay reactions can happen in each of these reactions. Since we're right here on the chart, let's take a look at ruthenium turning into technetium by what it says, electron capture. So note that on the table, you can click on electron capture, and if it's highlighted, then the decay diagrams are known. It's not known for every isotope, but for a lot of the ones you'll be dealing with, it is.
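The arithmetic being done on the board here, atomic mass difference in AMU times the conversion factor, is a one-liner. A sketch, using atomic masses close to the six-decimal values read off the table in lecture (treat the exact digits as approximate):

```python
AMU_TO_MEV = 931.494  # MeV per c^2 per atomic mass unit

def q_beta_minus(m_parent_u, m_daughter_u):
    """Q value of beta-minus decay from atomic masses in AMU.

    With atomic (not nuclear) masses, the electron masses cancel
    for beta-minus, so Q is just the mass difference times 931.494.
    """
    return (m_parent_u - m_daughter_u) * AMU_TO_MEV

# Zirconium-93 to niobium-93, masses roughly as quoted in lecture:
m_zr93 = 92.906470   # AMU, approximate
m_nb93 = 92.906373   # AMU, approximate
print(q_beta_minus(m_zr93, m_nb93))  # ~0.09 MeV, near the ~91 keV on the table
```

This is why the sixth decimal place of the mass matters: a difference of about 0.0001 AMU is already roughly 90 keV of decay energy.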
And you get something I have to zoom out for-- a lot, a lot of different decays. What I want you to look at is this one here on the bottom that I'll zoom in to. That should be a little more visible. So notice that if you want to go down the entire 6.4-something MeV, it usually proceeds by beta-plus or positron decay, by either method. And as you go up the chain, as these energy differences get smaller, look what happens to the probability of getting positron decay. It shrinks lower and lower and lower. So there's a trend that the larger the decay energy for this type of reaction, the more likely you're going to get positron decay. And in fact, where we left off last time is in order to get positron decay, the Q value of the reaction has to be at least 1.022 MeV, better known as at least two times the rest mass energy of the electron, because in this case, to conserve charge and energy, you shoot out a positron, and you also have to eject an electron in order to conserve all the charge going on here. So there you have it. Now let's look at the lower energy decay of technetium to molybdenum, which had something like 3 MeV associated with it. So we'll click on technetium, and its energy is 3.2 MeV. Let's take a look at its electron capture. Significantly simpler. Already what do you notice about these positron to electron capture ratios? Anyone call it out. AUDIENCE: Electron capture is much more likely. PROFESSOR: Indeed. When the energy of the decay goes down-- notice that only these decays are allowed-- the electron capture suddenly becomes much more likely. But notice that it does not let you go directly from 3.2 MeV to 0. There is no allowable decay here. So this is probably a change of-- that's 3.2. That's 1.3. A little less than 2 MeV. All of a sudden, electron capture becomes much more likely, but positron decay is not disallowed yet. So we can say electron capture or positron decay right there. Everyone with me so far? So let's go to the really low energy one.
We'll click on molybdenum-93 and see how it decays with an energy of 0.405 MeV to niobium. Anyone want to guess what's allowed? AUDIENCE: Electron capture only. PROFESSOR: Electron capture only. There's not enough energy for positron decay. And, indeed, it draws funny, because there's a metastable state. But if you scroll down here, there are two pathways allowed, both of which by electron capture. Decay diagram's quite a bit simpler. So we leave this one here by saying it can only decay by electron capture. Any questions on the odd A before we move on to the even, which is a little more interesting? Cool. OK. Let's move on to the even case. So for here, I'm going to go back to the overall picture of the table of nuclides. Click on around where I think potassium-40 is. Looks like I got there. And I want to point out one of these features. If you wanted to undergo decay and maintain the same mass number, that's diagonally from upper left to lower right. See how all the isotopes here have a 40 in front of them. The really interesting part is as you cross this line, you go from stable to unstable to stable to unstable again. The colors here: dark blue represents stable, and dark gray represents long lifetimes of over 100,000 years. So this is one of the reasons you find potassium-40 in the environment. In fact, 0.011% of all potassium in you and everything is potassium-40. It's what's known as a primordial nuclide. It's not stable, but its half-life is so long that there's still some left since the universe began or whatever supernova that formed Earth got accumulated into the earth. But notice it can come from-- it can decay by a couple of different methods. So let's pick one of those stable isotopes, calcium-40, and put that as the bottom of the parabola on this diagram. So we'll put calcium here. And in a relative sense, we'll put a calcium point right there for its total mass.
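The pattern just walked through, electron capture always open but positron emission only above 1.022 MeV, can be wrapped in a quick check. A sketch, using the Q values read off the table for the chain above (roughly 6.4, 3.2, and 0.405 MeV):

```python
TWO_ME_C2 = 1.022  # MeV, two electron rest mass energies

def allowed_decays(q_ec_mev):
    """Which modes are energetically open for a given electron-capture Q.

    Electron capture only needs Q > 0; positron emission additionally
    needs Q > 2 m_e c^2 = 1.022 MeV in the atomic-mass bookkeeping.
    """
    modes = []
    if q_ec_mev > 0:
        modes.append("EC")
    if q_ec_mev > TWO_ME_C2:
        modes.append("beta+")
    return modes

# The chain from lecture: Ru-93 (~6.4 MeV), Tc-93 (3.2 MeV), Mo-93 (0.405 MeV)
for name, q in [("Ru-93", 6.4), ("Tc-93", 3.2), ("Mo-93", 0.405)]:
    print(name, allowed_decays(q))
```

This says nothing about which open mode dominates; as the lecture's decay diagrams show, positron emission tends to win at large Q and electron capture takes over as Q shrinks toward the 1.022 MeV threshold.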
And it could have come from beta decay from potassium-40 or electron capture from scandium-40. So let's look at potassium-40. It can beta decay for about, oh, 1.3 MeV. So potassium is right here. Let's say it could beta decay with about 1.3 MeV. And we'll trace potassium back a little bit, figure out where it would have come from. Interesting. Doesn't tell us. OK, forget that. Let's trace calcium back and say there's scandium-40. And scandium-40 can decay with-- wow-- an enormous 14.32 MeV. Let's put that like here. Anyone want to guess which mode, electron capture or positrons, much more likely? AUDIENCE: I think positrons. PROFESSOR: Probably positron. Let's take a look. Oh boy. Another complicated one. But the whole way down, positron, positron, positron for all the most likely decays. You won't find a drawing to every single line. I believe that they know that at some point, drawing extra lines is futile, and they just all overlap each other. So I don't know exactly how the algorithm works, but it does draw up to some number of possible decay chains. If you want to see every single one, they are tabulated in a very, very long list down below. I'm never going to ask you to do something with all of these, because that would be insane unless it's a relatively simple decay, like that has two or three possibilities. And let's see. This could have come from electron capture from titanium-40 with 11.68 MeV. Wow. OK. Up here. And there's titanium. And let's go in the other direction. So I do know that potassium-40 can decay into argon-40. So let's jump there. Argon is a stable isotope too. So potassium-40 can decay into argon-40 by electron capture. OK, good. A more respectable 1.505 MeV. Is positron decay allowed? AUDIENCE: Yes. PROFESSOR: Yes. Why is that? AUDIENCE: [INAUDIBLE] PROFESSOR: Over 1.022 MeV. Yeah. Anyone have a question? No? OK. So we've got kind of a kink in our mass parabola. Yeah?
AUDIENCE: Actually, yeah, so it's possible if it's over 1.022, but it's still very unlikely. PROFESSOR: That's correct. AUDIENCE: Until we get to these higher orders, like 10. PROFESSOR: Yep, so once the Q value's satisfied, it is technically possible. But if you had something with the decay energy of, like, 1.023 MeV, it would be exceedingly unlikely. So in fact, we can take a look at this. This, I would say, is also going to be on the exceedingly unlikely level, and we can take a look. So if we look at the decay diagram, we know it makes positrons. They're not even really listed. Interesting. So that process would not be allowed, but this one, because that's about 1.5 MeV, should be allowed. But since that branching ratio or the probability of that happening is already so low, I wonder if it even says. Yep, beta ray with a max or average energy of 482.8 keV. We're going to go over why that energy is so low when we talk about decay next week. With the relative intensity of something with a lot of zeros after the decimal place. So there you go. Like you said, energies near 1.022 MeV, slightly above it, are extremely unlikely but possible and measurable. Cool. And then let's see what could have made argon-40. Could have been beta decay from chlorine-40. So chlorine maybe was here. And I don't think I have to draw any more. So we've got a funny-looking parabola with a kink in it, because really, you have two mass parabolas overlapping. I'm going to go back to the screen so that the diagram from the notes makes a little more sense. What we've kind of traced out here is that there's two overlapping parabolas here. There's the one with the-- what is it-- the odd Z and the even Z. So there you go. Just like the one on here, which I think is for a different mass number. Yep. 102. We get the same kind of behavior where things will mostly follow the lower mass parabola, but sometimes if something gets stuck here, it can go either way to get more stable.
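That 482.8 keV figure is exactly what the mass bookkeeping above predicts: the maximum positron kinetic energy is the electron-capture Q value minus the 1.022 MeV threshold. A quick check with the potassium-40 numbers from the board:

```python
def beta_plus_endpoint(q_ec_mev):
    """Maximum positron kinetic energy (MeV) for a beta-plus branch.

    The endpoint is the electron-capture Q value minus 2 m_e c^2,
    since two electron masses drop out of the atomic-mass bookkeeping.
    """
    return q_ec_mev - 1.022

# K-40 -> Ar-40: Q_EC = 1.505 MeV, so the positron endpoint is
# 1.505 - 1.022 = 0.483 MeV, the 482.8 keV read off the table.
print(round(beta_plus_endpoint(1.505), 3))  # -> 0.483
```

This also explains why a decay sitting just above the threshold, like the hypothetical 1.023 MeV case in the dialogue, would emit positrons with almost no kinetic energy and at a tiny branching ratio.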
So I want to stop here for new stuff, because this is precisely where I thought we'd be at the end of the week. And in the next half an hour, I'd like to open it up to questions or working things out together on the board or anything else you might have had. Yeah? AUDIENCE: I have a question about the parabola things. PROFESSOR: Sure. AUDIENCE: There's-- you said multiple paths, so it doesn't have to do the little peak in the middle? Like, could it follow the lower parabola or the upper, or does it have to jump over? PROFESSOR: It's going to go in whatever way makes it more stable. So you're never going to have a nucleus that's going to spontaneously gain mass in order to get to a different path. You can only go down on the mass axis. But let's say you happen to be starting here at potassium-40. You can go down via either mechanism to the next mass parabola down. AUDIENCE: But if you were argon, you would go up. That's what you would do. PROFESSOR: That's right. If you're at argon, you're stuck. And in fact, if you want to take a look, what do I mean scientifically by "stuck"? I mean stable. Argon-40 is a stable nucleus that comprises 99.6% of the argon. So that's what I mean by "stuck" is stable. And if we look at the rest of the table of nuclides for similar-looking places-- so let's hunt near potassium-40. So notice potassium-40 right here has got stable isotopes to the upper left and the lower right. If we look back over here, manganese-54. Same deal. It's got a stable isotope to the upper left and a stable one to the lower right. How much you want to bet that when we click on manganese-54, it's got two possible parent nuclides-- or I'm sorry, two possible decay methods. So let's take a look. Manganese-54 can either electron capture and positron decay to chromium-54 or beta decay to iron-54. Let's take a look at one more to hammer the point home, and I think that'll probably be enough. Cobalt-58 right near nickel-58 and iron-58. Interesting. 
That one's not allowed unless there's more down here. So you can electron capture to iron-58, but there's no allowed decay to-- what was it? Nickel-58. OK. Let's look for more. Chlorine-36 has argon and sulfur on either side. There it is. Beta decay and electron capture. And how much you want to bet there's basically never a positron here? But basically, not actually never. So you get a positron 0.01% of the time and electron capture 1.89% of the time. Where is the other 98-and-change percent? Right here in the beta decay. So in this case, chlorine-36 will preferentially beta decay. If you also notice, it's a-- let's see. I don't know if that actually matters. But I am going to say it's more likely to beta decay. So when you sum these up, you get 100% of the possible decays. Let's see how many energy levels there are there too. Hopefully not too many. That qualifies as not too many. Yeah? Sean? AUDIENCE: Are the changes in mass always going to be attributed to beta decays or electron captures or positron [INAUDIBLE]?? PROFESSOR: They'll be due to those as well as some other processes, which we're going to cover on decay. But if you notice, I've been giving you a lot of flash-forwards in this class. We've introduced cross-sections as a thing, the proportionality constant between interaction probabilities. We're going to hit them hard later. I've also been kind of introducing or flash-forwarding different methods of decay. So there's also alpha decay. There's also isomeric transition or gamma emission. There's also spontaneous fission. This is the whole basis behind how fission can get working without some sort of kick-starting element. So maybe now's a good time to show you. Let's go to uranium-235 and see how it decays. It goes alpha decay to thorium-231 most of the time. And if you look how, it's not terrible. We can make sense of this. It also undergoes SF, which stands for spontaneous fission. So one out of every seven-- what is it? 
Seven out of every billion times, it will just spontaneously fizz into two fission products. And this is why if you put enough uranium-235 together in one place, you can make a critical reactor. In reality, you don't tend to want to put enough U-235 together to just spontaneously go critical. We use other isotopes as kickstarters. For example, californium, I think it's 252. Let's take a quick look. There we go. Californium-252 undergoes spontaneous fission 3% of the time. It's even heavier, even more unstable. So there is a reactor called HFIR, or the high flux isotope reactor at Oak Ridge National Lab. One of its main outputs is californium kickstarters for reactors. So to get things going, you put a little bit of californium in as a gigantic neutron source, and then you don't really need it anymore once it gets going. So it's one of the safer ways of starting up a reactor is put in a crazy neutron source, and then once it gets going, take it out or leave it in and burn it. I'm not actually sure which one they do. Yep? AUDIENCE: Is the name californium based on California? PROFESSOR: It is. When we get to-- now is a good time to introduce the super heavy elements since you asked. So a lot of these older elements were named after-- actually this is kind of a hobby of mine. So I don't know if you guys saw the periodic table outside. I collect elements, because if you're going to collect something, you might as well collect everything that everything else is made of. It's the same reason I went into nuclear energy. I started off course 6, or 6.1, specifically electrical. And I was like, well, I could be designing, like, the next screen for a cell phone, or we could solve the energy problem, which is the problem all others are based off of. So my whole life theme has been go to the source. That's why I came here in high school and never left. That's why I declared course 22. 
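The californium kickstarter just mentioned can be put on a rough numerical footing. A sketch estimating the neutron output of one gram of Cf-252; the half-life (~2.65 y), spontaneous-fission branch (~3.1%), and neutrons per fission (~3.8) are literature values quoted from memory, so treat the result as order-of-magnitude:

```python
import math

N_A = 6.022e23
BARN = 1e-24  # cm^2 (not needed here, but the unit of choice in this field)
SECONDS_PER_YEAR = 3.156e7

# Cf-252 data (approximate literature values, quoted from memory):
half_life_s = 2.645 * SECONDS_PER_YEAR
sf_branch = 0.031            # ~3% of decays are spontaneous fission
neutrons_per_fission = 3.76  # average prompt neutrons per SF event

atoms_per_gram = N_A / 252.0
decay_const = math.log(2) / half_life_s     # lambda, s^-1
activity = decay_const * atoms_per_gram     # total decays/s per gram
neutron_yield = activity * sf_branch * neutrons_per_fission

print(f"~{neutron_yield:.1e} neutrons/s per gram")  # of order 2e12 n/s/g
```

A trillion-ish neutrons per second per gram is why a small californium pellet works as a startup source: you get an enormous neutron flux with no accelerator and no reaction to sustain.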
That's why I collect elements and probably is the reason for many other things which only a psychiatrist could diagnose. But let's look at some of the other elements. For example, yttrium. I think it has an isotope 40. Anyone know-- no, it doesn't have a 40. What about a 50? 60? 100? Whatever. At least it knew that Y was yttrium. Anyone know-- AUDIENCE: 89. PROFESSOR: --where this is coming from? 89? Seems high. Oh my god. You're right. AUDIENCE: I work with it. PROFESSOR: OK. Gotcha. You work with it. Awesome. Anyone know what this is all about, yttrium? There's a town called Ytterby in Sweden where large deposits of yttrium and ytterbium, or Yb, tend to be found. Or Db, named for Dubna. So let's say the really basic elements tend to come from Latin. Fe stands for iron, which actually stands for ferrum. Lead is plumbum. Gold is aurum. Silver, Ag, is argentum. I don't know if I'm saying that right. I never took Latin, and I've never heard it spoken, of course. And then a lot of the heavier and heavier elements as we go are being named for more and more famous scientists or places where they tend to be made, like Db. I'm going to guess 260 for a mass there. Oh, nice. For Dubna in Russia that has got one of the few gigantic super heavy element colliders where they're constantly synthesizing and characterizing these super heavy elements. So finally they said, you know what? They've made enough of these in Dubna. Let's name one of the elements after them. Or Sg, seaborgium, for Glenn Seaborg. Or No, nobelium, for Alfred Nobel. Yep? AUDIENCE: I just have a question for the actual mass parabola. PROFESSOR: Uh-huh. AUDIENCE: Like, do the parabolas ever, like, reach each other? PROFESSOR: Do they intersect? AUDIENCE: Yeah. PROFESSOR: I've never seen a case where they intersect. That would make for a crazy situation indeed.
However, part of what the homework assignment's about is to derive an analytical form for a mass parabola and then check the data to see how well it works. So for any cases where you have an even mass number, and you have either odd-odd or even-even nuclei, you can check those equations analytically to see if they'll intersect. AUDIENCE: And for the case of A equals 40, I'm not really sure what the top parabola is. PROFESSOR: So the top parabola for potassium-40-- let's take a quick look at how many protons and neutrons it has. Potassium has a proton number of 19, which means it has a neutron number of 21. So the top parabola is odd N and odd Z, where the bottom one is even N and even Z. Whereas for odd mass number nuclei, it has to be either odd-even or even-odd, else it would be even, which is a funny sentence when you say it all out loud. Yeah. So that's the idea here is that notice that the even-even parabola tends to be further down. All those nuclear magic numbers, 2, 8, 20, 28-- I'm not going to quote the rest. Those the little ones I know. All even numbers. So any other questions on these mass parabolas before we launch into super heavy elements? Yeah? AUDIENCE: [INAUDIBLE] the bump on the right? PROFESSOR: Uh-huh. AUDIENCE: How do you know that that's where [INAUDIBLE]?? PROFESSOR: Analytically or experimentally? Which question? AUDIENCE: Analytically. PROFESSOR: So analytically. Analytically there should be some isotope of-- well, not potassium. That wouldn't be allowed. So in this case, the stable element positions have got to kind of switch off, shouldn't they? So if that's potassium-40, that would still have to be potassium. You don't really have another choice. There isn't really a position there, is there, analytically? That's the interesting thing is that you can either be odd-odd or even-even for an even mass number. But you can't just take off one neutron from potassium-40, and then you've got potassium-39. Then you're on a different mass number. 
Or if you exchange a proton and a neutron, which you pretty much do in either of these directions. There's no way to get straight down here. AUDIENCE: Right. PROFESSOR: Yeah? AUDIENCE: For odd-odd, delta is negative, right? PROFESSOR: For odd what? For-- sorry? AUDIENCE: For odd-odd, delta is negative? PROFESSOR: Yeah, let's go back to that slide just to make sure. You mean the pairing term in the semi-empirical mass formula? AUDIENCE: Yeah. PROFESSOR: Yeah, so for odd-odd nuclei, indeed, delta's negative, which means lower binding energy, which means higher mass. And that's why we see it bump up on the mass right here. Yeah? And do you have a second part of the question? AUDIENCE: It was more so how to relate [INAUDIBLE] like the binding energy to that mass parabola. PROFESSOR: We can actually relate-- so we can relate the binding energy to the mass and the mass parabola analytically, because the binding energy is equal to Z times protons plus N times mass of neutron minus the actual mass of that same nucleus, A comma Z. So they're actually directly related, just negatively. So something with a higher mass is going to have a low binding energy, which means it's less bound and less stable. And indeed, the further up the mass scale we go, the higher those beta or electron capture or positron energies are. And there's another thing you can check too, which is the half-life. Half-life is what we'll be talking about on Tuesday. It's how long before an average amount of a substance has undergone radioactive decay. So let's look at some of these isotopes and start looking at half-life trends as another measure of stability. So potassium-40 has an exceptionally long half-life. So it's relatively stable. Let's take a look not at either the stable isotopes, but let's go up the mass parabola chain in one direction. Calcium-40, scandium-40. So let's take a look at scandium-40. Scandium-40 has a half-life less than a second. 
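The sign of the pairing term and the binding-energy-to-mass relation described above can be checked against the semi-empirical mass formula directly. A sketch using one common coefficient set (15.8, 18.3, 0.714, 23.2, 12 MeV; different textbooks fit slightly different values), showing the odd-odd kink at A = 40:

```python
# Semi-empirical mass formula with one common coefficient set (MeV).
# The coefficients are illustrative; exact fits vary by textbook.
def semf_binding_energy(A, Z):
    aV, aS, aC, aA, aP = 15.8, 18.3, 0.714, 23.2, 12.0
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:
        delta = aP / A**0.5       # even-even: extra binding
    elif Z % 2 == 1 and N % 2 == 1:
        delta = -aP / A**0.5      # odd-odd: delta negative, higher mass
    else:
        delta = 0.0
    return (aV * A - aS * A**(2 / 3) - aC * Z * (Z - 1) / A**(1 / 3)
            - aA * (A - 2 * Z)**2 / A + delta)

def nuclear_mass(A, Z):
    """Nuclear mass in MeV/c^2: constituent masses minus binding energy."""
    m_p, m_n = 938.272, 939.565  # proton and neutron rest masses, MeV
    return Z * m_p + (A - Z) * m_n - semf_binding_energy(A, Z)

# A = 40 isobars: Ar-40 (even-even), K-40 (odd-odd), Ca-40 (even-even).
m_ar, m_k, m_ca = (nuclear_mass(40, Z) for Z in (18, 19, 20))
# The odd-odd nucleus sits above the line joining its even-even neighbors:
print(m_k > (m_ar + m_ca) / 2)  # True -- the kink in the A = 40 parabola
```

The same two functions are exactly the "directly related, just negatively" statement above: push the binding energy down with the negative pairing delta and the mass comes up, which is the upper parabola.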
And it's got quite a high decay energy by whatever method you want to use. Let's go up to titanium-40. Anyone want to guess? Do you think the half-life is going to go up or down? Let's see if the half-life goes down. We know the decay energy goes up. Indeed. Half-life goes down from 182 milliseconds to 50 milliseconds. And let's say titanium-40 could have come from-- wow. Two proton decay from Cr-42 with a half-life of 350 nanoseconds. So as we go up the mass ladder and down the stability ladder, the half-life decreases, which kind of follows intuitively. Something that's exceptionally stable should have a half-life of infinity, and something that's exceptionally unstable should just blow apart instantly. Like, remember the first week of class, we talked about helium-4 grabbing a neutron, becoming helium-5, and instantaneously going back to helium-4. If you look at helium-5, its half-life is measured in MeV, or 7 times 10 to the minus 7 femtoseconds. So if helium-4 absorbs a neutron, it simply doesn't want it and gets rid of it in 10 to the minus 7 femtoseconds, which would tell us that it's exceptionally unstable. So I hope that's a long-winded answer to that question about what does it mean to be going up in the mass levels. Any other questions on mass parabolas or the liquid drop model or stability in general? Yes. AUDIENCE: For something that goes upwards [INAUDIBLE] just because the mass [INAUDIBLE].. PROFESSOR: So if you're changing one neutron to a proton in each case, you're switching back and forth from the odd-odd to the even-even mass parabolas. So if I were to redraw these dots more to scale, this would have to be on the odd-odd. And, well, let me draw them a little better. Yep. AUDIENCE: OK. PROFESSOR: That's on the odd-odd, and that's on the even-even. AUDIENCE: OK. PROFESSOR: Yeah. So excuse my poor drawing skills. But if you're switching one proton to a neutron or vice-versa, by definition, you're jumping back and forth between these parabolas. 
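"Half-life measured in MeV" is the energy-time uncertainty relation at work: for a state that short-lived, what's actually tabulated is an energy width Gamma, and the lifetime is roughly hbar divided by that width. A sketch, assuming a width of order 1 MeV for helium-5 (its ground state is known to be about that wide; the exact value is not taken from the table shown in lecture):

```python
HBAR_MEV_S = 6.582e-22  # reduced Planck constant in MeV * s

def width_to_lifetime(gamma_mev):
    """Mean lifetime (s) of a resonance with energy width gamma (MeV)."""
    return HBAR_MEV_S / gamma_mev

tau = width_to_lifetime(1.0)  # assume ~1 MeV width for He-5
print(f"{tau:.1e} s, or {tau / 1e-15:.1e} fs")  # ~6.6e-22 s, close to the 7e-7 fs quoted above
```

So a width quoted in MeV and a half-life quoted in fractions of a femtosecond are two notations for the same fact: the state falls apart essentially as fast as quantum mechanics allows.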
AUDIENCE: OK, thank you. PROFESSOR: That's a good question for clarification. You had a question too? AUDIENCE: Yeah, about the semi-empirical mass formula. When do you use that to find binding energy as opposed to, like, any of the other ways? PROFESSOR: I'm sorry. The semi-empirical mass formula is a good way to get an analytical guess at most of them. If you want an exact answer, always use the actual binding energy. AUDIENCE: So, like, how often is it used now? PROFESSOR: I would not say it's used much now except-- well, that's going to be one of your homework questions is this formula predicts that as you get heavier and heavier and heavier, nuclei should just continuously get less stable. And that was, as of when this was derived, let's say decades ago, we now know something different is happening. So if you look at the table of nuclides, you can sort of see some swells in the number of black pixels until it cuts off. And this region actually where we think super heavy elements happen, I want to jump to the actual table of nuclides, which I'll say is our snapshot of knowledge today, and go all the way to the top. And our knowledge kind of cuts off at these elements, which are, for now, temporarily named in a very uncreative way. We don't even know anything about them. Uun is probably going to have a proton number of what? 110. I don't know what the prefixes are, but UUU would be un-un-un 111. Probably has 111 protons. Beyond here, off the screen or probably up into the next room, it's predicted that once you approach the next magic number in nuclei, there should be an island of stability where it may not necessarily be totally stable, but the half-lives should go up again. And we should be able to synthesize super heavy matter. And if you actually graph neutron number versus half-life-- so notice how we were looking at half-life as a measure of stability. 
It starts to go up, then comes down, and then, to the extent of our knowledge, it is going back up again to the next predicted magic number. So what we think should be happening is half-lives should be continuously going up. And yeah? You had a question? AUDIENCE: Like, what do we do with these weird things? PROFESSOR: Well, whatever you want. It's going to be-- it should be dense as all heck, because nuclear matter is quite a bit denser than ordinary matter, and quite a bit is quite an understatement. So what would you do with super heavy matter? A lot of it could be used to probe the structure of matter. There's a lot about how the nucleus is constructed that we don't know. And beyond the scope of this course would also be an understatement. There's folks that are making their careers now on figuring out what are the forces between nucleons? Why do things spontaneously fizz at the rates that they do? You'll even hit a little bit of this in 22.02 when you can calculate the rough half-life for alpha decay using quantum tunneling through the potential barrier in a nucleus. And so the more nuclei we have to mess around with, the more data and real examples we have to study. But practical applications, well, I could imagine, we might find something denser than osmium. Osmium right now has a density of about 22 grams per cubic centimeter. This stuff, zirconium, is about 6.9 or so. Steel is like 8. Lead's like 11. Mercury is like 19. Have any of you ever played with liquid mercury before? This is a "don't try this at home, kids" kind of moment. My grandfather happened to be a dentist, so we happened to have a lot of mercury to mess around with. And it's, like, unintuitively heavy. It's unbelievable. A 1-pound jar is about that big. I think it would be cool if we could find something even denser. And then really, really dense matter happens to make really, really good photon shields and gamma-- not the Star Trek thing. I mean this in the actual nuclear physics sense. 
The best way to stop gamma rays for gamma shielding is just put more matter in front of it. And if we find a denser state of matter that's earth stable, you then have a smaller gamma shield. So there are practical applications too in radiation shielding. They also might make awesome nuclear fuel, because you better believe they're going to fizz like crazy. So who knows? Maybe we can-- I don't think that would be cost-effective, but it would probably work. So the way they're doing this is actually slamming calcium 48 nuclei into other super heavy elements that have exceptionally long half-lives. So if you can't read what the screen says, this here is berkelium, for Berkeley, with proton number 97, mass number 249. Let's take a look at the Bk-249, which happens to be way beyond uranium. So it's definitely not a stable isotope, but it has a half-life of 320 days. That means you can make a bunch of it, chemically separate it, make it into a target, and fire calcium-48 nuclei into it. Anyone want to guess, why do we use calcium-48? And I'll give you a hint and write the proton number for calcium. The isotope that we use is calcium-28-- or calcium-48. Anyone want to take a guess? Why start here? Why not just smash two berkeliums into each other? Calcium 48 happens to be exceptionally stable, because it's got two magic numbers. Its proton number is 20, one of those peaks of stability. And its neutron number is 28. So start with something super stable, something with a lot more binding energy to begin with, and you maximize your chance of making something with more binding energy that won't just spontaneously disappear. So there are reasons calcium 48 was chosen and not something heavier or lighter. If we go back to that article, you can see what happens here is you make some element 117, which has yet to be made, and it undergoes alpha decay until it reaches some rather stable-- you know, 17 seconds. That's pretty exceptional. 
And if you notice the trends here, as you decay, the alpha energy steadily goes down, and the half-life steadily goes up. And so what you do is you make a super, super heavy element, hoping that it will decay and rest in one of these islands of stability beyond the magic numbers that we know right now. Which I thought was super cool, because this is actually happening now. Like, new elements are made. I think we've been seeing one a year or so for the past few years on average. There might have been a year when there was more than one announced recently. There's only a few places in the world doing them, but you can start to-- already with two weeks of 22.01, you can start to get a handle on why they use the nuclei they do, and then what sort of things are you looking for? Decays with lower and lower energy mean you're already starting to get less steep on whatever imaginary mass parabola. Don't quite know how to draw this one, because it's beyond anything we know. And as the half-lives keep going up, you can tell that it's reaching a measure of stability. However, to get you started on the homework, for the open-ended problem-- I think I'll bring it up right now so you guys can take a look. So let's go to the Stellar site. Hopefully it doesn't call me. Good. And to problem set two. This is the way I know that everyone's seen the P set seven days before it's due, because I'm going to put it up on the screen so you can see it all the way at the end. Predicting the island of stability. Does the semi-empirical mass formula predict the island of stability? Well, let's start you off with the easier part of the question, which is yes or no. And I'm going to leave you to the why and the how. If we graph binding energy per nucleon versus mass number, the semi-empirical mass formula predicts something like this. What happens as we go beyond the realm of known mass numbers? Anyone? How should I extend this curve? What did you say, Alex?
AUDIENCE: I don't know. PROFESSOR: Just keep going. Yeah. Does this predict an island of stability? I don't think so. So that's one of the few questions I'm asking you in this homework. And it's up to you guys. Use your creativity. Again, this is an open-ended problem. I'm not looking for a specific answer. I want to see how you think and how you would change this formula to account and actually predict the island of stability while still satisfying the mostly correct predictions from the elements we know. So-- sorry, go ahead. AUDIENCE: So should it, like, converge a little bit? PROFESSOR: Well, you're on the right track. If you want to show stability, you'd want it to maybe have a higher value right here. Higher binding energy per nucleon would correspond to a lower mass, which would correspond to higher stability. So how would you predict this island of stability? And then more specifically, how would you reconcile the inaccuracies in the semi-empirical mass formula? Because we know it doesn't work very well for all cases. There are some cases like right around here where it works great, and there's some like right here and right here where it really doesn't. You can get things wrong by like 10 MeV, which is pretty significant. You know, that's like four digits on the mass scale, like, the fourth decimal place. That's huge to a nuclear engineer. So that's something to get thinking about. And remember I did tell you that there will be some open-ended problems. I'm going to mark them as open-ended so you actually know. We're not looking for a right or wrong answer. This is one of those kinds of things where we want to see how you think and what do you think is missing. There's other hard problems where we give you the answer, because I'm not interested in you deriving some insane expression and getting it right. I'm interested in the derivation process. What are the steps you choose? What sort of assumptions do you make? 
What sort of terms can you neglect and say, that's in the ninth decimal place. I'm going to forget it. So in this case, we give you the answer, because we're going to grade you on the process. And you can use the answer to check your process and see if you're on the right track or not. For the skill-building questions, we actually do want you to come up with some sort of an answer like explaining the terms in the semi-empirical mass formula or modifying an equation to calculate something else. We will be looking for a right answer there. But those are questions to make sure that you get the basics of the material. If you can answer all of the questions in the first half of these P sets fairly quickly, let's say in three or four hours, you're totally on the right track. The hard ones are there because this is MIT. And we want you to think beyond just knowing what's in the Turner book or the Yip book. Like I said, you guys are the leaders of this field. So any other questions on stability in general? Yes? AUDIENCE: Just a real quick reminder. When you say, like, even-even, are you talking protons, neutrons? PROFESSOR: Correct. So that would be even N and even Z or odd N and odd Z like in the reading and like on these mass parabolas. Yep. Any other questions? Yes. AUDIENCE: Is the only proof or reason that we say that there's an island of stability because the mass increases up to the point of unknown? PROFESSOR: There's a few-- so the question was, is the only reason people think there will be super heavy elements because the mass increases, right? AUDIENCE: Yeah. PROFESSOR: So in this case, the mass will always-- are you talking about now the total mass or-- AUDIENCE: Why is the idea that there is this island of stability? PROFESSOR: Ah, OK. AUDIENCE: If this doesn't prove it, do we have other reasons [INAUDIBLE]? PROFESSOR: We have a few things to go on. There are a number of different aspects of nuclear stability that are all pointing to the same conclusion.
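That open-ended homework question has a quick numerical yes-or-no part: evaluate the plain semi-empirical mass formula along the most stable Z for each mass number, and binding energy per nucleon falls monotonically past the iron peak, with no bump where the island should be. A sketch with one common illustrative coefficient set (not the unique "right" fit), pairing omitted since it alternates in sign and averages out along the valley:

```python
# Plain SEMF binding energy per nucleon along the valley of stability.
# Coefficients (MeV) are one common illustrative fit; pairing is omitted.
def be_per_nucleon(A):
    aV, aS, aC, aA = 15.8, 18.3, 0.714, 23.2
    Z = A / (2 + (aC / (2 * aA)) * A**(2 / 3))  # most stable Z for this A
    be = (aV * A - aS * A**(2 / 3) - aC * Z * (Z - 1) / A**(1 / 3)
          - aA * (A - 2 * Z)**2 / A)
    return be / A

# Past the peak near A ~ 60 the curve only falls -- no island predicted:
for A in (60, 120, 180, 240, 300, 360):
    print(A, round(be_per_nucleon(A), 2))
```

Every term in the formula is smooth in A and Z, so "just keep going" is the only behavior it can produce; reproducing an island requires adding something like shell (magic-number) structure, which is exactly what the homework asks you to invent.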
One of them, you can see on this graph here. If you look at the alpha decay half-life as a function of neutron number, it doesn't just increase or decrease monotonically. It swells up and down. And it reaches a relative maximum near certain magic numbers. We can confirm that with the lower mass nuclei. It doesn't work for really low mass, because tiny things don't tend to undergo alpha decay. But there are patterns that we're simply recognizing and saying, well, if this is the next magic number, it should continue to increase. And I should mention too, this scale is logarithmic. So the top right here is like 10 to the 4 seconds. Just so you know, there are 86,400 seconds in-- what is it-- a day. And 3 times 10 to the 7 seconds in a year. So if this graph-- let's say for Z 111-- were to continue on its track, it should reach like 10 to the 9 or 10 to the 10, which could be like 100-year lifetimes or 100-year half-lives, which means definitely you can chemically separate them and do things with them. I don't know if they would be safe enough to deal with. But we also don't really know what's going to happen. You can see that there is some uncertainty, and things don't always follow the trend. Even the error bars are outside the dashed lines. But so we have this to go on. We have the alpha decay half-life. We also have the alpha decay energy. As you approach an island of stability, something that's more stable won't give off as much kinetic energy to its alpha particle. There is also-- for the ones that you can actually measure that live long enough, you can measure their mass to charge ratio and actually get a good picture of their actual mass. So we would expect the mass defect to follow a certain trend as we go up. The mass is always going to increase. If you add more nucleons, it's going to increase. 
But the mass defect, which is the real mass minus the atomic number mass-- if stability were to increase, do you think the mass defect would increase or decrease with more stability? Let's take a quick look at this. If A were to stay the same, a shrinking real mass-- and remember, lower mass means more stability-- would mean a higher or a low mass defect? It would mean a lower mass defect or a lower excess mass as you'd call it. So in this case, you would expect the mass of the nucleus to be smaller than its A if it was going more stable. And all of these trends work in the same direction, which is saying, OK, so we have the alpha energy. We have the mass defect. We have the half-life all pointing to the same thing that something should be more stable. And we have some patterns to go on, but our understanding is kind of incomplete. So-- yeah? AUDIENCE: So if there's super heavy elements, do they exist somewhere in space, or do stars make them, possibly? PROFESSOR: Ooh. AUDIENCE: Or are they-- PROFESSOR: Good question. AUDIENCE: --currently made? PROFESSOR: So the question is, if super heavy elements exist, do they exist out there in space? I think there would be a couple places they would exist. The source of most of the elements beyond iron is supernovas, where regular old fusion doesn't cut it anymore. When you hit the maximum of this binding energy per nucleon curve, you're at about iron 56. That's why stars tend to form a core of iron before it goes really bad in whatever way it does for a star. There are multiple ways. When you get a supernova, you have an insane explosion, and the core gets compressed from the outside, forcing fusion of heavy elements to happen. That's because you're putting in extra kinetic energy. So it's like you have an endothermic reaction where if Q is less than zero, how do you make that reaction happen? Add kinetic energy, which can come from a tremendous explosion outside of the outer regions of the star. 
So who's to say that some of these super heavy elements aren't formed in supernovas? I think they would be. But would they actually make it out to be part of Earth and then, let's say, live the 5 billion years that Earth's been around? We don't know if their half-lives are long enough. There very well may have been some 5 billion years ago or when the supernova was made. But we haven't detected any here on Earth. So we know that they're not 5 billion years stable. Rather, I wouldn't even say we know that, but we have a pretty good idea. That's a great question: are they naturally made? Probably. Yeah. Any other questions? I like these outside the material ones. We can take things beyond our known universe, start to explain them.
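The endothermic-reaction point made just above can be made quantitative: when Q < 0, the projectile must carry more than |Q| of kinetic energy, because momentum conservation forces the center of mass to keep some of it. In the non-relativistic limit the threshold is E_th = -Q * (1 + m_projectile / M_target). A sketch with made-up numbers (the Q value and mass numbers below are purely illustrative, not data for any particular reaction):

```python
def threshold_energy(q_mev, a_projectile, a_target):
    """Minimum lab-frame kinetic energy (MeV) to drive an endothermic reaction."""
    if q_mev >= 0:
        return 0.0  # exothermic: no kinetic-energy threshold
    # Non-relativistic threshold: -Q scaled up by the center-of-mass share
    return -q_mev * (1 + a_projectile / a_target)

# Hypothetical fusion step with Q = -5 MeV, an A = 12 beam on an A = 56 target:
print(f"{threshold_energy(-5.0, 12, 56):.2f} MeV")  # 6.07 MeV, more than the bare 5 MeV
```

That extra margin above |Q| is the kinetic energy a supernova shock has to supply on top of the reaction's energy deficit, which is the "add kinetic energy" argument in plainer units.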
MIT_2201_Introduction_to_Nuclear_Engineering_and_Ionizing_Radiation_Fall_2016 | 10_Radioactive_Decay_Continued.txt

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: I got an interesting question in the anonymous comment box, and I want to see how many other people agree with it. The comment went something like, is everything in this field of nuclear science engineering computational in nature? Because so far, we've pretty much just thrown theory and simulations at you, we haven't done any experiments. So who shares this concern or wonder? So 1, 2, 3, 4-- OK, 5, yeah. So getting towards roughly half of you. So I can confidently answer, no, this is not a purely computational field. We just had to get you just enough science and physics so that you'd be able to understand some of the lab activities that we've got in store for you, with one of which Mike Ames here from the Nuclear Reactor Lab is going to be helping you-- actually, two of them he'll be helping with. MICHAEL AMES: Two? MICHAEL SHORT: Yeah. Well-- MICHAEL AMES: --your bananas. MICHAEL SHORT: The bananas and the thing that we dreamed up today, so-- MICHAEL AMES: This morning? OK. So you want an intro to those-- MICHAEL SHORT: Sure. MICHAEL AMES: --and I'll, I don't know-- MICHAEL SHORT: Fill in what I get wrong. Also, Mike's going to talk about what we're going to do together, which is called NAA, or Neutron Activation Analysis. There are many, many ways of measuring what sort of impurities may exist in materials, and this is among the most sensitive. We happen to have a nuclear reactor.
So what we will be doing, what I'm going to ask each of you guys to do for a special assignment that's not graded, except you'll need it for the problem set, so it kind of is, is I want each of you to bring something into me that weighs about 50 milligrams in one piece, fits in here, is not fissionable-- so if you have uranium at home, I shouldn't know about it. And we also ask that you don't bring anything in that is too salty, because sodium activates like crazy. And each of you, using your knowledge of radioactive decay that we learned Tuesday and today, and the Bateman equations in series radioactive decay next Tuesday and Thursday, are going to calculate what impurities exist in your sample. What is your sample? It's-- MICHAEL AMES: You're not going to be able to do the calculations. MICHAEL SHORT: We're going to make some estimates of-- MICHAEL AMES: Oh, OK. MICHAEL SHORT: The isotopes that you'll let us see. The shorts, yeah. I know we're not going to get every impurity, right? MICHAEL AMES: Well, we're not going to be able to run it next week. MICHAEL SHORT: Yes, but I will want your materials next week. OK. So by Tuesday, I'd like each of you to bring something in whose elemental composition you'd like to know, consisting of the following elements. Let's see. Problem sets, I think I've got it right up here. Let me just clone the screen. So you can see what we can look for. So this is provided to me by Mike. We're going to do what's called a short neutron activation analysis run looking for any of the elements up on this list. Shorts 1 with extremely short half-lives, and shorts 2 with elements in the half-lives of hours. And Mike, I had a question for you now that we're live. Can we count arsenic in that list? Because it's 24-point-something hours. MICHAEL AMES: Yeah, we could do arsenic. MICHAEL SHORT: OK. MICHAEL AMES: It's not a great shorts element. You'd probably have to have something with a bunch of arsenic in it.
MICHAEL SHORT: So if any of you guys have some food that you bought online and you don't know what sort of contaminants there are, or if you've got a piece of your fingernail and want to see if you are what you eat, or-- MICHAEL AMES: After telling you why fingernails would be a great sample, we might run afoul of the human subjects in research issue with fingernails. MICHAEL SHORT: What about dog fingernails? MICHAEL AMES: --believe it or don't. Ah, yes, your dogs are probably not-- MICHAEL SHORT: OK. So clip your pets' claws if you want to see if they are what they eat, or don't tell me what nail the thing came from, or get a-- slice up a piece of a peanut or whatever your favorite food is. Or if you want to see if there are any metal dyes used in your clothing, cut out a little 50 milligram square, it'll be like a fashion statement and an experiment at the same time, right? So we ask that it's about 50 milligrams, it's gotta fit in here, it's gotta be not that salty and not fissionable, and we're going to pack a couple of these in one of these rabbits, these polyethylene rabbits. We call them that, one, because everything in nuclear is named after animals and farm implements for some reason. Did I go over that with you guys the first day of class? Barns, shakes, pigs, rabbits? OK. So a rabbit is a little capsule. Do you pop it open or screw it open? Oh yeah, there's like a square nut at the top. Just a little capsule that goes through a pneumatic tube, kind of like the old bank machines, and it'll go firing into the reactor, sit there for a while, and get pneumatically sucked back out so that we can calculate the activation and decay of the isotopes within. And a pig is just a big heavy thing of lead where you keep pieces of things that you irradiated for shielding. So if you notice the sort of menagerie theme here, farms, pigs, rabbits, barns, shakes. Anyone else know any other farmy nuclear units? Yeah?
AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Or farmy anythings? AUDIENCE: They follow the [INAUDIBLE] detectors toads and bullfrogs. MICHAEL SHORT: So the detectors in [? nif ?] are called toads and bullfrogs. Why is that? AUDIENCE: Because the people who made the acronym [? like making their ?] acronyms, and [INAUDIBLE] bullfrog was not [INAUDIBLE].. MICHAEL SHORT: So actually bullfrog stands for something. AUDIENCE: Yeah. MICHAEL SHORT: That's pretty cool. OK. AUDIENCE: [INAUDIBLE] acronyms are [INAUDIBLE] MICHAEL SHORT: Yeah. AUDIENCE: [INAUDIBLE] acronym for acronym. MICHAEL SHORT: Oh yes, the acronym for acronym. As well as what I study, which is CRUD or Chalk River Unidentified Deposits. It's the gunk that builds up on fuel rods. Well, I talked to a fellow from Chalk River who took extreme offense to this detriment to nuclear power plants being attributed to his fine laboratory. MICHAEL AMES: Oh, I thought it was chromium-rich. MICHAEL SHORT: Oh, see, that would work, but they're not actually chromium-rich. Yeah. Cool. So Mike, do you want to say anything else? MICHAEL AMES: Yeah. I think I want to say-- so yeah. So-- MICHAEL SHORT: You want to introduce yourself, too? MICHAEL AMES: Oh yes, sorry. I'm Mike Ames. I work over at the Nuclear Reactor Lab mostly doing nuclear experiments, but I also run the NAA lab there. And I've been doing it for a while. So the idea-- the reason we've got these guys, it'll irradiate the samples in this. The easiest thing for me to do without losing any of your samples is [INAUDIBLE] [? there, ?] something gets irradiated in this, I slip the top off, and you probably can't see that, but it's for a little poly bag. I'll dump your sample into the poly bag. So something that's like one piece works the best. What I usually tell people, it's something that you could pick up reasonably easily with a pair of tweezers. That way if I drop it, I can actually pick it up with tweezers. But that kind of gives you a good size.
No-- nothing too powdery, because the powder is going to spread around and then you get contamination. Yeah, and that list there, I guess you said you posted it. MICHAEL SHORT: Yep. The list is posted on the Stellar site. MICHAEL AMES: Yeah, yeah. So yeah, we'll see mostly these light elements, the guys with the asterisks. I don't see that well. Yeah, gallium, I'm probably not going to see that. So-- MICHAEL SHORT: It'd be interesting if we found gallium in your dog claws or something. MICHAEL AMES: Yeah, magnesium, aluminum, titanium, vanadium, those are easy. Sodium, chloride, and potassium are easy. The manganese will come out nicely. Some of the elements of-- generally more interest down-- further down-- MICHAEL SHORT: Even further-- MICHAEL AMES: --chromium, those have longer half-lives, so I'm going to see those. MICHAEL SHORT: Yeah. MICHAEL AMES: Probably do like a five or 10-minute irradiation, let it decay for a little while, and then we'll do a couple of counts. Do you ever get over to the reactor? MICHAEL SHORT: We will be there a fair bit. MICHAEL AMES: OK, you guys are going to be there next week. MICHAEL SHORT: Yeah. So what do you usually use NAA for? MICHAEL AMES: NAA, well, the thing we've been using it for lately a lot is anything that's going to go in the core of the reactor, we want to analyze to see if there's any surprise elements-- to see how much cobalt is in a piece of steel that we're going to put in, because people don't usually measure the cobalt, but it activates really well. And so that causes problems later on when we have to take precautions for the cobalt-60 that shows up in things where you don't expect it. MICHAEL SHORT: Mm-hmm. MICHAEL AMES: My past in doing NAA was all environmental samples. So we did a lot of trace element, heavy metal chemistry on atmospheric particulates, rain water, ice cores, lake sediments, crude oil, coal, fly ash.
And so we would measure that whole stack of elements and those guys for environmental studies. MICHAEL SHORT: Cool. MICHAEL AMES: That was my time doing NAA. I don't know, I think that's enough. You want me back here next Tuesday or Thursday-- MICHAEL SHORT: Yeah, to explain a little bit more of the specifics of the process. MICHAEL AMES: --give you the five, 10 minutes. And if you guys could-- I mean, are you guys going to be able to come to the lab when we do the shorts? MICHAEL SHORT: Depends on when you do them. If it's early November, then yes. MICHAEL AMES: Yeah, OK. So when I do shorts, I put two samples in one of these guys, they'll shlink into the reactor and out. MICHAEL SHORT: Which you guys should see. It's pretty cool. MICHAEL AMES: Yeah, you can watch that part. And then I run it down the hallway and throw each sample on a detector. And while those samples are counting, I run another rabbit. So it's kind of an all-day thing running up and down the hall every half hour. So you could almost come anytime during the day while I'm running these and get one full round in half an hour. MICHAEL SHORT: Cool. MICHAEL AMES: I think that's the whole story. You can hang on to that if you want. MICHAEL SHORT: Yeah. Thanks, Mike. So you heard the charge. Bring me your dog claws, your eyebrows, your skin flakes, your scabs, your-- MICHAEL AMES: Oh yeah, can we-- MICHAEL SHORT: --food pieces-- MICHAEL AMES: --from hair? MICHAEL SHORT: Oh yeah, so no hair. MICHAEL AMES: We used to do a bunch of hair analysis. Hair is a pain in the neck because it-- MICHAEL SHORT: Staticky? MICHAEL AMES: --clings to everything and it gets stuck to parts. Yeah, we did some hair analysis for the Superfund site in Woburn, and it was a big success, but it was not pleasant to do the work. MICHAEL SHORT: So don't bring us your hair, but bring us your skin flakes, your scabs, your dog claws, your food scraps, your-- MICHAEL AMES: No skin flakes. MICHAEL SHORT: Just don't tell him.
MICHAEL AMES: Stuff that you can-- like I said, something that's like one good piece that you can pick up-- MICHAEL SHORT: Yeah, get creative. MICHAEL AMES: --would be great. MICHAEL SHORT: It doesn't have to be something that I said. As long as it's not fissionable or salty. MICHAEL AMES: And we might veto samples-- I do need to know what they are roughly before we throw it in the reactor, and we might end up vetoing some samples. MICHAEL SHORT: Yeah. So let's figure it out by Tuesday. That way if we have to veto, we have a while for you guys to find another sample. MICHAEL AMES: Or-- yeah. I don't know if I'll be able to run a sample from everybody. MICHAEL SHORT: We'll see. MICHAEL AMES: We'll see. MICHAEL SHORT: We'll see what we can do. MICHAEL AMES: Anyway, I'll see you all next week. MICHAEL SHORT: Thanks, Mike. So yeah, so Mike's going to be helping us do some nuclear activation analysis, and in addition, next week we're going to be counting our big bag of burnt bananas, because now that you're learning about radioactivity, and this-- when we go over activity and series decay, you'll have enough of the science to calculate the radioactivity of one banana. And to do so, we need like 500 bananas to get enough statistics. So we'll be going to the reactor for that. We've also set it up so that next week and the week afterwards you guys are going to be manipulating the power levels of the reactor. So you'll actually get to sit in the control seat, raise and lower the control rods, and watch the power of the reactor change in ways you probably won't expect unless you're getting operator training. And all of that stuff is going to be used in the lab components of the problem sets. So you guys might have noticed there was some spinthariscope thing on problem set 3.
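As an aside on the banana counting above, the reason for the 500 bananas is counting statistics: the relative uncertainty on a radioactivity measurement shrinks as one over the square root of the number of counts. A minimal sketch, where the per-banana count number is an invented value purely for illustration:

```python
import math

def relative_uncertainty(counts):
    """Fractional (Poisson) statistical uncertainty on N counted decays."""
    return 1.0 / math.sqrt(counts)

counts_per_banana = 40  # hypothetical counts from one banana in a counting window

one_banana = relative_uncertainty(counts_per_banana)          # ~16% uncertainty
many_bananas = relative_uncertainty(500 * counts_per_banana)  # sqrt(500) ~ 22x better
```

Pooling 500 bananas tightens the error bar by a factor of about 22 relative to one banana, whatever the actual per-banana count rate turns out to be.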
Radiation protection did not want me taking smoke detectors apart and giving them all to you because that's distributing open radiation sources and I probably shouldn't do that, but we've got plenty of lab stuff for you to do to see what is actually hands-on in this field. And if you do want to know what else is hands-on, I have an experimental group and you're always welcome to come see what we do at the lab. There usually isn't an explosion happening, but there's usually something at 1,000 to 1,500 Celsius, blue uranium fluoride salts, nanonewton forces, extreme pressures, or what have you. We do a lot of it. So I wanted to give a quick review of where we were in radioactive decay last time. I think we left off somewhere around "Particle Physics Telescope Explodes" was my favorite BBC headline ever when we were talking about-- we've already gone over alpha decay, we've gone over beta decay, we started talking about positron decay and the neutrinos that come out of that, and this Kamiokande detector that is set up with lots of expensive phototubes to detect the cones of Cherenkov light left behind when the charged particles produced by neutrino interactions pass through water faster than the speed of light in water. And then there is my favorite headline. I believe we made it up to the end of positron annihilation spectroscopy, or ways that you can actually use positron emission to look at the number and types of atomic defects in crystalline materials. And yep, that's where we left off. If you're interested in PAS, there are lots of papers to check out. In the meantime, let's look at one of the competing mechanisms for positron emission, which is electron capture. In this case-- so I will warn you, it's sometimes a little easy to get mixed up between electron capture, internal conversion, and isomeric transition, so I've left these slides on here, and I also took pictures of the board from last time and posted them on the Stellar site. So all the blackboards where we filled the boards, there's pictures of those.
And I'm going to keep doing that. So if you learn better just by looking and listening rather than writing everything, feel free. If you want to write stuff down, also feel free. So in electron capture, another way of, well, destroying positive charge would be for the nucleus to capture an electron. So either it can emit a positron, giving away some positive charge, or it can capture an electron, destroying one of the positive charges. And in each case here, we've got a proton that becomes a neutron and something. I won't be specific as to which one, because positron decay and electron capture are two different but similar kinds of decay mechanisms. And then what you get is this hole where the electron used to be. And that's not a very stable configuration for an atom to have, let's say, one fewer electron than protons, and especially to have a hole in the inner shell. So you get this cascade straight out of high school chemistry of electrons falling from one shell to the next and giving off characteristic X-rays-- that's me that crossed that out there-- because you will find misinformation all over the place online, and someone might make a great figure and mislabel an electron-emitted photon as a gamma ray, and remember, we said gammas come from the nucleus, otherwise they're indistinguishable photons. And so in electron capture, you don't need much of an energy difference between the parents and the daughters, unlike positron decay, where for positron decay to happen, you have to have Q at least equal to 1.022 MeV, which is the same as 2 times the rest mass energy of the electron. For electron capture, you don't. This can happen at just about any energy, as long as you can overcome just the binding energy of the electron, which is negligible compared to these sort of nuclear energy levels. And so this is the Q equation. Keep in mind, these deltas here are excess masses.
So I'll put this up again, the excess mass is the real mass minus the terrible approximation of a nuclide's mass. And this way, excess mass and real mass are directly related, so you could plug in masses here, you could plug in binding energies by making everything with a minus sign, and so on. I think I've repeated myself enough for the Q equation stuff, would you guys agree? Yeah, OK. And so these are actually two competing mechanisms. So shown here is the decay of sodium-22, which we don't want to happen in our nuclear activation analysis because it gets pretty toasty. It can either proceed-- there's a kind of hidden part of the diagram that I drew in to make a little more sense. You start off with the nucleus at 2.8 MeV above the neon nucleus' energy level. You need 1.022 MeV to create the positron-electron pair, at which point you can emit the positron with a certain energy. You're left in an excited state, and the next thing we'll go over is gamma decay or Isomeric Transitions or IT. That's the next method of decay we'll talk about. Or the nucleus can just capture an electron, getting to that same energy level and emitting the same gamma ray. So these are two competing mechanisms of decay. And then you might ask, well, when is one going to happen and not the other? Well, chances are, the lower energy that transition is, the more likely electron capture is going to happen. So when you look at these energy diagrams, you can see that as the transitions get bigger, the probability of positron decay goes up and up and up. So you need 1.022 MeV to make the positron and electron, but the probability of positron decay very close to this is fairly low. Possible, but unlikely. Is everyone clear on these two competing mechanisms? So one way of reducing the number of protons is emit a positron, another is gobble up an electron. In the end, they make the same daughter products, but they go by different mechanisms.
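The Q-value comparison above can be sketched numerically. In terms of excess masses, electron capture needs only the parent-daughter excess-mass difference (the atomic electron masses cancel), while positron decay additionally pays 1.022 MeV to create the positron-electron pair. The Na-22 and Ne-22 excess masses below are approximate table values, used as assumptions to reproduce the ~2.8 MeV gap in the decay diagram:

```python
# Q values (MeV) for the two competing mechanisms, using excess masses.
TWO_ELECTRON_MASSES = 1.022  # 2 * m_e * c^2 in MeV

def q_electron_capture(delta_parent, delta_daughter):
    """EC: just the excess-mass difference (neglecting the tiny electron binding energy)."""
    return delta_parent - delta_daughter

def q_positron_decay(delta_parent, delta_daughter):
    """Beta-plus: same difference, minus 1.022 MeV to create the pair."""
    return delta_parent - delta_daughter - TWO_ELECTRON_MASSES

# Approximate excess masses (MeV) for Na-22 and Ne-22, assumed table values
delta_na22, delta_ne22 = -5.182, -8.025

q_ec = q_electron_capture(delta_na22, delta_ne22)       # ~2.84 MeV
q_beta_plus = q_positron_decay(delta_na22, delta_ne22)  # ~1.82 MeV: positron decay is allowed
```

Because the excess-mass difference comfortably exceeds 1.022 MeV here, both channels are open, and they compete.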
And they give off different bits of radiation which we can actually sense and measure. Cool. So on to gamma decay or isomeric transition. These range from the dead simple, like technetium-99 metastable giving off a characteristic 140 keV gamma ray for technetium, that's the medical imaging procedure we've talked a lot about. To the ridiculously complex, like americium-241, which has a lot, a lot, a lot of different nuclear energy states, all of which release anywhere between 1 and a lot of gamma rays. And this is what's referred to as isomeric transition. So we'll say gamma or isomeric transition is like the same thing, they're just different words for it. These are called isomers. They've got the same number of protons and neutrons, so it's the same nuclide, but in an excited state. And we call it gamma decay because we emit gamma rays or photons. I think this one is the easiest one to understand, because the reaction goes something like-- let's say we have a parent nucleus with Z protons and mass number A. Nothing happens. It's about the easiest nuclear reaction there is. Except you do give off a gamma ray. And we'll usually put a star or something to denote an excited state. So when you see a star in the reading over there on a nuclide where the charge would be, that's an excited energy state that will likely decay by IT or gamma decay. There is also a competing mechanism for isomeric transition or gamma decay, and that's what's called internal conversion. In this case you can kind of think of it-- this isn't the correct physical explanation, but it's a perfectly good mental model, that the gamma ray would either just be emitted from the nucleus, at which point you would see it, and the energy of the gamma is the same as that Q, or it kind of hits an electron on its way out, ejecting that electron. So instead of finding a gamma ray, you may just get an electron emitted at an energy very, very close to that gamma ray.
The difference between the gamma ray energy and the electron energy is its binding energy. Because if a gamma hits an electron on the way out, it has to overcome the binding energy of that electron, at which point the rest of the energy is just its kinetic energy. So again, I don't think that's the precise physical mechanism, but it's a perfectly good mental model to remember what this is. A gamma can either just get out on its own or it can hit an electron on its way out. If you hit the electron on your way out, just like in electron capture, then you get a larger shell electron falling down to the inner shell emitting an X-ray just like before. And then there's one other process I want you guys to be aware of. Has anyone here ever heard of Auger electron emission? Er, yeah. So in this case, instead of sending out an X-ray, you can think of it like the X-ray kind of hits the-- another electron on its way out. That's not the actual process that happens, but let's just think of it like that. And then that electron is ejected, usually from a much outer shell. And we can actually use these Auger electrons, because they have specific but very low binding energies, to do imaging and elemental analysis of materials. So this is another one of those things where the stuff you're learning today is used in an Auger electron microscope up in Building 13 to do combined imaging and elemental analysis of materials. I want to skip back a sec, because let's say we have this decay diagram right here, a pretty simple one. Cesium-137, that isotope that everyone was worried about from the release at Fukushima, can either proceed by just beta decay, or beta decay followed by an isomeric transition. And shown here is a spectrum of all the different electron energies that you'll get out.
If you remember from last time when we talked about, let's see, the energy of a beta particle emitted versus let's say the number of those particles emitted, if this is the Q value for that reaction, you don't always get a beta particle out at the Q value-- in fact, you never do. It looks something like this, where there'll be some, let's say, average or some most likely beta energy, which is about one-third Q. Depends on the reaction, but that's a good rule of thumb. So if you've got a 1.174 MeV maximum beta energy, you're going to see a spectrum of electron energies ranging from 0 to 1.174. And you've got this other beta transition possible at about half an MeV, so you're going to see that same spectrum right there. And then there's these two, what's called the conversion electrons. That's evidence of the competing process with gamma decay. Which is to say that this gamma ray can either just get out, at which point you see a gamma ray of that energy, or that gamma hits an electron on its way out, knocking those electrons out. Does anyone here know what is meant by K-shell or L-shell? If you do, just shout it out. So that there depends on the-- oh, that's correct. It's the energy level. So let's say we'll draw kind of a Bohr model with a nucleus, we'll call it N. And let's give it three electron energy levels. And let's say there's a couple electrons in the first shell and some electrons in the outer shells. And let's say this electron was struck on its way out by a gamma ray. So it's gone. At this point, you might have an electron fall from let's call this level 2 to level 1. And so this 2-to-1 transition is called the K-transition. Don't ask me why the letters are the way they are. I probably have read it and have forgotten it because it wasn't that intuitive, but this is referred to as a K-transition.
So you may have what's called the K-alpha or a K-beta line, that depends on if you have an even higher energy shell, but whatever this letter is, it tells you what energy level the electron is going to. So the K lines would be here. The L-line-- yeah, I'm sorry. Let me back up and say that again. So the idea here is that if this the-- but let's see. Which one is this? 0.662. So if the gamma ray is at 0.662 MeV, which would be about there, notice that these K-shell and L-shell lines aren't quite 0.662. That's because they have to overcome the binding energy of the electron to get out. So to jump back to this diagram right here, the gamma ray loses a little bit of energy in freeing the electron, the rest of which can become kinetic energy, which is why you can see that the electron, let's say, was ejected from the K-shell here. And-- yeah? AUDIENCE: So internal conversion is the actual process of-- when we figure out the process of a gamma ray hitting the electron? MICHAEL SHORT: I will say that for internal conversion, you can imagine a mental model of the gamma ray hitting an electron on its way out. AUDIENCE: But it's not the actual-- that's not physically happening? MICHAEL SHORT: Physically it's more complicated. AUDIENCE: OK. MICHAEL SHORT: Yes. AUDIENCE: Looks like a [INAUDIBLE].. MICHAEL SHORT: Yeah. Yep. So if you want to remember what's what, I would say, just remember this diagram right here. Yeah, Kristen? AUDIENCE: You get a gamma ray and an electron or just the one-- MICHAEL SHORT: Just the-- you get just the electron, good question. The gamma ray is effectively absorbed in freeing the electron from its bound shell and then imparting kinetic energy. Yep? AUDIENCE: Is Auger emission when another electron hops down an energy level and that [INAUDIBLE] electron? MICHAEL SHORT: That's right. So that's correct.
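The conversion-electron arithmetic above is just the gamma energy minus the shell binding energy. A quick sketch for the 0.662 MeV Cs-137 (Ba-137m) line; the barium binding energies below are approximate values assumed for illustration and should be checked against a real table:

```python
# Conversion-electron energies: ejected electron carries E_gamma minus its
# shell binding energy (all values in keV).
E_GAMMA_KEV = 661.7  # the Cs-137 / Ba-137m gamma line

# Approximate barium shell binding energies (assumed values), keV
BINDING_KEV = {"K": 37.4, "L": 6.0}

def conversion_electron_kev(shell):
    """Kinetic energy of a conversion electron ejected from the given shell."""
    return E_GAMMA_KEV - BINDING_KEV[shell]

e_k = conversion_electron_kev("K")  # the K line sits lower: K is more tightly bound
e_l = conversion_electron_kev("L")  # L is less bound, so its line sits closer to the gamma
```

This reproduces the ordering on the spectrum: K conversion electrons below L conversion electrons, and both just below the full gamma energy.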
So I'll spend the next couple of slides going over what Auger electron emission is, but not till we finish the easier stuff, because Auger is a little complicated. Did I see another question out here? Cool. So again, all the competing methods for gamma decay, one, the gamma can just get out; two, the gamma can knock out an electron from, let's say, the K-shell or the L-shell or the M-shell and so on and so on depending on what elements you have. So I'll just label these. Like this would be the K-shell, this would be the L-shell, this would be the M-shell. Now I want you to notice something, too. The K-shell electron ejected from the innermost shell comes out slightly lower in energy than the L-shell electron. Why do you guys think that is? Let's look at the energetics for this process, right? The electron energy level is whatever the gamma ray level is minus the binding energy. Which of these two electrons, the K-shell or the L-shell, do you think is going to be more tightly bound? The K-shell. The innermost electron is more bound, so it takes away more energy from that gamma ray. Let's say this is the gamma. It takes more energy to eject an electron from the K-shell than the L-shell. Or in other words, the gamma loses less energy ejecting a less-bound electron. So that's why you see these. If there were an M-shell, there-- I don't know what element this was drawn for, but let's say-- oh, for cesium. So there probably is an M-shell. It's just that as you get down in energy levels, the probability of ejecting an electron from these outer energy levels gets less and less likely. So you'll usually just see a K or an L-shell electron. And if I asked you to draw one of these on a problem set or an exam, just drawing the K and the L-shells is perfectly sufficient. Because that way at least you'll know that there's a couple of possibilities. Yeah? AUDIENCE: What are the two curves on the right graph there? MICHAEL SHORT: On this one? AUDIENCE: Yeah.
MICHAEL SHORT: So these two curves represent the probability of finding an electron emitted at that energy. So this curve right here where you get a maximum beta energy of 512 keV comes from that beta decay. And the maximum for the 1.174 comes from this beta decay. So the total curve would be the sum of each of these four things. If I just said draw the total probability of detecting an electron at that energy, you just add those up. I saw two other questions, or were they the same thing? Yeah? AUDIENCE: Yeah, I was going to ask what the [? lower ?] [INAUDIBLE],, but his question made me think. So is it kind of like the area under both those curves sums to 1? Because the probability given that like 0.512 max is 95% of-- MICHAEL SHORT: Ah. So the question was is the area under each of these curves 1? Not with this scale. Here we're just showing a relative number of electrons. So if you want to find what's the total probability that cesium will emit an electron of each energy, if you integrate under all of these curves, that will sum to 1. If you're looking at just one of these decays and you're saying, if cesium undergoes this decay, what's the probability of each of these energy levels? Then you only integrate under the relevant curve. What's more practical is usually what's the probability of finding any electron at any energy from cesium? Then you take into account all the possible decays, draw all the curves independently, add them up, and you get the total probability function whose area will be 1. Yes? AUDIENCE: So maybe you could say that all L-shell electrons would be ejected if [INAUDIBLE] in the K-shell due to the fact that it's less tightly-bound? MICHAEL SHORT: Wait, can you say the last part again? AUDIENCE: Shouldn't we say the L-shell electron will be ejected if [INAUDIBLE] energy in the K-shell due to the fact that it's not as tightly-bound? MICHAEL SHORT: That's correct.
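Going back to the normalization point a moment ago, it can be checked numerically: weight each branch's spectrum by its branching ratio, add them, and the combined curve integrates to 1. The parabolic shape below is a crude stand-in, not the real beta (Fermi) spectrum, and the Cs-137-like branching numbers are approximate assumptions:

```python
def beta_shape(e, q):
    """Crude beta-like spectral shape on [0, Q]; its integral is q**4 / 12."""
    return e * (q - e) ** 2 if 0.0 <= e <= q else 0.0

def total_spectrum(e, branches):
    """Sum of branch spectra; branches is a list of (branching_ratio, endpoint Q in MeV)."""
    return sum(b * beta_shape(e, q) / (q ** 4 / 12.0) for b, q in branches)

# Approximate Cs-137 beta branches: (branching ratio, endpoint energy in MeV)
branches = [(0.946, 0.512), (0.054, 1.174)]

# Numerically integrate the total curve over the full energy range
n, e_max = 20000, 1.2
de = e_max / n
area = sum(total_spectrum(i * de, branches) * de for i in range(n + 1))  # ~1.0
```

Each branch's shape is normalized to unit area, so the weighted sum integrates to the sum of the branching ratios, which is 1.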
So the L-shell electron in the second shell is less tightly bound than the first one, which is why it's ejected with more energy. It doesn't take as much of the gamma's energy to get it out. If you were to get, let's say, the outermost electron ejected, which happens in Auger, which we'll go over next, it can take anywhere from 1 to 7 eV. Really, really low energy. That's what we call the work function, or the energy required to get the outermost electron out. So any other questions on this before I go over Auger and show you that process? Cool. So then let's get into what is Auger electron emission? It's exactly-- Luke, is that what you said? Were you asking about it? AUDIENCE: Yeah. MICHAEL SHORT: Yeah. So it's exactly like what Luke said. Normally if you have a hole in a lower-level energy shell, an electron from a higher shell will fall down to fill it, emitting an X-ray. A competing process for this is another electron from a very similar energy shell will get ejected instead. The mental model for this, which again, is not the true physical picture but it's perfectly fine to think of it like this, is the X-ray hits another electron on its way out. And you can look at the energetics accordingly. Where for Auger emission, let's say the kinetic energy of the Auger electron is the difference in the final and initial electron energy states minus the binding energy of the Auger electron, which will usually be very low, because the Auger electron that's emitted is usually one of the outer shell electrons. So to help make this a little more concrete, I wanted to show-- I will do a little calculation example-- it's just addition, so it's not that hard. So let's say we were measuring-- I don't even know what this is. And we started to see some characteristic Auger electrons for copper, platinum, carbon, and oxygen. And the question is, why do we see oxygen coming out right there at about 501 eV? Very, very low energy compared to what we've been talking about.
We can actually look at the binding energies of some of the different electrons in oxygen. Luckily there aren't that many electrons in oxygen. The first-- the only-- well, the only K electrons, let's say, have a binding energy of 532 eV, the L1 is 24, and then the L3's something else, and one of the other p orbitals is 7 eV. And so the formula is pretty simple. It's just 532 minus 24-- that's the difference between the final and initial energy levels-- minus the 7 to free that outer electron-- comes to 501 eV, which is exactly where you see the Auger line for oxygen. So when I ask you what are all the possible things that you could see during the decay of something, something, something, if I were to show you this curve and ask what's missing, what would you do? Where would you draw the Auger electrons on this curve? Yep? AUDIENCE: Like almost on the vertical axis because it's [INAUDIBLE]. MICHAEL SHORT: Yep. 500 eV would be like, I don't know, one pixel away on this graph. But if you want a complete answer to this question, you've got to take into account all the possible beta energies for all the possible beta decay mechanisms; all of the possible conversion electrons for whatever gammas come out-- in this case, there's only one gamma; and Auger electrons, which could compete with X-ray emission. So everyone clear on that? Yeah? So the question is if you eject an electron from one of the inner shells, does that eventually create an Auger electron, right? It can. These are competing processes. So the X-ray can just escape during that transition, or we'll assume that it hits another electron on its way out and emits an Auger electron. So these are also competing processes. So you'll see one or the other-- in reality, you'll see a lot of both with different probabilities. Because you're usually not looking at one atom, it's usually looking at a lot. Cool.
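The oxygen KLL arithmetic above can be written as a one-liner: filling the K hole from L1 releases (E_K - E_L1), and the ejected outer electron keeps that minus its own binding energy. The binding-energy values here are the ones quoted in the lecture:

```python
# Auger electron kinetic energy: (energy released filling the core hole)
# minus the binding energy of the electron that gets ejected.
E_K, E_L1, E_OUTER = 532.0, 24.0, 7.0  # oxygen binding energies in eV (from lecture)

def auger_energy_ev(e_hole, e_filler, e_ejected):
    """Kinetic energy of the emitted Auger electron, in eV."""
    return (e_hole - e_filler) - e_ejected

oxygen_kll = auger_energy_ev(E_K, E_L1, E_OUTER)  # 501 eV, right where the oxygen line sits
```

Energies this low only escape from the top tens of nanometers of a sample, which is why Auger is such a surface-sensitive technique.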
Any other questions on IT, isomeric transition, or internal conversion or Auger, what this is all about? Cool. Wanted to give you one note, too. These Auger electrons are really, really low energy, which means the only ones that get out of the material are in the top, like, tens or so nanometers of the material. So it's a very surface-sensitive technique. So if you want to do a really detailed surface analysis or profiling, you can scan an electron beam across the sample and then collect the Auger electrons that come out-- skipping ahead to our calculation-- and get an elemental profile that'll tell you how much of each element is where depending on how many of its Auger electrons you can count. And it's pretty-- it's a pretty cool technique. There are just machines that do this now. Any interest in seeing one of these at the Center for Materials Science and Engineering? Because we could try to arrange that, too. Cool, OK. I'll see what I can do. That'll be fun. And the last decay that we haven't talked about did not show up in our generalized decay diagram from last time. We did talk about neutron decay; there's one other one that probably wouldn't fit on this diagram. Does anyone know what it is? Spontaneous fission. So this can happen-- that's right. This happens with very heavy elements. Usually the heavier it is, the less stable it is, and the higher the probability of this is. Once in a while, nuclei just explode, giving off two fission products, any number of neutrons-- usually between 1 and 3, a couple of gamma rays, some anti-neutrinos, and a whole bunch of other crazies. And so that, of course, doesn't fit on the diagram, but it is another type of decay that I'll ask you guys to analyze on the homework. And here's a hint-- you already analyzed part of it in problem set 1. I'll ask you to go a little deeper in problem set 3. So does anyone have a question for me? Cool. OK.
Bear with me because I skipped back to like a slide really far away. Oh cool. That thing actually works. Right to the summary. So in summary, the radioactive decay processes are more-- I think the energetics are pretty easy. The formulas aren't that hard to remember because most of them are just parents minus daughters minus something. But what I do want you to remember is which mechanisms compete with which other ones and why. And if I were to tell you, draw me a spectrum of photons that you may see from the decay of caesium-137, or draw me a spectrum of electrons, you'd be able to draw what that is, so that when you go do lab number 4 and we count our big bag of burnt bananas and you know that there's potassium-40 in there, you know what peaks to start looking for. Because you're not just going to see-- let's do a little flash-forward to detectors now. And some other stuff that nuclear engineers actually do on a daily basis. Let's say you're counting the energy of photons as a function of, let's say, the number of photons that you count. You're never-- you're almost never just going to see a lone potassium-40 peak like that. It's very, very rare that you would just capture the gamma ray as is. There's going to be a lot of other things that will go into it, which I'm not going to give away yet because we're going to go over photon interactions in like a week or two, but you do have to think about, well, what other X-rays might you see? What if the gamma-- what if this gamma ray hits an electron on the way out, and then you end up with some X-rays? Let's say you might have some K-level X-rays and some L-level X-rays and maybe some Ms? These could all be possible as well. I'll just label those real quick. Does anyone know how to find these energies? What those X-ray levels are? Anyone ever heard of the Lyman series? The emission lines from ionized hydrogen or anything like that? That's kind of the simpler case of it.
The idea here is if you want to figure out the wavelength of light that's going to be emitted, 1 over that wavelength will be this thing called the Rydberg constant, a more complex formula for which I have in the notes, times 1 over your final shell squared minus 1 over your initial shell squared. So the idea here is that you can look up this Rydberg constant for any element that you have, and there's actually what's called an R infinity constant and I think you just multiply by Z-- I forget what power it is, but I will get it for you next time. And then it's just a matter of the squares of the final and the initial shell levels, where n can vary from 1 to theoretically infinity. Realistically I've never heard of anything beyond a g orbital, so let's just say it's that. Or something like that. I'll leave the infinity there. That's technically correct. OK. And so all you need to do is either look up or calculate this constant for your element, and then plug in the numbers of the shells, and you know what sort of photon energy you're going to get out. And to make that easier for you, I think now is a good time to introduce the NIST X-ray tables. So I want to make sure you can see my screen. And I'm going to show you something that's on the Stellar site which will help you figure this stuff out. 20.2.0.0.1. Good, you can see. It'll probably make you log in. And all the way at the bottom of the material section, there's the NIST X-ray Transition Energy Database. So for example, you can look at-- I don't know, we were looking at caesium, right? Let's find caesium. And you can start to look at all transitions-- let's look at the simplest one. KL1. An electron going from shell number 2 to shell number 1. Get transitions, and you end up with a table of these energies in electron volts.
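As a quick sanity check of that formula, here's a sketch in Python using the hydrogen value of the Rydberg constant (for other elements you'd need the Z-corrected constant he mentions getting next time); the function and constant names are just for illustration:

```python
import math

RYDBERG_M = 1.0973731568e7  # Rydberg constant for hydrogen, in 1/m

def emitted_wavelength_nm(n_final, n_initial):
    """Wavelength of the photon from an electron dropping n_initial -> n_final.

    1/lambda = R * (1/n_f^2 - 1/n_i^2)
    """
    inv_lambda = RYDBERG_M * (1.0 / n_final**2 - 1.0 / n_initial**2)
    return 1e9 / inv_lambda  # convert meters to nanometers

# Lyman-alpha: n=2 -> n=1, about 121.5 nm (ultraviolet)
print(emitted_wavelength_nm(1, 2))
# Balmer H-alpha: n=3 -> n=2, about 656 nm (the familiar red line)
print(emitted_wavelength_nm(2, 3))
```

The same two-line calculation is what the NIST table has effectively pre-computed, element by element, for the KL1, KL2, and so on transitions.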
So if I were to ask you, let's say, what sort of gamma rays might you see coming off of caesium, that would be the most likely one, where you're more likely to eject an inner-shell electron, and it's most likely that a number 2 shell electron will fall down to a number 1-- or from the L-shell to the K-shell, whatever you want to call it, or from some orbital level to some other one. There seem to be like eight different letters that describe the same thing. I hope you guys get the idea. If you want to look at all the possible transitions, I have to zoom back out again. Yeah. Let's just scroll through it. But what I want you to notice is that all of the KL transitions are within like a couple hundred eV of each other. So this is like the first or second or third L-shell electron falling to the K-level. So they call it the KL1, KL2, KL3. They're all from the L-shell. They all just might be one of the different electrons occupying that shell, which is why they're not that different. So if I asked you, draw all of the X-rays and gammas coming out, I don't want to see a line for every single level. I'm very happy for you just to say, this line represents the KL series, this line represents the KM series, all the things from shell 3 to shell 1. Notice also that because of those final and initial squared terms, which I covered up, falling from an even further outer shell to the same inner shell should give you a higher energy, which it does, by like 5 keV. And then the KN levels, another 500 eV up. They don't have KO's. Interesting. But they have the K-edge. Anyone know what the K-edge is? Exactly. It's level infinity, which would mean an electron from somewhere else, right? So this in effect is like the energy it takes to ionize a K-electron, or the X-ray that you would get from an electron falling all the way into the K-shell. So this is your kind of level infinity. Notice, it's not that different from level 5.
That's why I wrote 6, I erased it for theoretical correctness, but in all practicality, you won't see much else. Does anyone have a question? Thought I saw a hand. OK. Let's look at the L-series-- oh yes? AUDIENCE: Sometimes you see L-alpha, L-beta. MICHAEL SHORT: Yep. AUDIENCE: Is that the same as the subscript 1, 2, 3, and 4? MICHAEL SHORT: Yeah, that's another-- AUDIENCE: --corresponding-- it's just a notation. MICHAEL SHORT: Exactly. Yeah, there's the L-alphas, the L-betas, or you may see L-alpha 1 and 2 and L-beta 1 and 2. So L designates that it's going to shell level 2. Alpha or beta is like LM or LN. Again, I think I've ranted about notations before. Physics is notorious for this: whoever describes something first infects enough people with their notation, and it sticks. But the main patterns to look at, then-- let's say the L1-M whatever. This is from level 3 to level 2, and notice how much lower an energy these are. And the L-edge: 5 keV compared to like 36 keV. And all of these different transitions you can calculate with this formula. And this database has just kind of tabulated this formula for you. So whatever you're more comfortable doing-- putting this in Excel or looking it up on NIST-- your choice. Yeah? AUDIENCE: If we were to calculate that by hand-- MICHAEL SHORT: Mm-hmm. AUDIENCE: --what would you use for like NF? Say, like, 5? MICHAEL SHORT: Oh, what is the largest NF, you mean? AUDIENCE: Yeah. What numbers actually go in NF [INAUDIBLE]? MICHAEL SHORT: For that I'll put my practical thing back there. You'll never see it much higher than 6 unless you're talking about actinides and super heavy elements with even crazier shell levels. But you'll put the integer shell number, regardless of whether they call it L or S or whatever. Just put the number here, and that will give you a pretty good approximation of the transition energy. Does anyone remember this from high school? I hope they're teaching this now.
AUDIENCE: We learned this in [INAUDIBLE] MICHAEL SHORT: Oh, they did in 5-111? Oh, that's good to hear. What about 3-091? Anyone take that? No one took 3-091? Wow, OK. Usually it's like half and half or so. Cool. And let's see, how far does it go? All the way to the L3-N's and the L3 edge. So those are the most ridiculous transitions they talk about. Yeah. So notice also, as you go up in-- that's number 100. So this is the heaviest one they have, so it's the most likely to have the largest number of levels. So here, the KL1 is like 114 keV, sometimes indistinguishable from some of these smaller nuclear energy level transitions. So remember I said before, chemistry and nuclear differ by about a factor of a million. Well, not so if you're talking about weak gammas versus heavy elements' K-shell transitions or their K-edges. Let's see, the largest X-ray you'd expect would be the K-edge at 142 keV. And the technetium-99 gamma ray comes out at 140 keV. How do you know if it's a gamma or an X-ray? You don't. Unless you have really, really good energy resolution and you can tell them apart. Yeah? AUDIENCE: This formula and this chart are only for calculating the energy of X-rays, right? MICHAEL SHORT: Correct. This-- AUDIENCE: [INAUDIBLE] for gammas. MICHAEL SHORT: Things get quantum. So the question was, this formula and this chart, yes, this is only for X-rays and electron shells. There are probably equivalent calculations for nuclear energy levels. I will say that's a 22.02 and far beyond topic. For the nuclear energy levels, just use the decay diagrams to find those. Yeah. The table of nuclides and all their different diagrams. Cool. How far do we go here? LN-- yep, there's no O's. So they never talk about anything beyond shell level 4, even for fermium. So ha, I stand corrected. OK. Cool. So what I wanted to show you quickly is that series of hydrogen emission lines. So how familiar does this look to folks?
Where you can have a transition from level 3 to level 2, level 4 to level 2, and you can actually-- this is a kind of neat thing to verify. I don't give it as a problem set question because it's not very nuclear, but you can try this on your own and verify that you can actually calculate the expected wavelength of these photons coming off of excited hydrogen. Also notice here, it goes all the way out to 9 and out to infinity, because this is electronic excitation. You won't usually get the ejection of anything beyond an M or an N electron even in the largest elements from, let's say, IC-- what is it? From internal conversion. But you can electronically excite them to whatever energy level, to the point of even ionizing them. That's where the infinity comes in. Cool. So it's like five of two. So I want to open this up to any questions about decay before we move upstairs to talk about activity, half-life, and series radioactive decay, which is what nuclear activation analysis is all about. So anything here? Yep? AUDIENCE: Just making sure I understand this. So the H-alpha line's transition is from N equals 3 to N equals 2. MICHAEL SHORT: Mm-hmm. AUDIENCE: Would you call that-- in our previous notation, would you call that an LM transition? MICHAEL SHORT: Correct. In our pre-- in our other notation, this would be known as an LM electron. And probably L1M1, because there is only one electron in hydrogen. Yep. So don't let the notations trip you up. As long as you-- I'm sure someone's got a chart of L equals 2 equals whatever other Greek letter someone has designated it for. There's just different ways of saying the same thing. So as long as you know the physics, looking up the notation is just kind of a little pain. Any other questions on radioactive decay or competing mechanisms? Cool. Let's take a 10 minute break. So I'll see you guys upstairs in Room 307 in 10 minutes. There's no projector there, so we'll do it all on the board. All right.
So I want to start off the second half of today's class by posing and answering a question. Who's still mentally having trouble grasping this idea? What did I tell you? Yeah. So whoever I said it to, I said at least half the class is right there with you, it's true. So in a sentence, it's this-- Q is the conversion of mass to energy. That's all. And the whole point of doing this nuclear reaction energetics to find out if things are or aren't allowed, if they're exo or endothermic is to see how much mass is converted to energy or how much energy has to be converted to mass. And if you have trouble remembering, just go back to the equation that I see on everybody's T-shirts. And like I said on the first day of class, everyone's got it on their shirts and no one quite understands it, not even the nuclear engineers. Because it's very difficult mentally to grasp the idea that energy and matter are two sides of the same coin or two different forms of the same thing. So for a nuclear reaction where Q is greater than 0 or exothermic, all that means is that energy is spontaneously created from the destruction of mass. That's all. And for a Q less than 0 reaction or endothermic, when you inject energy into the system, it is absorbed and mass is created straight from this equation. So does this make more sense to the folks that raised their hands, which is almost everybody? Q is nothing but a quantification of the amount of matter and energy that turn from one to the other. And all the balance stuff we've been doing for almost the last month has been to serve, to quantify, and to predict it. So I hope that helps. I know that MIT students have this gift for being able to hide behind the math, and I know that's true because I used to be one myself. And you could get through the day or get through the class getting the math right without really understanding the physics or the mechanism behind what's going on. 
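That one sentence is easy to put in code. A minimal sketch of a Q-value calculation for the caesium-137 beta decay used as an example earlier; the atomic masses below are approximate values assumed for illustration:

```python
AMU_TO_MEV = 931.494  # energy equivalent of 1 amu of mass, in MeV

def q_value_mev(mass_parent_amu, mass_daughter_amu):
    # Q = (mass destroyed) * c^2; Q > 0 means exothermic (mass became energy)
    return (mass_parent_amu - mass_daughter_amu) * AMU_TO_MEV

# Approximate atomic masses in amu -- treat these as illustrative inputs
m_cs137 = 136.907089  # caesium-137
m_ba137 = 136.905827  # barium-137

q = q_value_mev(m_cs137, m_ba137)
print(q)  # about 1.18 MeV of energy released from destroyed mass
```

A positive result says the decay is allowed and exothermic; a negative Q would mean you'd have to inject that much kinetic energy to create the extra mass.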
So everything we've done for the last three weeks can be summed up in one sentence-- mass and energy are the same thing. We've just had a lot of math to get there and be able to-- yeah, it still kind of makes your head want to explode, right? If you think about it, if we want to make an endothermic reaction happen, you have to put kinetic energy into one of the particles. And in that system, in the just barely-allowed state, the nuclei won't really be moving afterwards, because you'll have turned energy into matter. That kinetic energy is turned into mass energy. Just like you can turn potential into kinetic or thermal into mechanical or vice versa-- it's just other forms of energy. Kind of mind-blowing. Well anyway, so now that we've finished radioactive decay, I want to get into the concept of activity and half-life, and then we're going to start but not finish serial radioactive creation and destruction. In your readings, you'll see an equation that looks something like this, which is where we're going to be at the end of the day today. If you want to show how much of some isotope N1 exists as a function of time, you may have seen something that looks like this. We're going to be able to understand how these equations are created, and because this is MIT, we're going to take it further and add specifically driven mechanisms, like can you create an isotope not just by decay from one isotope to another, but by intentionally making it? Which is exactly how NAA, Nuclear Activation Analysis, works. You fire neutrons into a material. The atoms turn into something else and start decaying in series. This is what we're going to be working out the math for, but I want to make sure at every step that we get the physics right. So first, a very quick primer on what is activity and where does half-life come from? So we define A, the activity of a substance-- it's pretty simple.
It depends on the amount of the substance that's there, and it depends on what's called this decay constant. So this amount is in, let's just say, atoms. And this decay constant is in units of 1 over second. And activity is in, let's say, atoms destroyed, or decays, per second. That thing right there is called the decay constant, lambda. And if we want to see, let's say, how much of a substance is decaying at a certain time, or what's its activity, and get a measure of how quickly it decays away, we can start by saying, well, this in effect is a destruction rate of isotope N. So we can make the simplest of differential equations and say, the change in the amount of substance N as a function of time is just minus its activity, which equals minus lambda N. I hope I don't have to explain how to solve this differential equation, so I'm going to go through it pretty quickly. What's the method you use for this to get N as a function of t? AUDIENCE: 3? MICHAEL SHORT: Yes. You're all in 18-03 or have finished 18-03, right? This should have been day 1. So we'll just separate variables: divide each side by N, multiply each side by dt. So we get dN over N equals negative lambda dt. You can integrate both sides, and we get the natural log of N equals minus lambda t plus some integration constant. We're going to do a little bit of trickery, and to make things in a nice form that we can deal with, let's just call this log of N0. They're just numbers, right? We haven't defined what this constant of integration is. Everyone cool with us doing that? So then we can subtract log of N0 from each side. And we get log N minus log of N0 equals minus lambda t. So this is like saying log of N over N0 equals minus lambda t. We'll take e to the power of both sides, and we get N over N0 equals e to the minus lambda t. And right there you've got your exponential decay equation. This is the easy part. And what does this tell you about a larger decay constant? Well, let me ask you a question, then.
Would a larger decay constant mean a faster or a slower decaying isotope? A larger decay constant-- correct-- means that you've got more of these decays happening per second. So a larger lambda means faster decay. And we also can define a quantity called the half-life, which means N at t 1/2 equals 1/2 N0. So for that, all you have to do is plug in t 1/2 for t, and 1/2 N0 for N, and you actually get this relation where the half-life is just log of 2 over lambda, where log of 2 is better known as 0.693. But we'll just leave it as log of 2 for exactness. And that's all there really is to decay and half-life. So I'll pose another question to you. Something with a larger decay constant, will it have a larger or a smaller half-life? AUDIENCE: Smaller? MICHAEL SHORT: Smaller, because they're inversely related. So I'd say from this quick derivation, these are the two things to note. Something with a larger decay constant decays faster and therefore has a shorter half-life. So when we were separating out our isotopes in nuclear activation analysis into what Mike called the shorts and the longs, he was separating them by half-life to say that-- for, let's say, the same amount of activation or the same amount of creation-- the ones with shorter half-lives will be hotter, more radioactive, but for less time. And so what we're going to be doing is what's called short nuclear activation analysis, because we don't want to count for like days or weeks or months. Yep? AUDIENCE: So is half-life-- is the decay constant just a property of the given substance, given element? MICHAEL SHORT: The decay constant is a property of the given isotope specific to that type of decay. So if we were to draw that generalized radioactivity diagram-- let's say we have potassium-40, which can either go by beta decay to-- what comes after-- I think it's calcium-40. Or it can go by positron or electron capture to argon-40. Each of these processes has a different half-life.
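Those two boxed facts are easy to check numerically in a few lines; the decay constant here is made up for illustration:

```python
import math

def n_remaining(n0, decay_const, t):
    # Exponential decay: N(t) = N0 * exp(-lambda * t)
    return n0 * math.exp(-decay_const * t)

def half_life(decay_const):
    # t_1/2 = ln(2) / lambda, straight from N(t_1/2) = N0 / 2
    return math.log(2) / decay_const

lam = 0.1  # decay constant in 1/s (made up for illustration)
n0 = 1e6   # starting number of atoms

t_half = half_life(lam)  # about 6.93 s
print(n_remaining(n0, lam, t_half))  # half the atoms are left: about 5e5

# Larger decay constant -> faster decay -> shorter half-life
print(half_life(0.2) < half_life(0.1))  # True
```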
And then in that chain-- remember, let's do the americium-241, and I'm going to channel my one-year-old son in drawing this decay diagram. It looks something like that with all sorts of transitions. I didn't scream like he usually does, but whatever. We're on camera. Each of these decay-- each of these transitions may also have its own half-life. So isomeric transitions are usually very, very fast, but once in a while they're not. Like technetium-99 metastable to technetium-99 has a half-life of around six hours, which is why it's so useful as a medical isotope. So when you see something marked M for metastable, all that means is there is some sort of a gamma, also known as an IT transition, with a particularly long half-life. Everyone clear on what all that means? Cool. Well, there'll be some time for it to sink in, because with these definitions in hand, I want to pose a problem to you guys. Let's say I start off with some amount of an isotope N1. And it decays by some mechanism-- we don't care what-- to isotope N2, and that decays to some isotope N3, with decay constants lambda 1 and lambda 2. This is what we call serial radioactive decay. How do we construct a system of equations to tell us what is N1 as a function of time, what is N2 as a function of time, and what is N3 as a function of time? Where do we begin in a general sense? OK, so let's start with N1. We kind of have an expression for N1 already, but let's start out with a differential equation. So all of these equations-- for serial radioactive decay, for burning isotopes in a reactor, for creating isotopes in a reactor-- are going to take the following form. The general equation is simple. The change equals creation minus destruction. Or, put simply, the change is a source minus a sink. And we'll have to come up with, for every one of these isotopes, a mathematical way to describe what's the source and what's the sink.
So if we want to measure the change in N1 as a function of time, what are the sources of isotope N1? Yeah? AUDIENCE: Isn't there no sources? MICHAEL SHORT: No sources. We're starting off with some fixed quantity of N1. Let's just call it N1,0. But you're right, there's no continuous source of isotope N1. What about its destruction? AUDIENCE: Decay to N2? MICHAEL SHORT: Yeah. Decay to N2. So we've got the equation for that right there. It depends on the decay constant of number 1 and the amount of number 1. So what we're doing here-- I love how this course is timed with 18-03, because you're learning ordinary differential equations in 18-03, and we're going to be solving ordinary differential equations every day for a fair bit of this course. So it's one of those rare times you get to learn math and put it to use at the same time instead of six years later. It's just kind of nice. So that's easy. Let's go to the more challenging one. What is the source of isotope N2? AUDIENCE: Decay from N1? MICHAEL SHORT: Decay from N1. So how would I mathematically write that? AUDIENCE: Just lambda 1 N1. MICHAEL SHORT: Just lambda 1 N1. And what is the destruction of isotope N2? Anyone else? AUDIENCE: Lambda 2 N2? MICHAEL SHORT: Takes the same form. It depends on the decay constant of isotope 2 and the amount of isotope 2 that's there. How about isotope 3? Where does that come from? AUDIENCE: Lambda 2 N2? MICHAEL SHORT: That's it. The source is lambda 2 N2. What are the sinks or the destruction? AUDIENCE: Nothing. MICHAEL SHORT: Nothing. So we have a very simple set of posed differential equations to describe the production and the destruction of these three isotopes. So let's imagine now that N1 was, let's say, radium, which exists all throughout the soil and rocks; N2 is radon, the gas that's produced from radium decay; and then N3 could be, let's say, one of the stable daughter products of radon.
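The change-equals-creation-minus-destruction bookkeeping can be tested numerically before doing any calculus. A minimal forward-Euler sketch of the three coupled equations, with made-up decay constants:

```python
# Illustrative decay constants (1/s) and a starting inventory of pure N1
lam1, lam2 = 0.05, 0.02
n1, n2, n3 = 1000.0, 0.0, 0.0

dt = 0.001  # small time step for forward Euler
for _ in range(200_000):  # integrate out to t = 200 s
    dn1 = -lam1 * n1              # N1: no source, destroyed by its own decay
    dn2 = lam1 * n1 - lam2 * n2   # N2: source is N1 decay, sink is its own decay
    dn3 = lam2 * n2               # N3: source is N2 decay, stable (no sink)
    n1 += dn1 * dt
    n2 += dn2 * dt
    n3 += dn3 * dt

# Total atoms are conserved: the sources and sinks cancel pairwise
print(n1 + n2 + n3)  # stays at 1000
print(n1)            # matches 1000 * exp(-0.05 * 200) to Euler accuracy
```

Every source term for one isotope shows up as a sink term for another, which is why the total never changes.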
So these are the sorts of calculations that are done all the time in real life to see-- if you know how much radium is in the rock, how much radon do you expect to breathe in? Because at the same time you're producing radon from radium decay, the radon is decaying itself. So you can't just say, oh, the activity of the radium equals the amount of radon, because it's being created and destroyed all at the same time, and it depends on how much there is around. Same thing with nuclear activation analysis. Let's say you start off with sodium-21; you can create sodium-22. Sodium-22 will decay by positron emission. What comes before sodium? Probably neon, I think, 22. That's my guess. Yeah. So if you want to say how much sodium-22 is there, well, you're both creating it from sodium-21 by neutron absorption, and you're decaying it naturally by positron emission, among other processes. We're going to get back into how we deal with the neutrons thing probably on Tuesday. But for now, let's work on solving this system of equations. So I think N1 is pretty easy, because we already have the solution for it. So I'll just write it: N1,0 e to the minus lambda 1t. The harder one is N2. So what can we start by doing? We've got an ordinary differential equation-- it's just first order, but there's two variables. So how do we deal? AUDIENCE: Pick another equation? MICHAEL SHORT: Well, we have other equations, so-- actually, we've got this one. Yes. Substitute N1 in here so we get everything in terms of N2 and constants and time. So we'll rewrite this equation as dN2/dt, which I'm also just going to write as N2 prime. I'm going to use this little bit of notation, which I'll leave up there on the board, because it's going to be a lot faster in writing. Which equals lambda 1 N1,0 e to the minus lambda 1t minus lambda 2 N2. Can we separate variables here? I don't see an easy way. So, 18-03 experts, how do we solve this type of first order differential equation?
And I won't give you a hint, but I'll give you a little bit of consolation. No one from last year knew how to approach this. So if you don't know, I won't be disappointed. Yeah? AUDIENCE: You've got N2 dot plus lambda 2 N2 equals lambda 1 N1 dot-- MICHAEL SHORT: Mm-hmm. AUDIENCE: So you could put-- add lambda 2 N2 to both sides. MICHAEL SHORT: Add lambda 2 N2 to both sides. I think you're on to what I'm thinking about. Also, if you can't read something I write, please stop me and let me know and I'll be happy to erase it. I don't think I've said that yet, but it takes a lot of control for me to get my handwriting legible on the board, let alone on a piece of paper. So if you can't read it, please let me know. OK. So now that we've got the N2's separate from the N1's, what do we do next? AUDIENCE: --N2 has the form A times e to the negative lambda 1t? MICHAEL SHORT: OK, let's try this. Assume N2 has the form e to the what? AUDIENCE: Negative lambda 1t? MICHAEL SHORT: Negative-- has the form e to the negative lambda 1t. AUDIENCE: With that A in front-- MICHAEL SHORT: With an A, some constant, in front. What makes you say that? AUDIENCE: Like if you take the derivative with respect to time, then the next term will still have an e to the negative lambda 1t? MICHAEL SHORT: Mm-hmm. AUDIENCE: Cancel all those out and solve for A. MICHAEL SHORT: Cool. So that is one way to do it. It's going to get a little messy, though. There's another method specifically for equations of this form. Let's call it y prime plus p of t times y-- I'm going to use notation they may have used in 18-03. I hear a couple of ahas. Anything look familiar about this type of equation? OK, what is it? AUDIENCE: Not following. [LAUGHTER] MICHAEL SHORT: It's not that hard. Anyone remember the phrase integrating factor? It was probably done horribly and on like six math boards or whatever. So I'm going to show you the simpler way to do this.
The idea here is we want to multiply everything by something-- by some function mu. Put a mu there, put a mu there, put a mu there. I think that's the notation that's usually used in differential equations-- such that this thing right here is shrinkable through the product rule. The product rule-- I'm not assuming everyone remembers-- says that if you have some functions a of t and b of t, then a times b, prime, is like a prime times b plus a times b prime. So we're kind of getting around to the method that Luke was talking about, but we're going to do it in a little bit of a cleaner way. We multiply every term by some function mu such that this part is one of these perfect product rules, at which point we can shrink and integrate the expression. Without going through the derivation of how integrating factors are done, I'll just let you know that this function mu ends up being e to the integral of p dt. That is our integrating factor. So that's the end result of what was probably six boards of 18-03. Am I right or am I mistaken? Well, things haven't changed in 15 years. Cool. OK, so what is mu for this equation? Luckily, p of t is pretty simple. Which part of this equation right here is our p of t-like term? AUDIENCE: Lambda 2 N2. MICHAEL SHORT: Actually, just lambda 2. Because we've got our variable right here, so that right there is our p of t. So we'll just say that mu equals e to the integral of-- uh, yep-- of lambda 2 dt, which is just e to the lambda 2t. That's our integrating factor right there. So we'll multiply every term right here by that. So we'll say e to the lambda 2t times N2 prime plus lambda 2 e to the lambda 2t-- anyone see what is going on here? There's the product rule thing going on-- times N2 plus e to the lambda 2t times lambda 1 N1,0 e to the minus lambda 1t equals 0. And we have successfully created something here that can be shrunk up with a product rule. Yeah? AUDIENCE: Should that be a minus e to the lambda 2t lambda 1 N1-- when you moved it over?
It's positive-- MICHAEL SHORT: Oh yeah. There's an equal sign there, isn't there? That's what tripped me up. Thank you. So there is indeed a minus sign there, because I skipped the step of putting everything on one side of the equation. Yep? AUDIENCE: Could we do this with variation of parameters instead? MICHAEL SHORT: Where you replace one variable with-- or replace a couple of variables with another one? AUDIENCE: Well yeah, just like the homogeneous solution, and then you'll find like a factor-- MICHAEL SHORT: Yeah. There are lots of ways of solving a first order ODE like this. Sure. So this would work with variation of parameters. It would work with what Luke's talking about. It works with this one. This just happens to be a particularly simple one because the integrating factor's so simple. So let's shrink this up right here. So this is like saying N2 times e to the lambda 2t, prime, minus-- and we've got two e to the somethings that we can combine right here. So I'll just say lambda 1 N1,0 e to the lambda 2 minus lambda 1 t equals 0. So now I will put this term back on the other side by doing that. And now we just integrate both sides. And we get N2 e to the lambda 2t equals-- let's see-- lambda 1 N1,0 over lambda 2 minus lambda 1, times e to the lambda 2 minus lambda 1 t, plus some integration constant C. And in this case, our initial condition-- well, how much of isotope N2 did we start with? Have we specified that yet? No? Let's make it simple. Let's assume that the initial amount of isotope N2 equals 0. We put some isotope in the reactor, or started off with some amount of an isotope like radium. It didn't start off with any radon, and it just kept going. So then all we have to do is use that to fix C, divide each side by e to the lambda 2t-- and that cancels those, that cancels that and that-- and we end up with N2 as a function of time equals lambda 1 N1,0 over lambda 2 minus lambda 1, times e to the minus lambda 1t minus e to the minus lambda 2t. OK. And we've got an expression for N2. How about N3?
Do we even have to solve this one? I see a couple of people shaking their heads no, why is that? AUDIENCE: Basically you already solved for N1, they're just not minus signs. MICHAEL SHORT: Well, not quite. Because we-- well yeah, I guess we've kind of solved it for N1, but now we take this expression for N2, stick it in here, and then solve that, it's going to get messy. So I'm going to show you something mathematically now that I'll show you graphically later. There's a conservation equation that we're missing here. If we sum up isotopes N1, N2, N3, equals what? Conserving total number of atoms. AUDIENCE: N1,0? MICHAEL SHORT: Exactly. N1,0. In this situation where we started off with some known quantity of isotope 1 only, you can't change the number of atoms here, you can only change the type of atoms. So we don't have to solve for E3-- oh sorry, for N3, because N3 is just N1,0 minus N1 minus N2. And that takes like an extra 10 minutes out of today's lecture. So later on when we have a projector on Tuesday, I will show you these equations graphed out where I've-- actually, I'll share this with you. Did I tell you guys about the Desmos graphical calculator? Or have I shown this to you yet? Go here for all of your graphing needs. It's free, and the best part that I like is that anytime you define some parameter, it automatically makes a slider bar so you can play with the equations. And you can just-- say, like, well what if L1 and L-- lambda 1 and L2 are equal? What if they were way different? And it just graphs the solutions for you. It's pretty useful. So I'll show you some of that on Tuesday when we actually have a screen. Let me see what time it is. Oh sweet, we've got plenty of time. So now I want to pose the following questions to you guys. I'm going to erase stuff from here because we still have some space. How do we model nuclear activation analysis using this kind of equation? We'll start off with the same equation. 
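Both the integrating-factor result and the conservation shortcut can be sanity-checked in a few lines of Python. The decay constants below are made up for illustration, and the N2 expression is the N2(0) = 0 solution (valid when the two decay constants differ):

```python
import math

# Illustrative decay constants (1/s) and starting amount of N1
lam1, lam2, n10 = 0.05, 0.02, 1000.0

def n1(t):
    # N1(t) = N1,0 * e^(-lambda1 * t)
    return n10 * math.exp(-lam1 * t)

def n2(t):
    # Serial-decay solution with N2(0) = 0 (valid for lam1 != lam2)
    return (lam1 * n10 / (lam2 - lam1)) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

def n3(t):
    # No third ODE needed: atoms are conserved, only their type changes
    return n10 - n1(t) - n2(t)

# Check that N2 really satisfies N2' = lam1*N1 - lam2*N2 (finite difference)
t, h = 10.0, 1e-6
lhs = (n2(t + h) - n2(t - h)) / (2 * h)
rhs = lam1 * n1(t) - lam2 * n2(t)
print(abs(lhs - rhs))  # tiny -- the ODE is satisfied

# N3 grows from 0 toward N1,0 as everything decays down the chain
for t in (0.0, 10.0, 100.0, 1000.0):
    print(t, round(n3(t), 3))
```

The finite-difference check is a cheap way to catch sign errors like the one caught at the board: if the derivative of the closed-form N2 doesn't match lambda 1 N1 minus lambda 2 N2, something in the algebra went wrong.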
So let's say we'll have a minus lambda 1 N1; we'll have N2 prime equals lambda 1 N1 minus lambda 2 N2 minus something; N3 prime equals-- let's see-- lambda 2 N2 minus something. I've left some trailing minus signs to indicate that we don't have complete equations for this yet. So for the case of nuclear activation analysis, where we have some imposed flux-- flux-- of neutrons. So anyone remember from some of our previous flash-forwards how we turn these into creation and destruction rates of these different isotopes? AUDIENCE: Could you repeat that? MICHAEL SHORT: Yep. So let's say we've now stuck N1 in the reactor. And we're now using the reactor to create different isotopes like N2 and N3, but at the same time they're in the reactor, they're getting cooked as well by some imposed flux of neutrons. How do we set up-- and not solve-- the system of equations to describe this? AUDIENCE: Are they-- are N2 and N3 both getting like-- MICHAEL SHORT: They're getting destroyed and whatnot? AUDIENCE: Are they just like decaying or are they also getting like added stuff from neutrons-- MICHAEL SHORT: Well-- AUDIENCE: Because that depends on the isotope. MICHAEL SHORT: That depends on the isotope. So let's define what this system is. Let's say we stuck in some other isotope N0, and we put it in, and we're going to have to say, if we have N0 prime, it equals minus some destruction term. And in this case, N0 can absorb a neutron to become N1. N1 is decaying to N2, N2 is decaying to N3. But also, N1 can be burned by neutrons, N2 can be burned by neutrons, and N3 can be burned by neutrons. So here I've given you kind of a simplistic situation that doesn't usually exist. Here I've given you a situation that you could replicate in the reactor. How do we model this nuclear activation analysis process? Well, first of all, what's the creation rate of N0, the stuff we put in the reactor? Are we creating any? No? Luke?
AUDIENCE: It's all going to be created if N2 absorbs the neutron and [INAUDIBLE]? MICHAEL SHORT: It could be. So if we had this for the following nuclear reaction, now it's getting crazy. We can model that, too. Let's do it. OK, I was going to say no, but let's do it. We can-- so what I'm trying to do here is give you the mathematical tools to model any real physical situation. Usually in this class like when I took it, the discussion stopped here and we got to start looking at different graphs of secular versus transient equilibrium like in the reading. But I want you guys to have the intuition to say, all right, let's take any crazy decay diagram, right? And N3 becomes N1, let's just go nuts. How do we set up the differential equations for this assuming that computers can solve them? In every case-- where's my long pointer? Go back to this here. The change is the creation minus the destruction. So what are all the creation sources in our new scenario for N0? Well I pose you a simpler question. If there is no isotope N2, can you create any N0? No. Because the only way to make N0 is to start with N2. So we know its creation term is going to have an N2 in it. What else does it depend on? AUDIENCE: Cross-section. MICHAEL SHORT: I heard both of the pieces of the answer correct at the same time. It depends on the flux of neutrons, and it depends on the cross-section. This macroscopic cross-section right here. OK? Or I'm sorry, no, no. It depends on the microscopic cross-section because we have an amount of N2. But it does depend on how many neutrons you throw at it and what is the probability that each of those neutrons makes some N0. So what we've got right here is a reaction rate. So who remembers from the like second or third lecture, we said a reaction rate can be expressed like macroscopic cross-section times a flux, which is the same as a microscopic cross-section times number density times flux? But this is better known as a macroscopic cross-section.
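As a quick unit check of that reaction-rate expression, here is a minimal sketch; the cross section, number density, and flux below are round assumed values, not numbers from the lecture.

```python
# Sketch: reaction rate R = sigma * N * phi = Sigma * phi, with the units
# spelled out. All numbers are round assumed values, not data from the lecture.

BARN = 1.0e-24              # cm^2 per barn

sigma = 2.0 * BARN          # microscopic cross section (cm^2), assumed 2 barns
N = 5.0e22                  # number density (atoms/cm^3), assumed
phi = 1.0e13                # neutron flux (n/cm^2/s), assumed

Sigma = sigma * N           # macroscopic cross section (1/cm)
R = Sigma * phi             # volumetric reaction rate (reactions/cm^3/s)

print(Sigma)                # ~0.1 per cm
print(R)                    # ~1e12 reactions per cm^3 per second
```

The units chain exactly as described in the lecture: cm^2 times 1/cm^3 gives 1/cm, and 1/cm times n/cm^2/s gives reactions per cm^3 per second.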
Remember, I kind of showed this to you very briefly when we talked about cross-sections, now is where we actually use them. So the cross-section's like the probability that a neutron coming in to atom N is going to react with it. The macroscopic cross-section in units of, let's say, 1 over centimeters is the total-- let's say the total probability accounting for how many are there, and the flux is in neutrons per centimeter squared per second. Combine these together, and you get a reaction rate in atoms per centimeter cubed per second, a volumetric reaction rate. There we go. So N0 can be created. Let's give this cross-section a designation from N2 to N0. So let's call it cross-section 2,0. How can N0 be destroyed? I'll give you a hint, it looks very similar to this term. Anyone want to take a guess? Yeah? AUDIENCE: Could it undergo fission? MICHAEL SHORT: We haven't specified that. I'm going to cut the craziness there, I think. But just look at the reactions right here. N0 can absorb a neutron and become N1. So how do we mathematically write that? AUDIENCE: Minus the cross-section N-- MICHAEL SHORT: Yep. Minus the cross-section of? AUDIENCE: N1. MICHAEL SHORT: Let's say 0 going to 1. Let's just call it that. Times the neutron flux. Times what? AUDIENCE: Times N0? MICHAEL SHORT: Times the amount that's there, N0. So this is your destruction term. Using this pattern, we can fill in all the remaining terms for all the remaining isotopes. So what are the creation mechanisms for isotope N1? Well, just follow the arrows. AUDIENCE: --same term-- MICHAEL SHORT: Mm-hmm. AUDIENCE: --on the first equation for N-- MICHAEL SHORT: Yep. There's this one right here. We can have-- AUDIENCE: --the sign. MICHAEL SHORT: But flip the sign because it's creation. So sigma 0,1 flux N0. And what else can create N1 because we're just going crazy today? N3 can create N1 because we said so. But-- yeah, because we said so.
So now we'll say also we'll have this cross-section for 3 turning into 1 times flux times N3. Minus the decay of N1 using our activity expression, minus what else? AUDIENCE: [INAUDIBLE] cross-section of the [INAUDIBLE]? MICHAEL SHORT: I heard some-- yep, that will be the cross-section of N1. Let's call it going to some isotope we don't care about times the flux times N1. So as long as you can draw a like arrow decay and destruction production diagram, we can put this to math. That's the crazy thing. Let's finish it up. How about N2? We know that N1 can decay to N2. What are the other production mechanisms for N2? Follow the arrows. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: That's it. Because we're not changing anything at this point. What are all the destruction mechanisms for N2? AUDIENCE: Decay or [INAUDIBLE] MICHAEL SHORT: Yep. It can decay, it can be absorbed by a neutron to become something we don't care about. Let's see, let's call it cross-section 2 null times flux times N2 minus the cross-section from 2 to 0, just this term with a minus sign. How about N3? What are all the ways we can make N3? Could you say it a little louder? AUDIENCE: Only from decay. MICHAEL SHORT: Only from decay. Again, just see which arrows are pointing at it. And what about the destruction mechanism for N3? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yep. So there's some probability it decays, let's say, cross-section 3 null times flux times N3 minus this arrow going back to N1. Running out of space, too many units. From 3 to 1 flux N3. So the point of accepting the escalation of this problem into something crazy is that it doesn't matter how crazy it gets. As long as you have like an arrow-based diagram or a flow chart to say which isotopes become which other isotopes by which other means, you can pose and correctly write the set of equations that defines them. This is when I would bring in MATLAB or Mathematica. 
I could make you do this analytically, but this isn't a course 18 class and-- yeah, we don't want to go there. Cool. So I think-- I guess it's probably getting towards 10 of 10 of. Close enough. It's like three of 10 of. So I'd like to open it up to any questions because we let this-- I let this escalate freely to prove the point that as long as you know what decays into what or what creates or destroys what, you can set up the equations correctly. What we'll be doing on Tuesday is graphing this, where we can pose an arbitrarily complex set of equations and you can start looking at, well, the change in one depends on the amount of the other, and you can almost graphically solve this on paper. Forget Mathematica and MATLAB. If you look at last year's exam, I actually posed a more complex set of these and said draw the solution. And I'm going to-- we're going to show you how to do that. But any question on how we formed these? Yep? AUDIENCE: What is sigma again? I'm sorry. MICHAEL SHORT: So we just said let's say N3 decays to some isotope we don't care about. That's how I initially had it, and then I think Luke said, well can N2 become N0? And I said, yeah, sure. So there can be a cross-section for every type of reaction. So in reality, you might have any reaction under the sun, right? One isotope could absorb a neutron and decay by like any three or four different mechanisms into something else. You may have different probabilities for each of these. Yeah? AUDIENCE: Do those decay constants [INAUDIBLE] MICHAEL SHORT: The only time I'd expect you to find these decay constants is if I told you what these isotopes were. Those are also listed on the table of nuclides. As in they give you the half-life, and you know from this half-life relation what the decay constant is. If I didn't tell you what these isotopes were, I would just have you keep these symbols as lambda 1 and lambda 2.
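For reference, going from a tabulated half-life to a decay constant is one line; the example below uses Co-60's roughly 5.27-year half-life from the table of nuclides (the 1e20 atom count is an arbitrary illustration).

```python
import math

# The half-life relation from the lecture: lambda = ln(2) / t_half.
# Example isotope: Co-60, half-life about 5.27 years (table of nuclides).
t_half_s = 5.27 * 365.25 * 24 * 3600      # half-life in seconds

lam = math.log(2) / t_half_s              # decay constant (1/s)
activity = lam * 1.0e20                   # A = lambda * N, for an assumed 1e20 atoms

print(lam)        # ~4.2e-9 per second
print(activity)   # ~4.2e11 decays per second
```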
And I might pose a question which we'll solve graphically on Tuesday, like let's say, solve this set of equations where lambda 2 is much greater than lambda 1. I don't care what the numbers are, let's just look at that general relation. And all the graphs for that sort of situation will follow the same pattern. Cool. Any other questions on how we constructed this set of differential equations? And know that I'll never ask you to solve them numerically or analytically. Yeah. That's why we have computers and this is the future. These I might expect you to know how to derive, but this is the simplest possible case. All right, if there's no questions, then let's take another 10-minute break and I'll be here for recitation to go over whatever problems you guys would like to do.
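The bookkeeping rule from this lecture-- every isotope's rate of change is its creation terms minus its destruction terms-- transcribes almost mechanically into code. The sketch below implements the network improvised on the board (N0 absorbs a neutron to become N1; N1 decays to N2; N2 decays to N3; N2 and N3 can react back to N0 and N1; each of N1, N2, N3 can also be burned into isotopes we don't track). Every flux, cross section, and decay constant here is a made-up placeholder value.

```python
# Sketch of the improvised activation network (all numbers are placeholders):
#   N0 + n -> N1 (sigma_01), N1 -> N2 -> N3 (decay),
#   N2 + n -> N0 (sigma_20), N3 + n -> N1 (sigma_31),
#   N1, N2, N3 also burned by the flux into untracked isotopes (sink).
phi = 1.0e14                          # neutron flux (n/cm^2/s), assumed
lam1, lam2 = 1.0e-3, 5.0e-4           # decay constants (1/s), assumed
b = 1.0e-24                           # 1 barn in cm^2
s01, s20, s31 = 2.0*b, 0.5*b, 0.3*b   # reaction cross sections, assumed
s1x, s2x, s3x = 1.0*b, 1.0*b, 1.0*b   # burn-to-junk cross sections, assumed

N0, N1, N2, N3, sink = 1.0e20, 0.0, 0.0, 0.0, 0.0
dt = 1.0                              # time step (s)
for _ in range(100_000):              # roughly 28 hours in the flux
    dN0 = s20*phi*N2 - s01*phi*N0                       # created from N2, absorbed to N1
    dN1 = s01*phi*N0 + s31*phi*N3 - (lam1 + s1x*phi)*N1
    dN2 = lam1*N1 - (lam2 + s2x*phi + s20*phi)*N2
    dN3 = lam2*N2 - (s3x*phi + s31*phi)*N3
    dsink = (s1x*N1 + s2x*N2 + s3x*N3) * phi            # untracked products
    N0 += dN0*dt; N1 += dN1*dt; N2 += dN2*dt
    N3 += dN3*dt; sink += dsink*dt

print(N0, N1, N2, N3)
print(N0 + N1 + N2 + N3 + sink)   # total atoms: conserved by construction
```

Because every destruction term in one equation reappears as a creation term somewhere (including the sink), the total atom count is conserved, which is a handy check that the arrow diagram was transcribed correctly.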
MIT 22.01 Introduction to Nuclear Engineering and Ionizing Radiation, Fall 2016. Lecture 17: Ion-Nuclear Interactions I: Scattering and Stopping Power Derivation, Ion Range.

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: I wanted to do a quick review of all the photon interactions, because I've released problems at 5:00 for you guys. They involve your banana data. So we're going to be taking a second look at all the banana data and the problem statement for the lab part is simple. Identify all the peaks. Tell me where they came from. And tell me all the peaks that should be there that you don't see and why. And that's like a quarter of the problem set or something. So just to review the three effects that we talked about sort of in order of what energies they're important in, my spelling I'm sure will be slower than usual today. I spent like 3 and 1/2 days listening to Russian presentations with an English translator microphone. Russian scientific presentation is really similar to English. All the technical words are the same, but as soon as you start trying to talk to a two-year-old, you're just lost. But it's pretty cool. So we went over the photoelectric effect, Compton scattering, and pair production. And so in addition to knowing what these three mechanisms actually are and how to tell what they would look like on a given detector spectrum, the other two important things we wanted to remember are what are the cross-sections, so what are the relative probabilities of each happening as a function of the energy of the photon and the material that they're going into, and then filling in this map of if you have energy and z, where are these effects most prevalent? Does anyone want to kick me off?
Does anyone remember the general form of the cross-sections for any of these effects? Or does anyone remember what this map looks like? Yeah? Chris. STUDENT: [INAUDIBLE] PROFESSOR: Indeed. STUDENT: [INAUDIBLE] PROFESSOR: That's right. So pair production-- I think we gave it the symbol kappa to go along with a reading that you guys have-- was around here. That's because it's not going to happen below 1.022 MeV, because you need the energy to make the positron-electron pair. And indeed, the more electrons there are in each atom, the more likely pair production is. This happens when the photon gets near the nucleus, which is going to have a higher charge for higher z and so on. And so pair production, it was proportional to-- you'll never need to know the exact forms of the cross-sections. That's what books are for. But it was proportional to-- let's see, I think it was like z to the third or fourth, pretty strong like that. What about Compton scattering? Where does that lie on this map? STUDENT: [INAUDIBLE] PROFESSOR: Yup. So in this general region. We'll just give it a C for Compton scattering. And in that one, the cross-section was proportional to something like 1 over the energy or h bar omega in your reading. That's the same thing as saying photon energy. And then what about photoelectric effect? Well, that's the only place there is left, right? We'll give it the symbol tau. So I'll put these up here. I don't quite know why they chose those symbols. But I'll just stick to the notation in the reading. And then the idea here is this was proportional to something like z to the fifth over-- what is it, like energy to the like 7/2. So significantly low energy, significantly high z. And does anyone remember at what energy does the photoelectric effect start to kick in? Very close to zero. So here, the energy has got to be greater than or equal to, but what is the photoelectric effect physically? STUDENT: [INAUDIBLE] PROFESSOR: Yep.
A gamma gets absorbed, or any photon gets absorbed that knocks out an electron. So how energetic does it have to be to knock out the electron? STUDENT: Binding energy of the electrons. PROFESSOR: The binding energy of the lowest bound electron, which we give that symbol phi or the work function. The idea here is that as soon as you have enough energy to eject the outermost electron, which is super low for the alkali metals, like sodium, potassium, cesium, then you can exceed the work function and get the photoelectric effect going. And for this one, we said the energy here has to be greater than or equal to 2 times the rest mass of the electron c squared, better known as 1.022 MeV. Is there a minimum energy for Compton scattering? Photons can scatter. They don't have to have any energy to scatter. Certainly. And let's see the two interesting bits of technology we talked about related to these, one was called a Compton camera, where you could actually use two detectors. Let's say you're looking for a tiny source in a big box somewhere. You can have one detector. And you can have a second detector, so that this source is sending out gammas in all directions. And let's say one of them interacts with detector one, bounces off, and interacts in detector two. At that point, you've constrained sort of the angle between these detectors, so that you know what energy the gamma came from. And you know generally where it came from physically, which is a cool piece of equipment. I'm going to try to find pictures of one of these actual things, because I actually haven't seen one myself. I've just heard it described physically and it seems to make sense. And the second one that we touched upon at the very end of last class had to do with this thing right here. Does anyone remember thermionic devices? Well, the work function for some materials, like for cesium, the work function is a little less than an eV. 
It's like 0.7 electron volts, which means when you get things up to about 2,000 Celsius or so, the thermal energy of the electrons starts to exceed the work function, and the outer electrons just boil off. So if you have two pieces of material, probably in a vacuum, and one of them is like 2,000 C, and one of them is, let's say, room temperature, you end up with this net flux of electrons boiling off the hot one to the cold one. And this has been one of the methods proposed to directly convert heat to electricity for ultra high temperature applications, like space reactors or other things that can get super crazy hot. So it's one of those energy conversion mechanisms-- did anyone ever hear about this one in high school? Highly, highly doubt that it would ever be mentioned. One of the professors in our department, Elias Gyftopoulos, was one of the folks that came up with this whole idea. And in my senior design course, we actually designed a space reactor that uses thermionics, and he showed up in the audience by surprise. And that's probably the dumbest I've ever looked at a presentation, explaining something that someone invented. They knew every single mistake and everything that was wrong. So since then I've kind of boned up on thermionics knowledge. But that's enough for the photon stuff. Now we want to start getting into ion-nuclear interactions and in today's reading, it started off-- I think the first paragraph went something like, the formula for stopping power can be expressed as follows: some constants times z squared over v squared, times the log of m v squared over the mean ionization energy. And I find this explanation to be unsatisfactory. I'm not a fan of the kind of books that just say, here's a formula. For practice, plug things in and use them. So instead, I'm going to skip ahead to a little bit of next week's reading or the [INAUDIBLE] reading and actually derive it. So when I just throw up a formula like this, it's like how the hell do you remember that, right?
Well, it's going to make a lot more sense once we actually derive it. So let's set up this problem. You have a charged particle with charge little z times e. Little z, we'll say, is the number of protons in this nucleus or the charge on an electron if you want it, times the unit charge of an electron. And it's firing at some other electron somewhere else in the material. So the basis of any sort of ion-electron interaction has to start with the electron being attracted or repelled by the ion. And so let's say that this ion exists. We'll draw kind of a unit cylinder around this physical situation. And if we draw this distance right here, it's one of those rare cases where the nomenclature is the same in pretty much every reading. There's this distance b, which we call the impact parameter. It's kind of a funny name for it, but it just means how close your particle gets to that electron when it undergoes this single interaction. And so this particle is moving quite fast with some speed v towards and then away from this electron. And what's going to happen is there's going to be some sort of a Coulomb force between this charged particle and this electron. So let's say you're firing an electron at an electron. There's going to be some negative repulsion. Or if you're firing an ion at an electron, there might be some positive attraction. But at any rate, there's going to be some deflection. So let's just say it's a negatively charged particle. If we draw its actual trajectory, it's actually going to go off kind of barely in that direction, right? Two charges passing in the night. They know each other's there. And they kind of repel each other. So what we want to do is figure out what is the total amount of force in the x and the y direction. Let's just define our axes to make sure we're all on the same page. And can we resolve that into a total amount of energy lost per unit distance?
This quantity right here was referred to as stopping power. Before we launch into it, does anyone know why I put a negative sign on this quantity? STUDENT: This is all [INAUDIBLE]. PROFESSOR: Exactly. Yep. If you're changing the energy in the particle, unless it's, let's say, falling into some gravitational field, which we're not covering today or ever, then any sort of interaction is going to cause the particle to lose some energy. So this quantity right here is going to be negative, and so this quantity right here is going to be positive. We stick a minus sign in front of it. But let's get back to the basics then. What is the force between this charged particle and the electron from 8.02? This Coulomb force. STUDENT: It's a constant. PROFESSOR: There is a constant. Let's just call it k0, because the reading calls it k0. STUDENT: And then it'd be z e [INAUDIBLE].. PROFESSOR: Yes. Yeah. So this is like your q1 and your q2, right? Your charge 1 and your charge 2 over? STUDENT: The distance. PROFESSOR: The distance squared. Let's call that the distance between the two particles. And so now we can say if this is the distance away in the x direction, then we know that r is root x squared plus b squared. So we can stick that in there, and we know that our Coulomb force would then be this k0 little z e squared over root-- I'm sorry not square root, just x squared plus b squared. So like we've done with everything so far in the class-- it's kind of dark in the back. Like we've done with everything in the class, let's split this up into x and y-forces. So if we assume that the electron basically doesn't move, what's the net amount of force in the x direction that this particle is going to feel when it goes from minus infinity, so over here, to plus infinity like over here?
Whatever force it feels repelling it from here, as soon as it hits this midpoint, it gets that same amount of propulsion in the other direction. So your net force, if you integrate from minus infinity to infinity, of your x force as a function of t, that comes out to zero, which makes our life a lot easier. All we have to worry about is the total integral of the y force to figure out how much net deflection do we get in that direction. This integral is also better known as a momentum. Anyone recognize where this comes from? If you take the integral of a force, it's like the integral of a mass times an acceleration, which is like mass times the integral of acceleration, which is like mv, which is a momentum. This is where some of our particle wave stuff is going to get funky, because we're going to start throwing in expressions for particle momentums in wave equations when we start to determine, well, if this is really an electron. There's some limitations on how we can treat it, where it kind of loses its character as a particle. So I just want to warn you that that's coming up. So now, let's make an expression for the y force. If we were to say what is the y momentum imparted, which is an integral of the y component of the force dt, we already have the expression for the force, like you guys derived. k0 times little z e times e over r squared, which is x squared plus b squared. And then how do we get the y component of it? Well, we've got to define an angle. That's our angle theta. What's the y component of that force? STUDENT: [INAUDIBLE] PROFESSOR: Yeah, it's just times cosine theta dt. What's the expression for cosine theta in this physical situation? STUDENT: [INAUDIBLE] PROFESSOR: Close. b over r. And in this case, r is root x squared plus b squared. And the last thing we want is, because we have things in terms of x and b, b's a constant, x is a variable, t's kind of the wrong variable.
So we can do a variable change and say this is equivalent to the velocity of the particle over-- I'm sorry-- to dx over v. We're just using this whole like velocity equals, what is it, distance times time, so our whole, what is it-- yeah. STUDENT: [INAUDIBLE] PROFESSOR: Thank you. Distance equals velocity times time. Thanks, [INAUDIBLE]. OK, anyway. So luckily, I had the expression right and the explanation wrong. So thank you. Was that Luke or Jared's voice? Awesome. OK. So let's put this whole expression in, keep that little embarrassment behind us. We have the integral from negative to plus infinity of k0 little z e squared times b over x squared plus b squared times the square root of itself. So let's just say x squared plus b squared to the 3/2, and there's a v on the bottom dx. This is finally valuable. So we're getting closer. Let's take all the constants and shove them outside the integral. So we have a k0 little z e squared times b over velocity times the integral of just 1 over x squared plus b squared to the 3/2 dx. Not remembering the formula off the top of my head, I-- yeah? STUDENT: So we can treat the velocity as a constant even though it's losing energy? PROFESSOR: Yes, that's-- well, we'll call it a crude derivation. But if we're assuming that the electron basically doesn't change position, that it changes so little, then we're going to assume that also the velocity basically doesn't change, that one collision for a high enough velocity doesn't lose that much energy. So that's what we're going with for now. And we'll actually be able to compare this kind of crude derivation to one done from quantum mechanics, and they look pretty similar. There's like an extra factor of two or something. But as I showed you guys in preparing for the test, when I said 9 equals about 10, it therefore follows that 1 equals about 2, and as long as we get the constants and orders of magnitude right, we're going to gain the physical intuition for what we're looking at.
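The leftover integral has a closed form, and it's worth checking numerically before trusting it: a plain trapezoid sum over a wide window (pure standard library, no symbolic math) should approach 2 over b squared for any b. The window width and step count below are just "wide enough, fine enough" choices.

```python
# Numerical check that  integral_{-inf}^{+inf} dx / (x^2 + b^2)^(3/2) = 2 / b^2.
# The integrand dies off like 1/x^3, so the tails beyond the window are tiny.
def shell_integral(b, half_width=200.0, n=200_000):
    h = 2.0 * half_width / n
    total = 0.0
    for i in range(n + 1):
        x = -half_width + i * h
        w = 0.5 if i in (0, n) else 1.0    # trapezoid end-point weights
        total += w * (x*x + b*b) ** -1.5
    return total * h

for b in (0.5, 1.0, 2.0):
    print(b, shell_integral(b), 2.0 / b**2)   # the two columns should agree
```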
I'll leave it up there. Anyway, I evaluated this integral, and it came out to 2 over b squared. So this just comes out to k0 z e squared b times 2 over vb squared. Cancel one of the b's, and we get 2 k0 z e squared over vb. Yeah, that's what we have for the stopping power for this sort of one particle hitting one electron. Now, we have-- well, sorry, that's the momentum equation. But we're interested in the change in energy. So what's that equation we've used before to go from momentum to energy? Our kinetic energy t. STUDENT: Square root of q [INAUDIBLE].. PROFESSOR: Other way around. So let's do it that way, right, so p equals root 2 mT. OK, so square both sides. Yeah, you got it. And we have our energy T is p squared over 2m. So let's take this small little mess, stick it in here, and then we end up with 4k0 squared little z squared e to the fourth over 2mv squared b. Cool. And so this gives us the little differential energy change from one electron collision. Yeah? STUDENT: [INAUDIBLE] PROFESSOR: I think we cancel one of those, right? STUDENT: Yeah, but then when you square [INAUDIBLE].. PROFESSOR: Oh yeah, you're right. Thank you. Comes back. b squared b squared. Yep, you're right. Thank you. So now we've only accounted for the ion hitting a single electron as it moves through this hollow cylinder of whatever medium it's going through. So this is when we can kind of take things back from abstract to reality and say, all right, it's moving through some actual material. And we have to describe its electron density in this cylindrical shell. So the electron density in the cylindrical shell depends on-- well, the number density of the material itself, just how many atoms are there times big Z, the number of protons in that nucleus and therefore the number of electrons in each atom, and the volume of the cylindrical shell. So what's the expression for the volume of the cylindrical shell? In differential form? Yeah, I started hearing?
I heard a 2. That's correct. Keep going. Well, 2 pi b gives us the circumference of the circle on the outside of the cylinder. And if we add a little db there, some differential thickness element, and we add on a little dx for some differential distance down the cylinder, we end up with 2 pi b db dx, multiplied by this stuff. And we get some differential change in energy that scales like-- let's say there is a 4 and a 2 there. So we end up with 4 pi k0 squared, little z squared, big Z, n, e to the fourth, b db dx over mv squared b squared. Now those other b's cancel. We can divide everything by dx. And we've already almost got our stopping power expression. We're getting pretty close. Anyone see some similarities between the one I just threw out of my head and what we've got so far? We've almost got the makings of it. So now if we want to account for the fact that our charged particle is probably not shooting through the center of a perfect hollow cylinder, but we're just firing it into like actual matter, we have to account for every possible impact parameter in every possible cylindrical shell that it would be moving through. So in this case, we can integrate this. We've already got our db right there. That's our integrating variable. And now here's where things get a little tricksy. Can we actually integrate this from an impact parameter of 0? And this is not an easy question actually. What do you guys think? STUDENT: [INAUDIBLE] PROFESSOR: So Luke says no, why? STUDENT: [INAUDIBLE] PROFESSOR: We actually have an over b. We have a v squared, but that's not our integrating variable. Yeah, so we have like a 1 over b looking-- STUDENT: [INAUDIBLE] PROFESSOR: Yeah, that's fine. STUDENT: [INAUDIBLE] PROFESSOR: That's true. There's another more physical reason though. But you're right mathematically. Can you know precisely the location of an electron ever? Now I see a lot of people saying no. Why do you say that? STUDENT: [INAUDIBLE] PROFESSOR: That's right.
There's this thing-- the Heisenberg uncertainty principle. It's kind of the punchline of a lot of quantum mechanics jokes. You never know where something is going to be or where it's going. We used to say this about some of the older professors in this department. If you call them and say, I'm on my way, I'm getting there as fast as I can, they could be anywhere in the world. And if they say, don't worry, I'm three miles away. You don't know how long it's going to take them to get here. Same thing with me and getting here. Although I was on MIT standard time, which means five minutes late. Not bad. So in this case, we have to ascribe the electron some sort of a wavelength. So in this case, we can't just treat the electron like a particle whose position we know. We're going to go with our original equation for a photon energy, which looks like hc over lambda. Rearrange that so that we'll have some lambda wavelength equals hc over E. I'm sorry. This is a momentum thing, not an energy thing. And what's the momentum of the electron? From the classical definition? It's just mass times velocity, right? So we'll just stick in the mass of the electron times the velocity right there. And this wavelength right here, the De Broglie wavelength of the electron is as close as we can specify that impact parameter. And it turns out to be pretty significant, like on the order of 0.1 to 0.2 angstroms. You can't tell where an electron is going to be finer than that. So we're going to have this b minimum. I'll just write that in there and some b maximum, where our b minimum is the same as our De Broglie wavelength of the electron, because we can't define its position any better than that, which is just Planck's constant over its mass times velocity. For b max, it comes out to something like hv over this quantity, I bar, what's called the mean ionization potential.
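To put rough numbers on the two limits, here's a sketch for an assumed example not worked in lecture: a 5 MeV alpha particle, with the mean ionization potential taken as I = kZ using k = 30 eV and Z = 14 (silicon), per the estimate quoted in lecture.

```python
import math

# Assumed example (not worked in lecture): a 5 MeV alpha particle, with the
# stopping medium's mean ionization potential I = k*Z, k = 30 eV, Z = 14.
H = 6.626e-34         # Planck constant (J s)
ME = 9.109e-31        # electron mass (kg)
M_ALPHA = 6.645e-27   # alpha particle mass (kg)
EV = 1.602e-19        # joules per eV

T = 5.0e6 * EV                      # kinetic energy, 5 MeV
v = math.sqrt(2.0 * T / M_ALPHA)    # nonrelativistic speed from T = (1/2) m v^2

I_bar = 30.0 * 14 * EV              # mean ionization potential, I = k*Z (assumed)

b_min = H / (ME * v)                # de Broglie limit on localizing the electron
b_max = H * v / I_bar               # beyond this, too little kick to ionize

print(v)                            # ~1.6e7 m/s
print(b_min * 1e10, b_max * 1e10)   # both in angstroms
print(math.log(b_max / b_min))      # the log factor in the stopping power
```

Both limits come out on the angstrom scale, and only their ratio survives inside the logarithm, which is why the final formula is so insensitive to the exact cutoffs.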
What this quantity physically represents is that if your impact parameter is too large, then the electron will-- or the charged particle will feel so little force, that it won't eject an electron and won't really be deflected. And the farthest away it can be corresponds to the minimum amount of energy to create an average ionization in the material. And this mean ionization potential scales with something like this constant k times big Z, where k is on the order of like 30 to 35 eV, I think. But remarkably tight constant, so picking a mean value like that is no problem. And there we have our b min and b max. Those are our limits of integration. I think I planned this just to fill up the boards today. So let's write out the final integral that we have and see what we get. So we have our stopping power, should be the integral from b min, h over mv, to b max, hv over I bar, of 4 pi k0 squared little z squared big Z n e to the fourth over mv squared b db. And I think it was Sarah that mentioned that we'd have a log. I forget who mentioned it. Sorry. That was Luke, OK? You're right. So it ends up just being a natural log. It's like all this junk on the outside times the integral of 1 over b. So this just comes out to 4 pi k0 squared little z squared big Z n e to the fourth over mv squared times the natural log of b max over b min. The h's cancel. We get a v squared. And so all this stuff inside just becomes the natural log of mv squared over mean ionization potential. And we've arrived basically at the same equation that we have over there, that I took care to memorize on the plane. So great that we've gotten here through the math. Let's actually see what this means, and we're going to go into some of the limits of validity like Luke was saying, where you can't have a natural log of 0. So the stopping power formula isn't quite going to work at 0. Nor will it work at super low energies. So if you want to write what this should be proportional to.
I kind of see some constants here that we don't really care about. They don't vary at all. But this looks kind of like a kinetic energy term, doesn't it? Like kinetic energy terms. So it's kind of proportional to this function 1 over T times the natural log of T. When you get rid of all the constants and just express it in terms of the variables, it looks a whole lot simpler. And so let's see what this would look like if we started to graph it out. And this is pretty universal for any charged particle stopping power. So if this was the kinetic energy T, and this was our stopping power, we've got this 1 over T term that's going to look something like this. And we have this natural log of T term, which is going to look something like this that actually goes down to infinity. So like Luke was saying, if these two are multiplied by each other, we're not going to have negative infinity as a stopping power, which would physically mean that once the particle hit zero energy, it speeds up to infinite speed, and that doesn't make any sense. But we can start to draw what the curve would look like with this general envelope. And so at low energies, the stopping power kind of scales like 1 over T. Let's now start drawing another graph with a little more physical intuition, the range of the particle. So while stopping power might be kind of a new quantity that represents the differential amount of energy lost as a function of distance-- that's kind of a mouthful-- the range is pretty simple, just how far it goes. So to get the range from the stopping power, you can integrate-- let's say you fire a particle into a bunch of matter at some energy T. So you start off at energy T, and you want to see how far it gets until its energy reaches 0. Well, you can just integrate the stopping power as a function of T, or you can switch your limits of integration. Let's see. I'm sorry.
I'm not going to switch those limits of integration yet, which is like saying from 0 to T of dT dx dT, which is like saying from 0 to T of dT dx to the minus 1 dT. Much simpler expression. And when you forget all the crazy constants, and you just take this kind of form as the variable part of the expression for stopping power, unless your energy is really high and this natural log counts at all, your range kind of scales like the integral of just 1 over T to the minus 1, which is like the integral of T, which is like T squared, which means that this is a pretty interesting intuitive result, that the range of the particle increases with the square of its energy. So this gives you a good hint to say, if I increase the particle's energy by a certain amount, I'll increase the range by the square of that increase. So anyway, let's start drawing this range curve as a function of x. What this says right here is that if we start our particle at some high energy, and we're firing into the material, and it's losing energy as it goes, and we track this value of the stopping power to figure out how far it's going to go, for the first little while, as this particle loses more and more energy, its stopping power stays mostly constant, and it loses a pretty constant amount of energy as a function of distance. As its energy gets lower and lower, it loses more and more as a function of position. What this actually means is that as the velocity goes down or as the particle's energy goes down, it spends more time in the vicinity of the electron and gets deflected more. It's just that kind of a simple argument. Like the more time it spends near this electron, the more it feels the push. And so it will lose more and more energy as its energy gets lower and lower until you hit the point where this curve breaks down. Where do you guys think that is? Even mathematically speaking. Well, what happens if your natural log term is negative here?
Then you get a negative stopping power, which would be like the particle picks up energy. That's not quite physical at all. So in reality, we know that at some point it's going to taper off, and the stopping power at 0 should be 0. This maximum right here occurs around 500 times the mean ionization potential, which is a pretty low energy, but what this actually says is that when the particle's moving really slow, it's moving so slow that once in a while, it will capture one of those electrons. Like if you fire in a proton or a positively charged heavy ion, if it's moving so slow that it can feel the pull, it will just partially neutralize. And that becomes the next mechanism of energy loss. And so if we keep following this curve, once we hit some sort of a maximum, then it's going to lose less and less and less energy, do less and less damage, and you end up with the same curve, this kind of Bragg peak curve that we did together when we used the SRIM code. Did I go through the stopping range of ions in matter with you guys? Did I show you this on the screen? Remember the curve of the-- let's say damage events per distance or number of implanted ions as a function of distance. You end up with the exact same thing. That's what the SR stands for in SRIM-- the Stopping and Range of Ions in Matter. All SRIM is, is a gigantic stopping power database and a big Monte Carlo engine. So it takes an equation just like this one or-- yeah, just like this one, that one, whichever one you want, and decides, well, how often is the particle going to lose how much energy depending on where it happens to be? And that's all there is to it. This point here, we would call the range or like the average range at which-- well, that's not quite right. There'd be some average range around here where the particles actually stop, when their energy goes to zero.
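Two claims from this stretch of the lecture -- that the loss per unit distance grows as the particle slows, putting the Bragg peak at the very end of the track, and that range scales like T squared -- can both be checked with a toy transport loop. This is a sketch in arbitrary units with the log factor dropped, as in the range estimate above, not a real SRIM calculation:

```python
def transport(T0, k=1.0, dx=0.001, T_cut=1.0):
    """Step a particle through matter with stopping power S = k/T
    (the slowly varying log factor is ignored).
    Returns the per-step energy losses and the total path length."""
    T, losses = T0, []
    while T > T_cut:
        dT = min(k / T * dx, T)   # energy lost crossing this slab
        losses.append(dT)
        T -= dT
    return losses, len(losses) * dx

losses, r10 = transport(10.0)
_, r20 = transport(20.0)

print(r20 / r10)              # roughly 4: range scales like T squared
print(losses[0], losses[-1])  # loss per step is largest at the end of the track
```

Because the energy loss per step grows monotonically as T falls, the biggest deposition happens in the last step, which is the Bragg peak in miniature.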
And in reality, because every one of these processes is random in nature, the impact parameter is going to be kind of random. Not every particle will stop at the same place, because all the electrons are moving around in the atoms. So there's going to be some sort of a range of ranges, which we call straggling, which is to say that not all of the particles end up at the exact same range, but they end up pretty close. I think I want to pause here for a second and see if there's any questions from this four-board derivation or any intuition questions that you guys might have. Yeah. STUDENT: You didn't initially have the negative dT over dx. PROFESSOR: Oh yeah, where'd that go? Let's trace this through. STUDENT: What happened to the negative? [INTERPOSING VOICES] STUDENT: But we didn't actually derive it. PROFESSOR: Yeah, let's see. So the change in energy should have been a negative change in energy. So if we-- it went missing there. So there we go. That's the only other place that seems to be missing. OK, great. Cool. Then if you want to start looking at the number of damage events that this particle will incur, we'll call this the number of ion pairs, which might look suspiciously familiar if you guys remember the Chadwick paper. He was talking about how a proton of this energy should make this many ion pairs at this distance. Now you guys actually have the tools to find out what that number should be, because it's going to be 1 over some ion pair energy, usually around 30 to 35 eV depending on the material, times dT dx, which is to say when the stopping power is higher, you're going to have more ion pairs produced as a function of distance. So the real label for this y-axis here should be like ion pairs or damage or defects or anything like that that refers to the same kind of thing as damage to the material, either by ionization or even by similar nuclear processes.
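The ion-pair arithmetic is one line: pairs per distance is dT dx divided by the roughly 30 to 35 eV it costs to make one pair. As an assumed example in the spirit of the Chadwick discussion, stopping a 3 MeV proton completely:

```python
def ion_pairs_per_distance(stopping_power_eV_per_cm, w_eV=34.0):
    """Ion pairs per unit distance = (1/w) * dT/dx, with w ~ 30-35 eV per pair."""
    return stopping_power_eV_per_cm / w_eV

def total_ion_pairs(deposited_energy_eV, w_eV=34.0):
    """Total ion pairs produced when a particle is stopped completely."""
    return deposited_energy_eV / w_eV

pairs = total_ion_pairs(3.0e6)   # a 3 MeV proton, fully stopped (assumed example)
print(pairs)                     # on the order of 10^5 ion pairs
```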
And so that's what results in those SRIM curves that we showed from before, where you have, let's say, a bunch of protons entering a material here at some high energy. They don't lose very much energy when they go in, but as soon as they get to a low enough energy where their stopping power reaches the maximum, they dump most of their energy in there. And this is the basis behind proton cancer therapy, which I mentioned to you guys in the first or second day of class. Now that we know both exponential attenuation and stopping power, we can explain theoretically why proton therapy is a more effective treatment. So let's say this is the person that contains a tumor. Say it's right there. And you have a choice between firing in an X-ray or firing in a proton. What is the dose to this person, not just to the tumor, but through the whole person, going to look like for X-rays? Get another board. So if we look at the number of ion pairs, and let's say this is the thickness of the person, and the tumor is in this range. And if you send in your X-rays, or you send in your protons, what is the number of ion pairs produced from X-ray or from proton going to look like in either case? So first of all, who wants to do the X-ray one or tell me what it will be? You guys-- yeah, Luke? STUDENT: Would it be pretty flat? PROFESSOR: It'll be fairly flat, but there would be some decay to it. So let's say we defined some initial intensity. X-rays just get attenuated exponentially. So you're going to do a whole lot of damage to the person before the X-rays reach the tumor, which is why when you do X-ray therapy, you have to send in X-rays from a bunch of different locations so that the tumor gets the most dose, and the rest of the person in any one location doesn't get that much. For proton therapy, it's quite different. It looks just like this.
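The X-ray half of that comparison is just exponential attenuation: the local energy deposition goes like mu times I0 times e to the minus mu x, so it is largest at the skin. A minimal sketch, with the attenuation coefficient and the 15 cm tumor depth both assumed for illustration:

```python
import math

MU = 0.2   # assumed linear attenuation coefficient, 1/cm (illustrative)

def xray_dose(x_cm, I0=1.0, mu=MU):
    """Local energy deposition rate ~ mu * I(x) = mu * I0 * exp(-mu * x)."""
    return mu * I0 * math.exp(-mu * x_cm)

entrance, tumor = xray_dose(0.0), xray_dose(15.0)   # tumor assumed 15 cm deep
print(entrance, tumor)   # most of the dose lands before the beam reaches depth
```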
So you might do a little bit of damage as you go in, and you tune the energy of those protons so that they do all the damage in the tumor, and then they stop in the tumor or just beyond it, so that they don't do any more damage to the rest of the person, and they do very little going in. And so that's why proton therapy centers are popping up all over the world, because it's a more effective treatment. It's also more expensive, because you need a proton accelerator. So then, here's a question for you. This is something they actually do in the lab. Say, here's your human. There's your tumor. There is your proton gun at a fixed 250 MeV, firing protons out. How do you change the range of those protons without changing their energy? STUDENT: The distance it has to travel? PROFESSOR: Is what? STUDENT: The distance it has to travel the other way. PROFESSOR: The distance it has to travel specifically? I mean, if they travel in a vacuum, do they lose energy? No. So what can you do? STUDENT: [INAUDIBLE] PROFESSOR: You could deflect them and change their direction. But as we'll get into on Tuesday, if you deflect them, they're going to emit lots of X-rays in the form of Bremsstrahlung. So that's probably not what we want to do. You can put stuff-- and I can't be any more specific than that-- in between the proton beam and the patient, because if the stopping power for 250 MeV protons and 50 MeV protons basically doesn't change, then you just put things in the way. So let's say this is the thickness of the person. You just put some tissue-equivalent stuff, or what they'll call a phantom, so some tissue-equivalent gel or water or some other stuff to lower the proton energy without deflecting the beam that much. So as you guys saw in the SRIM simulation, if you track the 3D positions of these protons as they enter into the material, they all go pretty straight, and then they start getting funny. The computer can fly the ions faster than I can.
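The phantom trick can be put in numbers: to first order, each centimeter of water-equivalent degrader pulls the stopping point back by about a centimeter, so you pick the degrader thickness as full range minus tumor depth. All numbers below are assumed for illustration:

```python
def residual_range(full_range_cm, degrader_cm):
    """Water-equivalent degrader of thickness d shifts the stopping depth back by ~d."""
    return max(full_range_cm - degrader_cm, 0.0)

full = 30.0          # assumed range of the fixed-energy proton beam in tissue, cm
tumor_depth = 12.0   # assumed tumor depth, cm
degrader = full - tumor_depth
print(degrader, residual_range(full, degrader))   # degrader thickness, stopping depth
```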
But no matter what it goes through here, while the protons have high energy, they don't get deflected much, and they don't lose that much energy. And you can very finely tune the amount of stuff in the way. This would be the stuff section before entering the person. So this is why it's so useful. So let me check the time, because I haven't checked at all. The clock's broken. Oh, we have like 10 minutes. So now is a good time to stop for questions and see if you guys have any questions from the derivation or the sort of physical meaning of stopping power in matter. Yeah. STUDENT: What was the nz sort of [INAUDIBLE]? PROFESSOR: Yes, the nz. n is the number density of atoms. So if you're traveling through some actual block of matter, it depends how many atoms are in the way. So the total stopping power decreases with decreasing density. Like if you're going through tungsten, but it happens to be a tungsten gas, that tungsten gas will not have nearly as much stopping power as tungsten metal, because there's just less tungsten in the way. And the z right here is the charge per atom, to say if you're firing electrons into something, the strength of the Coulomb force that they'll feel, or let's say, the number of electrons that they can smack into, is the same as the number of protons in that nucleus if we're not using ionized materials. And we're typically not firing anything into ionized materials. It's just normal neutral matter. Does that make more sense? Cool. Yeah, Luke. STUDENT: Where did that n go? PROFESSOR: It should have been there. Thank you. pi n. Should absolutely be there. Anything else? Yeah, Dan. STUDENT: [INAUDIBLE] PROFESSOR: OK. STUDENT: [INAUDIBLE] PROFESSOR: The charge per atom is big Z. The charge on the particle is little z, because both of them actually matter. So little z tells you the strength of the interaction between the particle and each electron. Big Z tells you how many electrons are there per atom.
Big N tells you how many atoms are in the way. And in that way, you have a complete description of the material. Curious that the mass of the charged particle is absent from this formula, isn't it? Yeah. The mass doesn't matter. You will certainly change the momentum of, let's say, a heavier charged particle less. But in the end, it's just non-contact Coulomb forces that determine the energy transfer between the electrons in there and the charged particle slowing down in the medium. So that is a curious thing to look at, but it is intentional. For the case of this ionization or electronic stopping power, the mass does not enter into it, at least in this formula. There is another version derived in your reading, that they just kind of plop in front of you, that's got the mass somewhere in the natural log term, where it really doesn't change much at all, except for really high energies. So if you want to think about, well, what do I want you to know, I would want you to be able to go through this derivation again, so that I can know you can go from an intuitive example to an actual equation you can use, graph what that equation should look like, and talk about where it really matters and where it breaks down. Like mathematically speaking, if this natural log term is negative, you're not going to have a negative stopping power. Something else has got to occur, and what's happening here is neutralization. And that's why the stopping power curve diverges for really low energies, because sometimes electrons get captured. Any other questions on stopping power? Cool. This is a good place to stop for now.
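The mass-independence point from that exchange is easy to demonstrate: strip the formula to its variable part, z squared over v squared times the natural log of m_e v squared over I, and compare two particles at the same velocity. Only the charge out front differs, so a proton and an alpha particle at equal speed differ by exactly z squared = 4. Units here are arbitrary, chosen just so the log is positive:

```python
import math

def stopping_variable_part(z, v, I=1.0, m_e=1.0):
    """Variable part of electronic stopping: (z^2 / v^2) * ln(m_e * v^2 / I).
    Note that the projectile mass appears nowhere."""
    return (z**2 / v**2) * math.log(m_e * v**2 / I)

v = 10.0   # the same speed for both particles (arbitrary units)
proton = stopping_variable_part(z=1, v=v)
alpha = stopping_variable_part(z=2, v=v)
print(alpha / proton)   # the ratio is exactly z^2 = 4, regardless of mass
```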
MIT_2201_Introduction_to_Nuclear_Engineering_and_Ionizing_Radiation_Fall_2016 | 7_QEquation_Continued_and_Examples.txt

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: So I want to do a quick review of what we did last time, because I know I threw-- I think we threw the full six boards of math and physics at you guys. We started off trying to describe this general situation. If you have a small nucleus 1 firing at a large nucleus 2, something happens, and we didn't specify what that was. A potentially different nucleus 3 could come shooting off at angle theta, and a potentially different nucleus 4 goes off at a different angle phi. Just to warn you guys, before you start copying everything from the board, starting last week I've been taking pictures of the board at the end of class. So if you prefer to look and listen or just take a few notes rather than copy everything else down, I'll be taking pictures of the board at the end of class from now on and posting them to the Stellar site. So up to you how you want to do it. We started off with just three equations. We conserve mass, energy, and momentum. Mass and energy-- let's see-- come from the same equation. M1 c squared plus T1 plus M2 c squared plus T2 has to equal M3 c squared plus T3 plus M4 c squared plus T4. We started off making one quick assumption, that the nucleus 2, whatever we're firing things at, has no kinetic energy. So we can just forget that. What we also said is that we have to conserve x and y momentum. So if we say the x momentum of particle 1 would be root 2 M1 T1 plus 0 for particle 2, because if particle 2 is not moving, it has no momentum.
Has to equal root 2 M3 T3 cosine theta, because it's the x component of the momentum, plus root 2 M4 T4 cosine phi. And the last equation for y momentum-- we'll call this x momentum, call that mass and energy, call this y momentum-- was-- let's say there's no y momentum at the beginning of this equation. So I'll just say 0 plus 0. Equals the y component of particle 3's momentum, root 2 M3 T3 sine theta, minus-- almost did that wrong-- because it's going in the opposite direction, root 2 M4 T4 sine phi. We did something, and we arrived at the Q equation. I'm trying to make sure we get to something new today. So the Q equation went something like-- and I want to make sure that I don't miswrite it at all. So when we refer to the Q equation, we're referring to this highly generalized equation relating all of the quantities that we see here. So I'm not going to go through all of the steps from last time, because, again, you have a picture of the board from last time. But it went Q equals T1 times M1 over M4 minus 1 plus T3 times 1 plus M3 over M4 minus 2 root M1 M3 T1 T3 cosine theta. And last time we talked about which of these quantities are we likely to know ahead of time and which ones might we want to find out. Chances are we know all of the masses involved in these particles, because, well, you guys have been calculating that for the last 2 and 1/2 weeks or so. So those would be known quantities. We'd also know the Q value for the reaction from conservation of mass and energy up there. And we'd probably be controlling the energy of particle 1 as it comes in. Either we know-- if it's a neutron, we know what energy it's born at. Or if it's coming from an accelerator, we crank up the voltage on the accelerator and control that. And that leaves us with just three quant-- two quantities that we don't know-- the kinetic energy of particle 3 and the angle that it comes off at. So this was the highly, highly generalized form. 
Recognize also that this is a quadratic equation in root 3, or root T3. And we did something else, and we arrived at root T3 equals s plus or minus root s squared plus t, where s and t-- let's see. I believe s is root M1 M3 T1 cosine theta over M3 plus M4. And we'll make a little bit more room. t should be-- damn it, got to look. Let's see. I believe it's minus M4 Q plus-- oh, I'll just take a quick look. All right, I have it open right here. I don't want to give you a wrong minus sign or something. I did have a wrong minus sign. Good thing I looked. It's M4 Q plus M4 minus M1 times T1, over M3 plus M4. And so we started looking at, well, what are the implications of this solution right here? For exothermic reactions, where Q is greater than 0, any energy E1 gets this reaction to occur. And all that that says, well, it doesn't really say much. All that it really says is that E3-- I'm sorry-- T3-- and let me make sure that I don't use any sneaky E's in there-- plus T4 has to be greater than the incoming energy T1. That's the only real implication here, is that some of the mass from particles 1 and 2 turned into some kinetic energy in particles 3 and 4. So that one's kind of the simpler case. For the endothermic case, where Q is less than 0, there's going to be some threshold energy required to overcome in order to get this reaction to occur. So where did we say? So, first of all, where would we go about deciding what is the most favorable set of conditions that would allow one of these reactions to occur by manipulating parameters in s and t? What's the first one that you'd start to look at? Well, let's start by picking the angle. Let's say if there was a-- if we had what's called forward scattering, then this cosine of theta equals 1. And that probably gives us the highest likelihood of a reaction happening, or the most energy gone into, let's say, just moving the center of mass and not the particles going off in different directions. Let's see.
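Those s and t expressions can be wrapped in a few lines and checked against a case whose answer comes out in closed form later in the lecture: elastic neutron scattering (Q = 0, M1 = M3 = 1, M4 = A) straight backward at theta = pi must give T3 = T1 times ((A - 1)/(A + 1)) squared. This is a sketch, non-relativistic and with T2 = 0 as assumed on the board:

```python
import math

def outgoing_T3(M1, M3, M4, Q, T1, theta):
    """Solve the Q equation for T3 via sqrt(T3) = s + sqrt(s^2 + t), with
    s = sqrt(M1*M3*T1)*cos(theta)/(M3+M4) and t = (M4*Q + (M4-M1)*T1)/(M3+M4)."""
    s = math.sqrt(M1 * M3 * T1) * math.cos(theta) / (M3 + M4)
    t = (M4 * Q + (M4 - M1) * T1) / (M3 + M4)
    disc = s * s + t
    if disc < 0:
        return None   # below threshold: reaction not allowed
    return (s + math.sqrt(disc)) ** 2

# Elastic neutron backscatter off carbon-12: closed form is T1*((A-1)/(A+1))^2
A, T1 = 12.0, 1.0
T3 = outgoing_T3(M1=1.0, M3=1.0, M4=A, Q=0.0, T1=T1, theta=math.pi)
print(T3, T1 * ((A - 1) / (A + 1)) ** 2)   # the two agree
```

At theta = 0 the same function returns T3 = T1, the "miss" case where the neutron loses nothing.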
Ah, so what it really comes down to is a balance in making sure that this term right here, well, it can't go negative. If it goes negative, then the solution is imaginary and you don't have anything going on. So what this implies is that s squared plus t has to be greater than or equal to 0 in order for this to occur. Otherwise, you would have, like I said, a complex solution to an energy. And energy is not going to be complex. That means the reaction won't occur. So this is where we got to last time. Yes. AUDIENCE: When you say s squared plus t is greater than 0 or greater than or equal to 0, if it's endothermic, wouldn't it also be greater than or equal to 0 for an exothermic? MICHAEL SHORT: It would. But there's a condition here that-- let's see. In this case, for exothermic, Q is greater than 0 and that condition is always satisfied. For an endothermic reaction, Q is negative. So that's a good point. So if endothermic, then Q is less than 0, and it's all about making sure that that sum, s squared plus t, is not negative. What that means is, in order to balance out the fact that you've got a negative Q here, you have to increase T1 in order to make that sum greater than or equal to 0. Yes. AUDIENCE: So that condition, s squared plus t is greater than or equal to 0, is that basically a condition for the endothermic reaction to occur? MICHAEL SHORT: That's correct. If s squared plus t is smaller than 0, which is to say that this whole sum right here doesn't help you balance out the negative Q, then the reaction is not going to happen. And something else might happen. So let's say you were looking at a case of inelastic scattering where a neutron would get absorbed by a nucleus and be re-emitted at a different energy level. If the energy is too small for that to occur, then the neutron is not going to get absorbed. Instead it might bounce off and undergo elastic scattering.
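Setting s squared plus t equal to 0 at theta = 0 and solving for T1 gives the usual threshold formula, T1 threshold = minus Q times (M3 + M4) over (M3 + M4 - M1). Here is a numeric check that s squared plus t changes sign exactly there; the masses and Q below are illustrative numbers, not a specific reaction:

```python
import math

def s2_plus_t(M1, M3, M4, Q, T1, cos_theta=1.0):
    """The discriminant s^2 + t of the Q-equation solution."""
    s = math.sqrt(M1 * M3 * T1) * cos_theta / (M3 + M4)
    t = (M4 * Q + (M4 - M1) * T1) / (M3 + M4)
    return s * s + t

def threshold(M1, M3, M4, Q):
    """Minimum projectile energy for an endothermic (Q < 0) reaction."""
    return -Q * (M3 + M4) / (M3 + M4 - M1)

# Illustrative endothermic case: masses in amu, Q in MeV (made-up numbers)
M1, M3, M4, Q = 1.0, 1.0, 14.0, -2.0
T_th = threshold(M1, M3, M4, Q)
print(T_th)                              # slightly above |Q|, as expected
print(s2_plus_t(M1, M3, M4, Q, T_th))    # essentially 0 right at threshold
```

Note the threshold is a bit larger than |Q|: some of the beam energy has to go into moving the center of mass, not just into the reaction.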
And as a quick flash forward, I'll show you a quick plot of elastic and inelastic cross-sections that kind of hammers this home. You'll be looking at a lot of these plots, that are going to be logarithmic in energy space and probably logarithmic in microscopic cross-section, to bring back that variable from before. If you remember, the cross-section is like the probability that a certain reaction is going to occur. The larger the cross-section, the higher the reaction rate for a given flux of particles. And let's say we'll split this into two things. We'll call it sigma elastic and sigma inelastic. And we'll give them-- that will be-- oh, we have colors. Let's just use those. Even better. Where's my second color? Under the paper. Awesome. So let's say the elastic cross-section is in red, and the inelastic cross-section is in green. And for white, we'll plot sigma total. Usually, one of these cross-sections for any old interaction-- I'm not even being specific on which one. Let's just say neutrons hitting something big-- would tend to look like this. There will be some insanity here, that we'll discuss, and it might start to increase a little bit as it goes to high energies. And this is definitely not to scale. Just for the purposes of illustration. The elastic cross-section is going to look something like exactly this. See how closely I can draw on top of it. So at low energies, when a neutron can't be absorbed and re-emitted at a different energy, the inelastic scattering process can't happen. Which is to say that the incoming kinetic energy-- so this log of E right here, better known as T1 in the symbols up there-- it's not high enough to allow inelastic scattering to occur. So if we want to graph the inelastic scattering cross-section, it will typically look like that, where once you reach your threshold energy, determined by that condition there, then the inelastic scattering turns on and it's actually able to proceed. 
So this is why we're getting into these threshold reactions, because it helps you understand why do some of the cross-sections that we study have the shapes that they do. And this holds true for pretty much every inelastic cross-section I've seen, is they all-- almost all of them, if you're starting at the ground state, require some initial energy input to get going. Whereas elastic scattering can happen at any energy. So let's write that last condition right here. Elastic scattering, which means things just bounce off like billiard balls, you have Q equals 0. No energy changes hands, so to speak. You just get some kinetic energy from 1 being imparted to nucleus 2, but you don't turn any mass into energy. So any questions here before we go into-- yes. AUDIENCE: If theta isn't 0 for cosine theta, how would you plug it in? If you don't know what the angle is, that it's not [INAUDIBLE]?? MICHAEL SHORT: So the question is, if you don't know the angle, what do you do about it, right? If you don't-- so in this case, we've said, what is the bare minimum threshold for this reaction to occur, and the best way for that to happen is for theta to equal 0. If theta is larger, that will actually mean that the reaction is not allowed to proceed unless you get to an even higher energy. So this condition still holds. But if cosine-- so let's say if cosine theta is less than 1, then the value of s goes down, and that makes this condition harder to satisfy. So that's a good question. What that actually means is that for certain nuclear reactions very close to the threshold energy, only certain angles are allowed. I'm not going to get into the nitty-gritty of which angles are allowed. I think it's-- I'll call it minutia for the scope of this class, but it is in the Yip reading, which I'll be posting pretty soon. But suffice to say that the only time you can-- let's say if s squared plus t equals 0. The only time that can happen is when theta equals 0 or cosine equals 1. 
And that means that the nucleus can only recoil in a very, very narrow cone forward. As that energy increases, the allowable angles start to increase further. Does that answer your question? AUDIENCE: Well, yes. So you're just saying [INAUDIBLE] 0 is the minimum [INAUDIBLE]. MICHAEL SHORT: Or you'd say-- let's say if cosine equals 180. Then-- I'm sorry. If theta equals 180, cosine would be negative 1. And that would be, let's say, the least favorable condition. Yes. AUDIENCE: Why did you put the sigma total under sigma elastic? MICHAEL SHORT: Oh, I'm saying-- so sigma elastic plus-- sorry-- sigma inelastic would, let's say, give you the total scattering cross-section. AUDIENCE: And then what was the green line? MICHAEL SHORT: The green line-- oh, it is a little hard to see. The green line is the inelastic cross-section. Yes. I can imagine from back there the green and the white might look a little similar, yes. OK, cool. Yes. Like one of these-- they gave me an almost black one. That's about as invisible as it gets. So make sure to use visible colors. OK. So now let's take the case of elastic neutron scattering. And can anyone tell me how we can simplify the general Q equation for the case of neutrons hitting some random nucleus? What can we start plugging in for some of those values to make it simpler? What about the masses? What's M1 in atomic mass units? AUDIENCE: 1. MICHAEL SHORT: M1 is just 1 to a pretty good approximation. It's actually 1.0087, which we're going to say is 1. And what about M3? AUDIENCE: 1. MICHAEL SHORT: Is also-- yes, also 1. If this is elastic neutron scattering, the same neutron goes in and goes out. So M1 and M3 are the same thing. What about M2 and M4? Have we specified what this nucleus is? So what mass would we give it if it's a general nucleus with N neutrons and Z protons? AUDIENCE: A. MICHAEL SHORT: A, sure. So A, that's again A for the mass number. Cool.
So with those in mind, and then the last thing is we only have T1 and T3. Just for clarity, let's call T1 Tin, like the neutron energy going in. And T3, we'll call Tout. So let's rewrite the Q equation, the Q-eq, with these symbols in there. And the last thing to note, what's Q for elastic scattering? 0, yes. Because no mass is turned into energy or vice versa. So to rewrite the whole Q equation, we'll get 0 equals Tin times M1 over M4, which is just 1 over A, minus 1 plus Tout times 1 plus M3 over M4. M3 is also 1, M4 is also A. And minus 2 root M1 M3. They're both 1. Tin Tout cosine theta. So this looks a whole lot simpler. I'm going to do one quick thing right here and take the minus sign that's hiding in here outside this equation. It's going to make the form a lot simpler. So I'm just multiplying the inside and the outside of this term by negative 1, but hopefully you can see that it's the same thing. It will just make the form a lot nicer in the end. And so now we want to start asking, what is the maximum and minimum energy that the neutron can lose? So let's start with the easy one. What is the minimum amount of energy that the neutron could lose? Anyone? I hear some whispers. AUDIENCE: 0. MICHAEL SHORT: 0. And if the neutron comes in-- if theta equals 0, then you end up with actually Tin will equal Tout. And, that way, let's say delta T1 or delta T neutron could equal 0. So a neutron can lose at least none of its energy in an elastic collision. Hopefully that makes intuitive sense because we would call that a miss. Now let's take the other case. At what angle would you think the neutron would transfer as much energy as possible to the recoil nucleus? So if we have a big nucleus of mass A and we have a little neutron firing at it, at which angle does it transfer the most energy? Yes. AUDIENCE: Pi? If it's like-- MICHAEL SHORT: Exactly. AUDIENCE: [INAUDIBLE]. MICHAEL SHORT: At theta equals pi, which means this-- we call this backscattering. So, yes, good one. 
I'll correct your statement, though. You said if the neutron just stopped and the nucleus moved forward. Does not happen in every case. For example, if you were to-- and I'd say don't try this at home, kids-- put on a nice helmet and run charging at a truck, can you actually just stop cold? And we're not assuming any bones breaking or anything. Chances are you'd bounce right back off. Yes. That's the analogy I like to give for what happens when a neutron scatters off uranium. It's like running at a truck with a helmet on. It will just bounce back. So in the case of theta equals pi-- so we're going to substitute in theta equals pi. Therefore, cosine theta equals negative 1, and we have an even easier equation. 0 equals, let's just say, Tout times 1 plus 1 over A. I'm going to arrange these terms in order of their exponent for Tout, since that's our variable again. And if this stuff is negative 1, then the 2 minus signs cancel conveniently. And we have plus 2 root Tin Tout. Let's see. That's it. And we have minus Tin times 1 minus 1 over A. Ideally, we'd like to try to simplify this as much as possible. So let's combine. Let's try to get everything in some sort of a common denominator, because that would make things a lot easier. If we multiply each of these 1's by A over A-- so let's put that as a step, because we can totally do that-- we get 0 equals Tout times A over A plus 1 over A plus 2 root Tin Tout minus Tin times A over A minus 1 over A. At this point, we can-- well, everything's in common terms, right? We can just extend that fraction sign and put the sign in here. Extend the fraction sign, put the sign in here. And we'll just say that's A. We'll say that's A. Last step that we'll do is try and isolate Tout so at least one of our quadratic factors is going to be simple, like 1. So next step, divide by A plus 1. And then we get 0 equals just Tout plus 2 over A plus 1 root Tin Tout minus Tin times A minus 1 over A plus 1.
Now we've got a simple-looking quadratic equation, even though it's quadratic in the square root of Tout. Yes. AUDIENCE: What happened to the A from the denominator? MICHAEL SHORT: Let's see. AUDIENCE: Could it be Tout over A? MICHAEL SHORT: What did I do? Did I miss an A or dividing by A? AUDIENCE: The last two equations. MICHAEL SHORT: It's from back here? AUDIENCE: No, no, no. It's probably the step you just did. MICHAEL SHORT: Just these steps. AUDIENCE: So you divide by A plus 1. MICHAEL SHORT: Ah, I see. AUDIENCE: Should it be Tout over A? [INTERPOSING VOICES] MICHAEL SHORT: Yes, you're right. So I want to make sure I didn't skip a step in dividing an A. Let me just check something real quick. AUDIENCE: There should've been an A in the minus 2 square root. MICHAEL SHORT: Oh, you're right. If we go back to our Q equation-- let's see. There's an M4 missing, isn't there? That's it. Hah. See, this is what happens when you don't look at your notes. I'll go back and correct those, because then there should have been an over A. There should have been an over A. There should have been an over A. Thank you for pointing that out. And there should have been another-- oh, in this case we can just cancel all of the A's. I knew it came out nice and clean. OK, cool. So at this point, this is a quadratic in root Tout, where we have-- what are our a, b, and c terms for this quadratic formula? So what's a first of all if it's quadratic in root Tout? AUDIENCE: 1? MICHAEL SHORT: Just 1. That was part of the goal of this manipulation, is to make at least one of these things pretty simple. What's b? AUDIENCE: 2 over A plus 1 times radical Tin? MICHAEL SHORT: Yes. 2 root Tin over A plus 1. And c is just that whole term right there. I'll do this up here. So then we can say root Tout equals negative b plus or minus the square root of b squared. So that's 4 Tin over A plus 1 squared. Minus 4 times a times c, so just minus 4 times c. 
So minus 4 times negative Tin A minus 1 over A plus 1. So let's see what cancels. So, first of all, those minus signs cancel. And everything has-- oh, and over 2a. Don't want to forget that. Over 2a, which is just 2. First thing we note is that everything here has a 2 in it, either directly as a 2 or hiding as a square root of 4. So we can cancel all of those. 4, 4, 4, 4. Let me make sure that minus sign is nice and visible. What else is common to everything here? Well, I'll tell you what. I'll write it all out simpler without all the crossed-out stuff. Minus root Tin over A plus 1 plus or minus root Tin over A plus 1 squared plus Tin times A minus 1 over A plus 1. So with that written a little simpler, what's also common and can be factored out of everything? AUDIENCE: Square root of Tin? MICHAEL SHORT: That's right. Square root of Tin. Because there's a root Tin here, and then you can-- everything's got a Tin inside the square roots. You can pull that out. So we have a direct relation between root Tout and root Tin. Minus 1 over A plus 1 plus or minus root 1 over A plus 1 squared plus A minus 1 over A plus 1. What do we do here to simplify all the junk in the square root? AUDIENCE: Multiply the right side by A plus 1 over A plus 1. MICHAEL SHORT: That's right. You can always multiply by something, better known as 1. And that gets everything here-- just like there was a 2 or a root 4 everywhere in the equation, or there was a root Tin and a root Tin everywhere else in the equation, we'll do the same thing to get the A plus 1 out of there. So we'll multiply this by A plus 1 over A plus 1. I'll stick it over there. OK. And we get root Tout equals-- now everything has an A plus 1, so let's bring all of those outside the fraction. Root Tin over A plus 1 times negative 1 plus or minus the square root of 1 plus A minus 1 times A plus 1. Starting to get a lot simpler. Let's see how much-- if I run out of space for this one.
So this stuff right here is just A squared minus A plus A minus 1. The minus A and the plus A cancel out. And then the plus 1 and the minus 1 cancel out. And all that's inside the square root is A squared. So, using the only hopefully nonlinear board technique, I'm going to move to the left. And we end up with root Tout equals root Tin over A plus 1. And all that's left there is A minus 1 if we take the positive root. Almost done. Just square both sides. And we should arrive at a result that might look familiar to some of you. Tout equals Tin times A minus 1 over A plus 1 squared. And we've gotten to the point now where we can determine how much energy the neutron can possibly lose or the recoil nucleus can possibly gain in an elastic collision. It's this factor right here. I'll use the red since it's more visible. This is usually referred to in nuclear textbooks as alpha. It tells you the minimum energy a neutron can keep, which sets the maximum amount of energy a neutron can lose or a recoil nucleus can gain. So what we've arrived at is a pretty important result, that, let's say, the energy, the kinetic energy of a neutron, has to be between its initial kinetic energy and alpha times its initial kinetic energy. This right here is one of the ways in which you choose a moderator or a slowing down medium for neutrons in reactors. So it's this alpha factor right here that really distinguishes what we call a thermal-- or what is it? Like a light water reactor or a thermal spectrum reactor from a fast spectrum reactor. Let's look at a couple of limiting cases to see why. Let's see. Anyone mind if I hide this board here? Or you have a question? AUDIENCE: Yes. Can you explain why you ended up dropping the negative case? MICHAEL SHORT: Let's see. If we took the negative case, we'd end up with minus 1 minus A. You just have an A plus 1 on the top. Yes. So in that case, you just have-- let's see. You just have root Tin, right? Let me see. AUDIENCE: Negative root Tin actually. MICHAEL SHORT: Oh, yes.
So that wouldn't make very much sense, right? Yes. So in that case, well, you don't want to have a negative energy. So that case doesn't make physical sense. Thanks for making sure we explained that. And did I see another question? Yes. AUDIENCE: Yes. What happened to the coefficients you had before Tin? You had 4. You needed 4 or 2, but [INAUDIBLE]. MICHAEL SHORT: Ah. OK. So what I did is I took the square root of 4 out of every term inside the square root and said, OK, they're all 2's. Just like in the next step, I said, all right, there's all of these A plus 1's, including all of these A plus 1's squares inside the square root, and took that out. Or I think even over here. Yes, so the whole thing here has been combine and destroy. Any other questions on what we did here before I go on to some of the implications of what we got? Cool. Let's look at a couple limiting cases. I'll rewrite that inequality right there because that's the important one of the day. So what is alpha for typical materials? Let's say for hydrogen. Alpha equals-- well, it's always A minus 1 over A plus 1 squared. And for hydrogen, A equals 1, equals 1. And then we have 1 minus 1 in the numerator. Alpha equals 0. What this means is that for the case of hydrogen, you can lose all of the neutron energy in a single collision. That doesn't mean that you lose all energy in every collision with a hydrogen atom if you're a neutron, but it means that you can lose up to all of your energy in one single collision. And this is what makes hydrogen such a good moderator or a slower down of neutrons, is when it undergoes elastic scattering, especially at energies below an MeV or so, which is where most of the neutrons in the reactor are, it just bounces around. And the more it hits hydrogens, the more it imparts energy to those hydrogens and slows down. Why do we want to slow the neutrons down in the first place?
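As a sanity check on the algebra above, the backscattering quadratic in root Tout can be solved numerically and compared against the closed-form result Tout = ((A - 1)/(A + 1))^2 Tin. This is a sketch, not part of the lecture; the 2 MeV incoming energy is just an assumed test value.

```python
import math

def backscatter_energy(A, T_in):
    """Solve the theta = pi (backscattering) quadratic in x = sqrt(T_out):
       0 = x**2 + 2/(A+1) * sqrt(T_in) * x - T_in*(A-1)/(A+1),
    keeping the physical (positive) root, as done on the board."""
    b = 2.0 * math.sqrt(T_in) / (A + 1.0)
    c = -T_in * (A - 1.0) / (A + 1.0)
    # quadratic formula with a = 1; the minus root gives a negative sqrt(T_out)
    root = (-b + math.sqrt(b * b - 4.0 * c)) / 2.0
    return root ** 2

def alpha(A):
    """The closed-form factor derived above: alpha = ((A-1)/(A+1))**2."""
    return ((A - 1.0) / (A + 1.0)) ** 2

T_in = 2.0e6  # eV; an assumed fission-neutron energy for the check
for A in (1, 12, 200):
    assert abs(backscatter_energy(A, T_in) - alpha(A) * T_in) < 1e-6 * T_in
```

The positive root reproduces Tout = alpha * Tin for hydrogen, carbon, and lead alike, which is the check that the quadratic-formula bookkeeping above came out right.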
Well, that has to do with another cross-section, that I'm going to draw if I can find some chalk. Like I think I've mentioned before, every nuclear reaction has its own cross-section. And this time I'm going to introduce a new one called sigma fission, the probability, if a nucleus absorbs a neutron, that it undergoes fission and creates more neutrons and like 200 MeV of recoil energy. So in this case, I'll draw it for U235, since this is the one I pretty much remember from memory. And it looks something like that. So what you want is for the neutrons to be at low energies. So this would be around the thermal energy, better known as about 0.025 eV. Your goal is that the more neutrons that you have in this energy region-- oh, that chalk erases other chalk. The more neutrons you have in that energy region, the higher probability you have a fission. So this is the basis behind thermal reactors, is the neutrons all start here. They're born at around 1 to 10 MeV. They don't undergo fission very well at 1 to 10 MeV. So your goal as a thermal reactor designer is to slow them down as efficiently as possible. What's the most efficient way to slow down neutrons? Cram the reactor full of hydrogen. What's the cheapest and most hydrogenous substance we know? Water. This is why water makes such a good reactor moderator. It's pretty cool. There's also lots of other reasons that we use water. It's everywhere, which is another way of saying cheap. It's pretty chemically inert. There are corrosion problems in reactors, but it doesn't just spontaneously combust when it sees air, like sodium does, another reactor coolant. It takes a lot of energy to heat it up. So its specific heat capacity, the Cp of water, if you remember-- I think it's-- was it 4.184 joules per gram per Kelvin? That's about right. One of the highest of any substance we know of. So you can put a lot of that recoil energy or a lot of heat energy into this water without raising its temperature as much as a comparative substance.
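To put a number on that 0.025 eV thermal energy: from E = (1/2)mv^2, a thermal neutron moves at roughly the classic 2,200 m/s, while a 1 MeV fission neutron is about four orders of magnitude faster. A quick sketch (the constants are standard values, not quoted in the lecture):

```python
import math

# Standard constants (SI): neutron rest mass and the eV-to-joule conversion.
M_NEUTRON = 1.675e-27   # kg
EV_TO_J = 1.602e-19     # J per eV

def neutron_speed(E_eV):
    """Classical speed of a neutron of kinetic energy E (in eV): E = (1/2) m v^2."""
    return math.sqrt(2.0 * E_eV * EV_TO_J / M_NEUTRON)

# A 0.025 eV "thermal" neutron: about 2.2e3 m/s (the classic 2,200 m/s).
# A 1 MeV fission-born neutron: about 1.4e7 m/s.
print(neutron_speed(0.025), neutron_speed(1.0e6))
```

This is why "slowing down" from birth to thermal energies spans so many collisions: the speed drops by roughly four orders of magnitude on the way.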
Metals can have heat capacities like three or four times lower. So you wouldn't necessarily want to use a metal coolant. Or would you? In what cases would you want to use a metal coolant for a reactor? Has anyone ever heard of liquid metal reactors before? Just a couple. Good. I get to be the first one to tell you. I did my whole PhD on alloys for the liquid lead reactor. So let's take a look at lead, which has an A of about-- let's call it 200. I think there's some isotopes, like 203 or so. There's probably an isotope called lead 200. What would alpha be for lead? Well, let's just plug in the numbers. A minus 1 is 199. A plus 1 is 201. Square that. Almost 1. Almost. This means that when neutrons hit something like lead, they basically don't slow down. They can lose at least none and at most almost none of their energy. And this is the basis behind what's called fast reactors if you want to use a coolant that keeps the neutrons very fast. Because for uranium 238, there's what's called a-- well, what you do is you want to capture neutrons with uranium 238, make plutonium 239, and then breed that. Or uranium 238 has got its fast fission cross-section. I don't think I want to get into bringing it on the screen today since we're almost at five of five. What I will say is there's lots of other reactor coolants besides water. And it sounds to me like almost no one had heard of a liquid metal reactor. Why would you want to use a liquid metal as a coolant besides keeping the neutrons at high energy? Anyone have any ideas? What sort of properties do you want out of a coolant? Not even for a reactor but for anything. AUDIENCE: Heat transfer. MICHAEL SHORT: Good heat transfer. Metals are extremely thermally conductive. So if you want to get the heat out of the fuel rods and into the coolant and then out to make steam for a turbine, liquid metals are a pretty awesome coolant to use because they conduct heat extremely well. What else? Let's try and think now.
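The three alphas discussed in this lecture (hydrogen, graphite's carbon, and lead) can be tabulated directly from the formula derived above; this short sketch just evaluates it:

```python
def alpha(A):
    """Fraction of its energy a neutron keeps in a head-on (theta = pi)
    elastic collision with a nucleus of mass number A: ((A-1)/(A+1))**2."""
    return ((A - 1) / (A + 1)) ** 2

for name, A in [("hydrogen", 1), ("carbon", 12), ("lead", 200)]:
    print(f"{name}: alpha = {alpha(A):.3f}")
# hydrogen: alpha = 0.000  (can lose everything in one hit -- great moderator)
# carbon:   alpha = 0.716  (some moderating power)
# lead:     alpha = 0.980  (loses almost nothing -- keeps the spectrum fast)
```

The spread from 0 to nearly 1 is exactly the thermal-versus-fast-reactor distinction: hydrogen-rich coolants thermalize neutrons quickly, heavy coolants like lead barely touch them.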
If you were a reactor designer, you don't just have to make the reactor work but you want to make it avoid accidents. What sort of thermodynamic properties about metals could prevent accidents from happening? AUDIENCE: They solidify. MICHAEL SHORT: Yes. So there's one problem. They could solidify. So coolants that have been chosen for reactors have been things like sodium, which melts just below 100 Celsius, liquid lead-bismuth, which melts at 123 Celsius. And I know because I've melted it in a frying pan before. So I did four years of research on liquid lead-bismuth, and if there's anyone that's gotten enough exposure to that, it's me. It does not seem to have affected my brain too much because I only made like two major mistakes on today's board. Good enough. Yes. So we've got hundreds of pounds of lead-bismuth sitting around. It's pretty inert stuff. It's really dense, so it can store a lot of heat. The other thing is the boiling point. Anyone know what temperature metals boil at? [INTERPOSING VOICES] MICHAEL SHORT: Extremely high, yes. Sodium boils at approximately exactly 893 degrees Celsius. Liquid lead-bismuth boils at approximately exactly 1,670 Celsius. You'll actually melt the steel that the reactor is made of before you boil off your coolant. And if you boil your coolant, you have no way of cooling the reactor, and that's something you want to avoid. Water boils at approximately 325-- let's say 288 to 340 Celsius depending on the pressure that's used in the reactors. And that does get reached sometimes, especially in accident conditions. So if you want to make something relatively Fukushima-proof, then you don't want the coolant to boil. So use a liquid metal, which introduces some other problems of its own. I'm going to flash forward a bit to neutrons and reactor design. Because it does take time for this scattering to happen. These collisions, they happen pretty quickly but they do take a finite amount of time.
And in the meantime, you can have what's called feedback coefficients, natural bits of physics that help your reactor stay stable or they don't, depending on whether it's called negative or positive feedback. So we can have either negative or positive feedback. I'll give you one simple example that I'll just introduce conceptually, and we'll actually explain it a little mathematically later in the course. Let's talk about coolant density. If you were to heat up your reactor and the coolant were to get less dense, what do you think would happen to the reaction rate of, well, anything-- scattering, fission, absorption? AUDIENCE: Go down. MICHAEL SHORT: It should go down. Why do you think that is? AUDIENCE: Because there's not as many particles that's close together, so [INAUDIBLE]. MICHAEL SHORT: Exactly. Yes. To reintroduce a bit of the cross-section stuff I mentioned last time, the microscopic cross-section is the probability that, let's say, one nucleus hits one other nucleus. If you then multiply by the number density or how many of them are there, you end up with the macroscopic cross-section. So I'll label this as micro, label this as macro. And then the macroscopic cross-section times your neutron flux gives you your reaction rate. So if you want to get less moderating happening, or less fission, or less absorption, the simplest way is-- well, that's a property of the material. That's whatever your reactor is doing. You can decrease the number density by heating things up and decreasing the density. So this is one of those cases where you can use the reactor to quickly respond with physics before you could respond with human intervention. If you want, let's say, an extra power transient or a sudden increase in heat to slow down the nuclear reaction and not speed it up, you'd pick a moderator that behaves in this way. So in this case, water would get less dense, it would moderate less well, and put fewer neutrons in the high-probability fission region.
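The micro-to-macro chain described above (number density times microscopic cross-section gives the macroscopic cross-section, times flux gives the reaction rate) can be sketched numerically. The densities, the 1-barn cross-section, and the 1e14 flux below are hypothetical illustration values, not from the lecture:

```python
AVOGADRO = 6.022e23  # nuclei per mole

def macroscopic_xs(density_g_cm3, molar_mass_g, sigma_barns):
    """Sigma = N * sigma: number density [1/cm^3] times microscopic
    cross-section [cm^2], giving the macroscopic cross-section [1/cm]."""
    N = density_g_cm3 * AVOGADRO / molar_mass_g   # nuclei per cm^3
    return N * sigma_barns * 1.0e-24              # 1 barn = 1e-24 cm^2

def reaction_rate(Sigma, flux):
    """Reactions per cm^3 per second: R = Sigma * phi."""
    return Sigma * flux

# Water-like medium with an assumed 1-barn cross-section; heating the
# coolant lowers its density, hence N, hence Sigma, hence the rate.
cold = macroscopic_xs(1.00, 18.0, 1.0)   # room-temperature water
hot = macroscopic_xs(0.70, 18.0, 1.0)    # heated, less dense water
assert reaction_rate(hot, 1e14) < reaction_rate(cold, 1e14)  # negative feedback
```

The assertion is the density feedback in one line: less dense coolant means fewer nuclei in the way, so every reaction rate drops, and the power transient damps itself.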
Then let's think about what happens if you're depending on your neutrons to stay fast or at high energy. Let's say you were to have a really bad day and boil your liquid sodium. All of a sudden, what little moderating power exists in that sodium disappears or gets even lower, and that would cause your reactor power to increase. So one of the dangers of fast spectrum reactors is positive-- what's called positive void coefficient, where if you make a bubble of gaseous sodium, your power increases rather than decreases. That would increase the heat. That would cause the power to go up. That would increase the heat. That would cause the power to go up. Luckily, there are many, many other negative feedback mechanisms that could be built in to make sure the overall feedback coefficients are still negative. This also lets us understand a little bit about what went wrong at Chernobyl. And I'll give you a 1-minute flashforward. Because the control rods that were made of-- let's see. I don't remember what the composition of the control rods is, but they were neutron absorbers. The control rods in Chernobyl looked something like this, where there was the absorber here. And this was-- they were graphite tipped. And as you lower that graphite down into the reactor, you're all of a sudden introducing something. Well, that's an OK moderator. For carbon, let's say A equals 12. So our alpha will be A minus 1 over A plus 1, squared. I'm not going to write almost equal to 1 because that isn't quite almost equal to 1. What does this actually equal? Let's see. Let's just say definitely less than 1. There is some moderating power to graphite. It's also a very bad absorber. And what this meant is that as you lowered those control rods into the reactor, you suddenly introduced a little more moderation when things were already going bad, and that caused the power level to increase further.
There were other problems, like it was designed so that if you boiled some of the coolant, you would have positive feedback. And that is the sort of 1-minute synopsis as to what all went haywire at Chernobyl. But we'll be doing a second by second or, in some cases, millisecond by millisecond play-by-play of what went wrong with Chernobyl. And we could probably do the same for Fukushima now, now that we understand what happened, based on the physics you'll be learning in this course. And this is actually a perfect stopping point, because next up we're going to be looking at the different processes of radioactive decay, many of which are a simplification of this Q equation, and I think some of which are probably familiar to you guys, because radioactive decay is part of the normal lexicon, especially nowadays. So since it's five of five, do you guys have any questions on what we've covered so far? Yes. AUDIENCE: [INAUDIBLE] if you have water as your coolant and it gets too hot, [INAUDIBLE]. MICHAEL SHORT: Yes. AUDIENCE: Right. And that will decrease the reaction rate. MICHAEL SHORT: Um-hm. AUDIENCE: And that's how like the [INAUDIBLE]. MICHAEL SHORT: It's actually-- are you asking whether the water feedback is part of the backup? That's your primary line of defense. Your backup is human intervention, because, compared to physics, humans are really, really slow, like many orders of magnitude slower. It takes microseconds for things to thermally expand. It definitely takes more than seconds for a human to respond. Anyone ever done those tests where you have a light blinking and you have to hit a button the second you see the light? What's the fastest any of you guys have ever responded? Anyone remember? Anyone beat a second? AUDIENCE: Maybe. MICHAEL SHORT: Maybe. A tenth of a second? [INTERPOSING VOICES] MICHAEL SHORT: And there all you have to do is hit a button. All you have to do is hit the only button when you see the only light.
What if you're piloting something that's about as complicated as the space shuttle but more likely to explode? What do you think your reaction time will be? Probably long. You'll probably have to pull out the manual. And probably you'll have to RTFM for a little while. And maybe you'll find out what you have to do and maybe you won't. So it's actually operator error that has caused most of the near misses or actual misses in nuclear reactors. The physics, except for the Russian RBMK design that was for Chernobyl, usually it's human error that's the downfall of these things. So by understanding the physics here, we can rely on it to keep things safe. Yes. AUDIENCE: When you have a lead reactor-- or I don't know [INAUDIBLE] one of these, but what is like-- how do you cool the lead once it starts getting hot? MICHAEL SHORT: Ah, good question. How do you cool the liquid lead? You can't send liquid lead through a turbine, right? So at some point you've got to make steam and use that to drive a turbine. You can use what's called a heat exchanger. At its simplest, you can think of it like a couple of tubes where the lead is going through here and the steam is going through here. And they have a very thin barrier between them, so you have all this heat moving from the lead, which is hotter, to the steam, which is colder. They actually have built a bunch of these lead reactors. AUDIENCE: Is that real? MICHAEL SHORT: Yes. The Russian fast attack subs, the alpha class subs, were powered by and are powered by liquid lead reactors. They're the only reactor that can outrun a torpedo. So when you have a liquid lead reactor powering it and you've got a panic button that says, forget the safety systems. Outrun a torpedo. You have a choice between maybe dying in a reactor explosion and definitely getting shot out of the water with a torpedo. You do whatever you can. And these subs only run two or three knots slower than a torpedo.
So just like that old algebra problem, if this guy leaves Pittsburgh at 8 AM traveling 40 miles an hour, and I'm trying to get to Boston, 30 miles an hour, if a torpedo leaves one sub moving this velocity and the alpha attack sub senses it from this distance and starts moving at a similar velocity, chances are the torpedo runs out of steam before it reaches the alpha sub. And that's only because they can have an extremely compact liquid lead nuclear reactor as the power source. AUDIENCE: So can you [INAUDIBLE]? How do you move [INAUDIBLE]? MICHAEL SHORT: Good question. How do you use the-- how do you move the liquid lead? You can move it by natural convection but that's extremely slow. So there are multiple ways of moving it. One of the cool ones is called an EM, or electromagnetic, pump. It induces eddy currents in the liquid lead because it's also a conductor. And those eddy currents couple with the EM field from the EM pump and cause the lead to just start moving on its own. So it's a no moving parts pump. The only problem is it's like 1% or 2% efficient. Yes. So they only use those on the subs, but you can use EM pumps to move conductive coolants. So I think it's pretty awesome. And there have been land versions, too. In fact, there's a company called AKME Engineering in Russia that's trying to commercialize a small modular liquid lead reactor. The other nice thing about these liquid metal coolants is you can make the reactors much, much smaller and denser than in a light water reactor. In a light water reactor, you're relying on a lot of water to cool things and a lot of water to be there to moderate your neutrons. In a liquid metal reactor, where you don't need moderation, well, you don't need-- all you need is enough coolant to keep things cool. So you can tighten stuff up and make it more compact. So that's one of the nuclear startups coming out nowadays. This is an awesome time to be in nuclear.
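The torpedo chase described above is a standard pursuit problem: with only a few knots of speed difference, the torpedo has to cover many times the initial gap before it catches up, which can exceed its range. Every number below is made up for illustration, not a real submarine or torpedo specification:

```python
def torpedo_catches_sub(gap_km, v_torpedo_kt, v_sub_kt, torpedo_range_km):
    """Does a torpedo close a head-start gap before running out of range?
    All inputs are hypothetical illustration numbers."""
    KT_TO_KMH = 1.852                                  # 1 knot = 1.852 km/h
    closing_kmh = (v_torpedo_kt - v_sub_kt) * KT_TO_KMH
    if closing_kmh <= 0:
        return False                                   # never closes the gap
    t_hours = gap_km / closing_kmh                     # time to close the gap
    distance_run = v_torpedo_kt * KT_TO_KMH * t_hours  # torpedo's own travel
    return distance_run <= torpedo_range_km

# A sub only ~3 knots slower forces the torpedo to run ~15x the initial gap:
print(torpedo_catches_sub(gap_km=10, v_torpedo_kt=45, v_sub_kt=42,
                          torpedo_range_km=40))  # False: it runs out first
```

With a slow target (say 20 knots), the same torpedo catches up easily, which is the whole point of a reactor compact enough to nearly match torpedo speed.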
When I started nuclear, there were approximately exactly zero nuclear startups. Like TerraPower didn't even exist yet. Now there's something like 52 in the US and others around the world. So like this is the time to be in nuclear if you're up for startups and not just working in academia, or a lab, or a big corporation. There's a lot of little companies now doing some crazy things based on some pretty good physics. So maybe time for one more question before I let you guys go. If not, then I'll see you guys on Tuesday when we start radioactive decay. |
MIT_2201_Introduction_to_Nuclear_Engineering_and_Ionizing_Radiation_Fall_2016 | 22_Simplifying_Neutron_Transport_to_Neutron_Diffusion.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: I think I might actually use all 16 colors today. Oh no, this is the most satisfying day. Whereas Tuesday was probably the most mathematically intense, because we developed this equation right here, today is going to be the most satisfying, because we are going to cancel out just about every term, leaving a homogeneous, infinite reactor criticality condition. So we will go over today, how do you go from this, to what is criticality in a reactor? So I want to get a couple of variables up over here to remind you guys. We had this variable flux of r, e, omega, t in the number of neutrons per centimeter squared per second traveling through something. And we also had its corresponding non-angular dependent term, on just r, e, t, if we don't care what angle things go through. We've got a corresponding variable called current. So I'll put this as flux. We have current j, r, e, omega, t, and its corresponding, we don't care about angle form. And today, what we're going to do is first go over this equation again so that we understand all of its parts. And there are more parts here than are in the reading. If you remember, that's because I wanted to show you how all of these terms are created. Just about every one of these terms, except the external source and the flow through some surface, has the form of some multiplier, times the integral over all possible variables that we care about, times a reaction rate d stuff, where this reaction rate is always going to be some cross-section, times some flux. 
So when you look at this equation using that template, it's actually not so bad. So let's go through each of these pieces right here. And then we're going to start simplifying things. And this board's going to look like some sort of rainbow explosion. But all that's going to be left is a much simpler form of the neutron diffusion equation. So we've got our time-dependent term right here, where I've stuck in this variable flux, instead of the number of neutrons n, because we know that flux is the number of neutrons, times the speed at which they're moving. And just to check our units, flux should be in neutrons per centimeters squared per second. And n is in neutrons per cubic centimeter. And velocity is in centimeters per second. So the units check out. That's why I made that substitution right there. And this way, everything is in terms of little phi, the flux. We have our first term here. I think I'll have a labeling color. That'll make things a little easier to understand. Which is due to regular old fission. In this case, we have nu, the number of neutrons created per fission, times chi, the sort of fission birth spectrum, or at what energy the neutrons are born. Over 4 pi would account for all different angles in which they could go out, times the integral over our whole control volume, and all other energies and angles. If you remember now, we're trying to track the number of neutrons in some small energy group, E, traveling in some small direction, omega. And those have little vector things on it at some specific position as a function of time. So in order to figure out how many neutrons are entering our group from fission, we need to know, what are all the fissions happening in all the other groups? I've also escalated this problem a little bit to not assume that the reactor's homogeneous. So I've added an r, or a spatial dependence for every cross-section here, which means that as you move through the reactor, you might encounter different materials.
You almost certainly will, unless your reactor has been in a blender. So except for that case, you would actually have different cross sections in different parts of the reactor. So all of a sudden, this is starting to get awfully interesting, or messy depending on what you want to think about it. There is the external source, which is actually a real phenomenon, because reactors do stick in those californium kickstarter sources. So for some amount of time, there is an external source of neutrons, giving them out with some positional energy angle and time dependence. So let's call this the kickstarter source. There's this term right here, the (n, in) reactions. So these are other reactions where it's absorb a neutron, and give off anywhere between 2 and 4 neutrons. Beyond that, it's just not energetically possible in a fission reactor. But don't undergo fission. They have their own cross sections, their own birth spectrum. And I've stuck in something right here, if we have summing over all possible i, where you have this reaction be an (n, in) reaction, where 1 neutron goes in, and i neutrons come out. You've got to multiply by the number of neutrons per reaction. For fission, that was nu. For an (n, in) reaction, that's just i. But otherwise, the term looks the same. You have your multiplier, your birth spectrum, your 4 pi, your integral over stuff, your unique cross-section, and the flux. And these two together give you a reaction rate. I've just written all of the differentials as d stuff, because it takes a lot of time to write those over and over again. And then we have our photofission term, where gamma rays of sufficiently high energy can also induce fission external to the neutrons. The term looks exactly the same. There's going to be some nu for photofission, some birth spectrum for photofission, some cross section for photofission, and the same flux that we're using everywhere else.
Then we had what's called the in scattering term, where neutrons can undergo scattering, lose some energy, and enter our group from somewhere else. That's why we have those E and omega primes, because it's some other energy group. And we have to account for all of those energy groups. That's why we have this integral there. And it looks very much the same. There's a scattering cross-section. And that should actually be an E prime right there. Make sure I'm not missing any more of those inside the integral. That's all good. That's a prime, good. There's also a flux. And then there was this probability function that a given neutron starting off at energy E prime omega prime, ends up scattering into our energy, E and omega. So this would be the other one. And this would be our group. But otherwise, the term looks very much the same. And that takes care of all the possible gains of neutrons into our group. The losses are a fair bit simpler. There is reaction of absolutely any kind. Let's say this would be the total cross-section, which says that if a neutron undergoes any reaction at all, it's going to lose energy and go out of our energy group dE. Notice here that these are all-- these energies and omegas are all our group, because we only care about how many neutrons in our group undergo a reaction and leave. And the form is very simple-- integrate over volume, energy, and direction, times a cross-section, times a flux, just like all the other ones. Then the only difference in one right here is what we'll call leakage. These are neutrons moving out of whatever control surface that we're looking at. And this can be some arbitrarily complex control surface in 3D. I don't really know how to draw a blob in 3D. But at every point on that blob, there's going to be a normal vector. And you can then take the current of neutrons traveling out that normal vector, and figure out how much of that is actually leaving our surface dS.
The one problem we had is that everything here is in terms of volume, volume, volume, surface. So we don't have all the same terms, because once we have everything in the same variables we can start to make some pretty crazy simplifications. The last thing we did is we invoked the divergence theorem, that says that the surface integral of some variable F dS is the same as the volume integral of the divergence of that variable dV. So I remember there was some snickering last time, because you probably haven't seen this since, was it 18.01 or 18.02? 18.02, OK, that makes sense, because divergence usually has more than one variable associated with it. I'll include the dot, because that's what makes that divergence. So we can then rewrite this term. Let's start our simplification colors. That's our divergence theorem. So let's get rid of it in this form, and call it minus integral over all that stuff. Then we'll have del dot, little j, to be careful, r, E, omega, t, d stuff. So for every step, I'm going to use a different color so you can see which simplification led to how much crossing stuff out. And so like I said, this board's going to look like a rainbow explosion. But then we'll rewrite it at the end. And it's going to look a whole lot simpler. So now, let's start making some simplifications. Let's say you're an actual reactor designer, and all you care about is how many neutrons are here. Of the variables here, which one do you think we care the least about? Angle, I mean do we really care which direction the neutrons are going? No, we pretty much care, where are they? And are they causing fission or getting absorbed? So let's start our simplification board. And in blue, we'll neglect angle. This is where it starts to get fun. So in this case, we'll just perform the omega integral over all angles. We just neglect angle here. We forget the omega integral, forget omega there. Away goes the 4 pi, because we've integrated over all 4 pi steradians, or all solid angle.
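The divergence theorem invoked above can be spot-checked numerically on a simple vector field. This sketch (not from the lecture) uses F = (x^2, y^2, z^2) on the unit cube, where div F = 2x + 2y + 2z, and compares a midpoint-rule volume integral against the exact surface flux:

```python
# Numerical check of the divergence theorem on the unit cube [0,1]^3 with
# F = (x^2, y^2, z^2), so div F = 2x + 2y + 2z.
n = 50
h = 1.0 / n

# Midpoint rule for the volume integral of div F over the cube.
volume_integral = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        for k in range(n):
            z = (k + 0.5) * h
            volume_integral += 2.0 * (x + y + z) * h ** 3

# Outward surface flux: F . n equals x^2 = 1 on the face x = 1 (area 1),
# 0 on the face x = 0, and likewise for y and z, so the total is exactly 3.
surface_integral = 3.0
assert abs(volume_integral - surface_integral) < 1e-6
```

The two sides agree, which is the same bookkeeping that lets the leakage term trade a surface integral of J dot n for a volume integral of del dot J and put every term of the transport equation under one volume integral.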
Let's just keep going. Forget the 4pi, forget the omega, forget the omega, forget the 4pi and the omega, and the omega. Same thing here-- forget the omega in the scattering kernel, forget it in the flux, forget it there, forget it there, and there as well. OK, we've now completely eliminated one variable. And all we had to do is ditch the 4pi and one of the integrals. What next? We're tracking right now every possible position, every possible energy, at every possible time. If you want to know, what is your flux going to be in the reactor at steady state, what variable do you attack next? Time. So let's just say this reactor is at steady state. That's going to invoke a few things. For one, it's going to ditch the entire steady state term. We're going to get rid of all the ts in all the fluxes. This shouldn't take too long to do. I think that's all of them. And the third thing is if this reactor is at steady state, chances are we've taken our kickstarter source out, because we just needed it to get it going. But the reactor should be self-sustaining once it's at steady state. So let's just get rid of our source term. I just want to make sure I didn't miss any here. OK, next up, let's go with green. What else do you think we can simplify about this problem? Well, if you look far enough away from the reactor, we can make an assumption that the reactor is roughly homogeneous. In some cases, it's not so good of an assumption, like very close to anything that has a huge absorption cross-section. Now, I want to explain the physics behind this. If the neutrons travel a very long distance through any group of materials, then those materials will appear to be roughly homogeneous to the neutrons. If, however, the neutrons travel through something that's very different from the materials around it, then that homogeneous assumption breaks down. So in what locations in a nuclear reactor do you think you cannot treat the system as homogeneous? 
Where do the properties of materials suddenly change by a huge amount? Yeah, Luke? AUDIENCE: Control rods. MICHAEL SHORT: Control rods, right, so let's say it's bad for control rods. Where else? How about the fuel? All of a sudden, you're moving from a bunch of structural materials where sigma fission equals 0, to the fuel where sigma fission, like you saw on the test, can be like 500 barns, which even though it's got a very small exponent in front of it, 10 to the minus 22 centimeters squared, is still pretty significant. So this assumption breaks down around the control rods and around the fuel. But we can get around this. Let's analyze the simplest, craziest possible reactor, which would be a molten salt fueled reactor. It's just a blob of 700 Celsius goo that's got its fuel, coolant, and control rods all built in. So let's assume that the reactor is homogeneous, which is a pretty good assumption for molten salt fueled reactors, because the fuel's dissolved in the coolant. And it builds up its own fission product poisons, so it's got some of its own control rods kind of built in. Usually, we'll have other extra ones too, but whatever. Then we can start to really simplify things. Even with the homogeneity assumption, we cannot necessarily get rid of the r in the flux, because even if the reactor's homogeneous it still might have boundaries. So you might be able to approximate it as just a cylinder or a slab of uniform materials. But if we were to get rid of the r's in the flux term, that would mean that as we graph flux as a function of distance, it would look like that, including infinitely far away from the reactor. Now, is that true? Absolutely not, so I don't want to leave that up for anyone. We'll fill in what these graphs look like a little later, just leave them there for now. We can get rid of some of the other r's though, like these cross sections.
If the reactor is actually homogeneous, then the cross section is the same everywhere because the materials are the same everywhere. So we can get rid of the r's here, the r's here, and there, and there. And that's it, I think. I don't think I missed any, good. Next up-- if this reactor is homogeneous, then does it really matter at which location we're taking this balance? Does it really matter which little volume element we're looking at? We say these equations are-- we'll call them volume identical, which means if this same equation is satisfied at any point in the reactor, we don't need to do the volume integral over the whole reactor. It's not like it's going to change anywhere we go. So forget the volume integrals. Hopefully, you guys see where I'm going with this. And I've never tried teaching it like this rainbow explosion before. But I'm kind of excited to see how it turns out. So already like 2/3 of the stuff that we had written is gone. What's the only variable left that we can go after? What's the only color left that I haven't really used? Energy, so we can make a couple of different assumptions. This equation as it is, is not yet really analytically solvable, because a lot of these energy dependent terms don't have analytical solutions, or even forms, like the cross sections. But we can start attacking energy. Hopefully, this is different enough from white. Yeah, is that big enough difference for you guys to see? Good, OK, we can start doing this in a few different ways. I want to mention what they are. And then we're going to do the easiest one. So the way it's done for real, like in the computational reactor physics group, is you can discretize the energy into a bunch of little energy groups. So you can write this equation for every little energy group, and assume that along this energy scale, ranging from your maximum energy to probably thermal energy-- 0.025 eV, let's do this clearly with thick chalk. There we go.
You can then discretize into some little energy group. Let's say that's E sub gi, that's E sub gi plus 1, and so on, and so on. And depending on the type of reactor that you're looking at, and the energy resolution that you need, you choose the number of energy groups accordingly. Does anyone happen to know for a light water reactor, how many energy groups do you think we need to model a light water reactor? The answer might surprise you. It's just two, actually. All we care about-- so let's say this would be for the general case. All we care about for a light water reactor is, are your neutrons thermal? Or are they not? Because the neutrons that are not thermal are not contributing to fission that much. They are just a little bit. And you can account for those. But pretty much, they're not. Once the neutrons slow down to thermal, in the range from, let's say, about an eV to that temperature-- took a surprising amount of time to write with sidewalk chalk-- then you've got things that are about 500 or 1,000 times more likely to undergo fission. And so all you care about is the neutrons are all born. They're all born right about here. And they scatter, and bounce around. And you don't care, because they're just in this not thermal region. And when they enter the thermal region, you start tracking them, because those are the ones that really count for fission. And if you actually look up the specifications for the AP1000, this is a modern reactor under construction in many different places in the world. When you see, how do they do the neutron analysis? Two group approximation. So this isn't just an academic exercise to make it easier for sophomores to understand. This is actually something that's done for real reactors. So if you ever felt like I'm making it too simple, no, no, no, I'm simplifying it down to what's really done. And I will get you that specification so you can see what Westinghouse says, like, this is how we design the reactor.
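The group structure just described can be sketched in a few lines of Python. The logarithmic spacing and the 1 eV thermal cutoff below are assumptions for illustration; real group structures are chosen per reactor:

```python
import math

# Sketch of the energy-group bookkeeping described above (names illustrative).
# Multigroup: split [E_min, E_max] into G groups, equally spaced in log(E),
# since cross sections vary over many decades of energy.
def group_boundaries(e_min_eV, e_max_eV, n_groups):
    """Return n_groups + 1 group edges from e_min_eV up to e_max_eV."""
    lo, hi = math.log10(e_min_eV), math.log10(e_max_eV)
    step = (hi - lo) / n_groups
    return [10 ** (lo + g * step) for g in range(n_groups + 1)]

# Two-group special case for a light water reactor: all we ask is whether
# a neutron is thermal or not. The 1 eV cutoff is an assumed round number.
THERMAL_CUTOFF_EV = 1.0

def group_of(energy_eV):
    return "thermal" if energy_eV < THERMAL_CUTOFF_EV else "fast"

# Ten groups from thermal (0.025 eV) up to 2 MeV:
edges = group_boundaries(0.025, 2.0e6, 10)
```

For the two-group light water reactor case, `group_of` is the entire classification: neutrons are born fast and only get tracked carefully once they cross into the thermal group.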
We made a two group simplification in many cases. So you can discretize. You can forget it, which we're going to call the one group approximation. Or you can-- let's say two group is the other one that we're actually going to tackle. We're going to do this one, forget energy. But we're not really going to forget energy, because you can't just pick an energy, and pick a cross-section, and say, OK, that's the cross section we're going to use. If most cross sections have the following form-- if this is log of energy, and this is log of sigma, and it goes something like that, what energy do you pick? Go ahead. Tell me. Which energy do you pick? Anyone want to wager a guess? AUDIENCE: The ones before or after the big squigglies. MICHAEL SHORT: The ones before or after the big squiggles. I don't think that's correct, because if you do it here, then you're going to way underestimate fission. If you do it here, you're going to way overestimate fission, or whatever other reaction you have. We didn't say which reaction this is. The rest of you who are silent and afraid to speak up, you're actually correct. I wouldn't actually pick any single value here. What you need to do is find some sort of average cross-section for whatever reaction that accurately represents the number of reactions happening in the system. And in order to do that, you have to come up with some average cross-section for whatever reaction you have by integrating over your whole energy range of the energy dependent cross-section as a function of energy, times your flux dE, over-- does this look familiar from 18.01 or 2 as well, what's the average value of some function? Little bit? Well, we'll bring it back here now. So retrieve it from cold storage in your memories, because this is how actual cross sections are averaged. For whatever energy range you're picking-- I'm going to make this a little more general. I won't say zero. I'll just say your minimum energy for your group.
So now, this equation is general for the multi group and the one group and two group method. For whatever cross-section you want to pick and whatever energy range you're looking at, you take the actual data and perform an average for the fast and thermal delineation, where, let's say this is fast and this is thermal, you would have two different averages. Maybe this average would be right there. You know what? Let's use white so it actually has some contrast. So this would be one value of the cross-section. And maybe the next average would be right there. So you simplify this absolutely non-analytical form of your complicated cross-section to just a couple of values. Maybe we'll call that average sigma fast. And we'll call that average sigma thermal. So using this analogy and this color, we can then say, we're going to take an average nu, an average chi, get rid of the energies, because we can perform the same energy average integration on every quantity with energy dependence. So all we do is we put a bar there, ditch the energies, ditch the energies. And let's just say that flux is going to be what it is. Same thing here-- yeah, same thing there, and there, and there, there, there, there, here, and here, and here. And there is a cross-section. There is an energy. There is an energy. There is a cross-section. We don't care about those anymore. And there's a couple of other implications of this energy simplification. What is the birth spectrum now? What's the probability that a neutron is born in our energy group which contains all energies? 1, OK, so forget the chi, and that one, and that one. And what about this scattering kernel? What's the probability that a neutron scatters from any other energy which is already in our group into our group, which contains all energies? AUDIENCE: 1.
MICHAEL SHORT: Yeah, scattering no longer matters when you do the one group approximation, because if the neutron loses some of its energy, it's still in our energy group, because our energy group contains all energies. So forget the scattering kernel. And forget the energy integrals. What are we actually left with? Not much. There's no green in here yet. Good, because I need to do one more thing. There is no more green. Oh, we did green. We did time. OK, green, red, orange-- is this the orange I used? Dammit. OK, we use those. Purple, no, we've used it. Oh my God. We've used both blues. Bright yellow. Yeah? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yes. Chi is the fission birth spectrum, the probability that a neutron is born at any given energy. But because all neutrons are born in our energy group, which contains all energies, then that just becomes one and goes away. There's no birth spectrum, because they're just born in our group. Does that make sense? OK, I think I found the actual only color, besides black on a black chalkboard, and white which we already have, that I have left. We also have a slightly darker shade of gray. But I'm literally out. This worked out awesome, because there's one more thing that we want to deal with. What do we even have left? All right, what is the one term that is not in all of the same variables as the others? That current, that j. What do we do about that? So we're going-- sorry? AUDIENCE: The F, E prime to E. MICHAEL SHORT: The F of E prime to E-- so, actually, I'll recreate some of our variables here, because there's a lot of them. So our F of E prime to E is what's called the scattering kernel. And that's the probability that a neutron scatters from some other energy group, E prime, into ours at E, about dE. And chi of E is the fission birth spectrum. And just for completeness, nu of E is our neutron multiplier, or neutrons per fission. And I think that gives a pretty complete explanation of what's up here.
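One way to see why chi drops out in the one-group collapse: chi(E) is a probability density, so integrated over all energies it gives exactly 1, and the single group contains all energies. A common Watt-spectrum fit for the U-235 thermal-fission birth spectrum (the constants are standard tabulated values; treat them as assumed here) makes this concrete:

```python
import math

# A common Watt-spectrum fit for the U-235 thermal-fission birth spectrum,
# with E in MeV (constants as tabulated in standard references):
#   chi(E) = 0.453 * exp(-1.036 E) * sinh(sqrt(2.29 E))
def chi(E):
    return 0.453 * math.exp(-1.036 * E) * math.sinh(math.sqrt(2.29 * E))

def trapz(f, a, b, n=50000):
    """Plain trapezoidal rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

# chi is a probability density: over ALL energies it integrates to 1,
# which is exactly why it disappears in the one-group collapse.
total = trapz(chi, 0.0, 20.0)
mean_energy = trapz(lambda E: E * chi(E), 0.0, 20.0) / total
```

The integral comes out to 1 (to within the fit's accuracy) and the mean birth energy to about 2 MeV, matching the earlier statement that neutrons are all born fast.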
So now, let's figure out how to deal with the current term. This is when we make one of the biggest approximations here, and go from what's called the neutron transport equation, which is a fully accurate physical model of what's really going on, to the neutron diffusion equation. And this is where it gets really fun. You don't assume that neutrons are subatomic particles that are whizzing about and knocking off of everything else. You then treat the neutrons kind of like a gas, or like a chemical. And you just say that it follows the laws of diffusion. Again, this works out very well, except for places where cross sections suddenly change, like near control rods or near fuel. But for most of the reactor, especially if we have a molten salt fuel reactor, we can invoke what's called Fick's law. Does this sound familiar to anyone? Fick's law diffusion, 3.091 or 5.111. It's the movement of a chemical down a density or a concentration gradient. So, yeah, you've got the idea. What Fick's law says is that the current-- or let's say the diffusion current or the neutron current-- is going to be equal to minus some diffusion coefficient, times the gradient of whatever chemical concentration you've got. Let me put the c in there. So right here, this would be the current. I'll label it in a different color. This would be your variable of interest. Maybe c is for concentration, or phi is for flux. Oh, that reminds me, where are those bars on our flux? Which term did we do? Energy-- where's my slightly lighter blue over here? All of these phi's become capital, because we've gotten rid of all the angular, and energy, and everything dependence. Oh, angular dependence-- neglect omega, that should be dark blue. Omega goes away. And the fluxes become capital. So many terms to keep track of. Luckily, you will never have to. And then the j becomes a capital J. Did I miss any phi's here? No, because that one was already gone, cool.
All right, so we can use Fick's law, and transform the current into something related to flux. And what we're saying here is that we're getting rid of the true physics, which is that there's some fixed neutron current. And we're saying that neutrons behave kind of like a gas, or a chemical in solution. And so in yellow, we can ditch our current related term, and rewrite it. We don't have any integrals left, as negative del squared phi. I think the only variable left is r, not too bad. Now, we have a second order linear differential equation describing the flow of neutrons in the system. And so we actually have something that we can solve for flux. I think it's time to rewrite it. Wouldn't you say? This has been fun. So let's rewrite what's left. Make sure you guys can actually see everything there. We'll write it in boring old white. So we have no transient dependence. We have left sigma fission, times flux, as a function of r. No source, and we have our neutron (n,in) reactions of-- oh, we forgot our nu, sigma fission. Then we have our i, sigma (n,in), times flux. Next term, we have photofission. So we have a nu from gamma rays, times sigma fission from gamma rays, times flux. Next up, we have-- well, last simplification to make. We have scattering. And we have total cross-section. When we said, forget about energy, and our scattering kernel becomes one-- and that's light blue-- got to make one more modification to this board. Do we care about scattering at all anymore whatsoever? Because scattering doesn't change the number of neutrons left. So we can then take these two terms and just call it sigma absorption, times flux, because if we take scattering, minus the total cross-section, it's like saying, all that's left if you don't scatter is you absorb. And if you remember, I'll add to the energy pile, we said that our total cross section is scattering, plus absorption. And absorption could be fission and capture.
And capture could be-- let's say, capture where nothing happens, plus these (n,in) reactions, plus any other capture reaction that does something. So we're going to use this cross-section identity right here with a couple of minus signs on it. And say, well, scattering minus total, leaves you with negative absorption, to simplify terms. I'll leave that up there for everyone to see. So then we have scattering and total just becomes minus sigma absorption, times phi of r. And we're left with-- what was that? Current, that becomes plus. There is a d missing in there, isn't there? A yellow d, minus d. OK, and that's it. Yes? AUDIENCE: What is d? MICHAEL SHORT: d is the diffusion coefficient right here. So we're assuming that neutrons diffuse like a gas or a chemical with some diffusion coefficient. And so we'll define what that is, oh, probably next class, because we have seven minutes. Yeah, Luke? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Uh-huh. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: The c right here, that's whatever variable we're tracking. So let's call that flux. Or let's call it n, the number of neutrons, because flux is just number of neutrons times velocity. So let's say that the concentration was the concentration of neutrons. And we just multiply by their velocity to get flux. So it's almost like we can say that the concentration of neutrons is directly related to the flux. And that way, we have everything in flux. And that's the entire neutron diffusion equation. Yeah, this is for one group with all the assumptions we made right here, homogeneous. What other assumptions did we make? Steady state, and we already neglected that. And I think that's enough qualifiers for this. But it's directly from this equation right here that we can develop what's called our criticality condition. Under what conditions is the reactor critical?
So in this case, by critical, we're going to have some variable called k effective, which defines the number of neutrons produced over the number of neutrons consumed. And if k effective equals 1, then we say that the reactor is critical. That means that exactly the number of neutrons produced by regular fission, (n,in) reactions, and photofission equals exactly the number of neutrons absorbed in anything, and that leak out. So let's relabel our terms in the same font that we did here. So this would be the fission term. This would be (n,in) reactions. This would be photofission. This would be absorption. This would be leakage. How many neutrons get out of our finite boundary? And if you remember when we started out, we said we were going to make the neutron balance equation equal to gains minus losses. And through our rainbow explosion simplification, we've done exactly that. These are your gains. These are your losses. When gains minus losses equals zero, the reactor's in perfect balance. Yep? AUDIENCE: How does leakage come out to be negative? MICHAEL SHORT: Leakage comes out to be negative, despite the plus sign here. And that's actually intentional. That's because neutrons travel down the concentration gradient. So let's say we're going to draw an imaginary flux profile that's not going to be quite correct. And I'm drawing all of those features for a reason. But let's look at the concentration gradient right here. Leakage is positive when your flux gradient is negative. That's why the sign is flipped right there. So a positive diffusion term means you have neutrons leaking out down a negative concentration gradient, because if you look at the slope here, the change in x is positive. And the change in flux is negative. So the slope is negative. Concentration gradient is negative. That's why the sign is the opposite of what you may expect. And the same thing goes for chemical, or gaseous, or any other kind of diffusion.
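The sign argument just given can be checked numerically with Fick's law, J = -D dphi/dx, on a toy cosine flux profile (D and the slab width here are made-up numbers):

```python
import math

# Fick's law sign check: J = -D * dphi/dx. Toy cosine flux in a slab
# from -a/2 to a/2, peaked at the center (D and a are made-up numbers).
D = 1.0
a = 2.0

def phi(x):
    return math.cos(math.pi * x / a)

def current(x, h=1e-6):
    """Central-difference estimate of J = -D * dphi/dx."""
    return -D * (phi(x + h) - phi(x - h)) / (2 * h)

j_right = current(0.9)    # near the right boundary
j_left = current(-0.9)    # near the left boundary
j_center = current(0.0)   # at the peak of the flux
```

Near the right edge the slope of the flux is negative, so J comes out positive: neutrons flow outward, down the gradient, exactly as described. Near the left edge J is negative, which is again outward (in the minus-x direction), and at the peak there is no net current.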
I'm glad you asked, because that's always a point of confusion: why is there that plus sign? That's intentional. And that's correct. Cool. Yeah, Shawn? AUDIENCE: So in that case, if you were to explicitly write out losses, would it be minus absorption, plus leakage? MICHAEL SHORT: Let's put some parentheses on here, equals zero, and a minus. And when we say plus leakage, we have that plus sign in there. So I'm not going to put any parentheses up here, because that wouldn't be correct. But what I can say is that gains minus losses have to be in perfect balance to have a k effective equal to 1. Does anyone else have any questions, before I continue the explanation? Cool. Let's say you're producing more neutrons than you're destroying. That's what we call supercritical. So I just did an interview for this K through 12 outreach program. And they said, should people be afraid when something, quote unquote, goes critical? Sounds scary emotionally, right? And the answer is absolutely not. If your reactor goes critical, it's turned on. And it's in perfect balance. That's exactly what you want. So going critical is not a scary thing. It means we have control. If something goes supercritical, it doesn't necessarily mean it's out of control. Reactors can be very slightly supercritical and still in control, because of what's called delayed neutrons, which I will not introduce today, because we have two minutes. If a reactor has a k effective of less than one, we call that subcritical. So it's important to note that the nuclear terminology that's kind of leaked out into our vernacular is not physically correct, in the way that it's used. Words like critical are used to incite emotions, and bring about fear. When to a nuclear engineer, critical means in perfect control, in balance, like you would expect, or in equilibrium. That all sounds kind of nice, makes you calm down a little bit. Yeah, so we can put one last term in front of our criticality condition.
We can take either the gains or the losses, move the equal sign and zero over a little bit, and put a 1 over k effective here. This, then, perfectly describes the difference between the gains and the losses in a reactor. So if the gains equal the losses, then k effective must equal 1. And the reactor has got to be in balance. If there are more gains than losses, which means if you are producing more neutrons than you're consuming, then k effective must be greater than 1 for this equation to still equal zero, because this equation must be satisfied. So if you're making more neutrons, your k effective has got to be greater than 1. So you have a less than 1 multiplier in front. And on the opposite side, if you're losing more neutrons than you're gaining, your k effective has to be less than 1 to make this equation balanced. Going along with all these definitions right here. So it's exactly five of. I've delivered the promised blackboard of Lucky Charms. And we've hit a perfect spot, which is the one group homogeneous steady state neutron diffusion equation, from which we can develop our criticality conditions and solve this much simpler equation to get the flux profiles that I've started to draw here. So I want to stop here, and take any questions on any of the terms you see here. Yeah? AUDIENCE: Didn't we talk a lot about the different energies, like the one group, two group, or the discretized energy groups? So when we're doing the one group, you're actually just treating thermal and fast together? MICHAEL SHORT: We are. That's right. AUDIENCE: But you said the reactors do two group in the actual analysis. MICHAEL SHORT: So for a lot of reactors, at least thermal reactors where you only care if neutrons are thermal or not, two group is enough. When you have a one group or a two group equation, these are fairly analytically solvable things. You get to any more groups than that, and, yes, they're analytically solvable.
But it gets horrible. And that's why we have computers to do the sorts of repetitive calculations over and over again. Once we've solved the one group equation, I'll then show you intuitive ways to write, but not solve, the multi group equations. Sure. Any other questions? So like I promised, we didn't stay complex for long, because there's basically nothing left. Yeah? AUDIENCE: What is the dn over dt term? What are we saying there? MICHAEL SHORT: Oh, that's a partial derivative. Yeah, there we go. So this is saying a change in neutron population, or the partial derivative of n with respect to t, because n varies with space, energy, angle, time, and anything else you could possibly think about, equals the gains minus the losses. I think this is worthy of a t-shirt. If any of you guys would like to update the department shirts to properly take into account photofission, external sources, and (n,in) reactions, I think it would make for a much more impressive thing, because we kind of printed an oversimplification before. It's too bad. We definitely had room on the shirt. There was room on the sides, and on the sleeves. Yeah, keep going. It might have to be long sleeve. I think that would be pretty sweet. Yeah, OK, if no one else has any immediate questions, you'll have plenty of time tomorrow, because the whole goal tomorrow is going to be to solve this equation. That's only going to take like 20 minutes. So we can do a quick review of the simplification of the neutron transport equation, solve the neutron diffusion equation. If you have questions, we'll spend time to answer them there. And if you don't, we'll move on to writing multi group equations. And also Friday for recitation, it's electron microscope time. So now that you guys have learned different electron interactions with matter, you're going to see them. So we're going to be analyzing a couple of different pieces of materials that a couple of you are going to get to select.
And we're going to image them with electrons to show how you can beat the wavelength of light imaging limit, like I told you before. We're going to produce our own X-ray spectra to analyze them elementally, where you'll see the bremsstrahlung. You'll see the characteristic peaks. And you'll see a couple of other features that I'll explain too. So get ready for some SEM tomorrow.
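Pulling the criticality condition from this lecture together: for a bare homogeneous slab (dropping the (n,in) and photofission gains for simplicity), the one-group diffusion equation with flux phi ~ cos(B x) gives geometric buckling B = pi over the slab width, and the condition becomes k_eff = nu Sigma_f / (Sigma_a + D B^2), production over absorption plus leakage. A sketch with made-up one-group constants, not real nuclear data:

```python
import math

# One-group, homogeneous, bare-slab criticality sketch. From
#   -D * phi'' + Sigma_a * phi = (1/k) * nu * Sigma_f * phi
# with phi ~ cos(B x) and B = pi / width for a slab:
#   k_eff = nu * Sigma_f / (Sigma_a + D * B**2)
#         = production / (absorption + leakage)
def k_effective(nu, sigma_f, sigma_a, D, width_cm):
    buckling_sq = (math.pi / width_cm) ** 2
    return nu * sigma_f / (sigma_a + D * buckling_sq)

# Made-up one-group constants (macroscopic cross sections in 1/cm),
# chosen so the infinite-medium k is exactly nu * Sigma_f / Sigma_a = 1:
nu, sigma_f, sigma_a, D = 2.4, 0.05, 0.12, 0.9

k_small = k_effective(nu, sigma_f, sigma_a, D, width_cm=50.0)
k_big = k_effective(nu, sigma_f, sigma_a, D, width_cm=500.0)
```

Growing the core shrinks the leakage term D B^2, so k_eff climbs toward the infinite-medium value nu Sigma_f / Sigma_a: a small core is subcritical purely from leakage, while a big one of the same material approaches critical.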
MIT 22.01 Introduction to Nuclear Engineering and Ionizing Radiation, Fall 2016. Lecture 31: Frontiers in Nuclear Medicine; Where One Finds Ionizing Radiation (Background and Other Sources).

ANNOUNCER: The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: Hey guys. So quick announcement, we're not doing nuclear activation analysis today, because the valve that shoots the rabbits into the reactor broke and needs to be repaired. So we'll likely just do this next Friday. And instead, we'll have the whole recitation for exam review, like I think we'd originally planned. I also want to say thanks to whoever said, please write lecture notes for this class. It's something I think needs to be done. And I just now biked back from meeting with a publisher, or a potential publisher, to actually get this done. Because as I think you guys have seen, there's no one reading that does this course justice. There are some that are too easy, there are some that are too hard. And there are giant pieces missing, like most of what we're going to go into today-- sources of background radiation. And where do cosmic rays come from, and what are they? So thanks to whoever said that, because it spurred me off on a let's make this into a book kind of thing. Want to quickly review what we did last time. We went over all the different units of dose and radiation exposure, from the roentgen-- which is pretty much just valid for dose measurements in air. And I realized that yesterday I brought in the civil defense dosimeter and passed it to one person, and we didn't continue passing it around. So I'll bring it to recitation. It's not in my bag. There is the unit of the gray, which is just joules per kilogram.
Which you can calculate from stopping power or exponential attenuation equations. This is where you start, because it's how you get from the world of physics to the world of biology. Into units of increased risk, or sieverts, which is just gray times a bunch of quality factors, which are either tabulated, or which I'd like to see somewhat better calculated. They are calculated sometimes, but by empirical relations, because that's usually good enough. Biomeasurements tend to be pretty sloppy, and I'm not that upset that these are empirical relations. I'm going to skip way ahead past the detector stuff. We didn't quite finish up the IF2D idea. I think this is where we ended last time, where we were talking about how do you detect dose during cancer treatment. And I was outlining one proposed way that we're thinking of. Using this F-center based dosimeter, which changes color when it gets irradiated, you can implant it in the tumor. And as it moves from things like breathing or swallowing, you could feed back to the proton beam, and only irradiate when it's in range. Or play nuclear Operation, and try not to hit the sides. However, whatever you want to do. And there's multiple implantation options for this. We've thought about things like implanting a fiber optic cable into the tumor, and then having a port on the side of you that could do some in-body spectrometry, which would be pretty cool. Could also put it all in a chip. You could have the emitter-- either a broadband or single-color LED-- the F-center, and the spectrometer all in a chip that's implanted. And with radio frequency power transfer, so you don't need to put a fiber optic port and plug it into the side of you. Or however it goes. And what we're doing next in this development is nailing down what color change is given by what dose-- so the physics. Develop an on-chip version. Find a bio-compatible casing. We've done the IP part, which is pretty cool.
As you can see, the patent office at least has taken in the application. But it's neat that you need to know pretty much everything you learned at MIT to pull off a project like this. From the nuclear physics stuff, to the material science, to the 22.071 electronics, to the medical stuff for biology, to the financial stuff for econ. In order to pull off an actual nuclear start-up project like this, you need everything you learn here. Which is kind of a neat case study. But now let's get back into what does a sievert really mean in terms of increased risk? Usually it means the increased risk of some sort of long-term biological effect, whether it's cancer or some other genetic effects-- let's say mutations-- anything that would take a while to manifest, and would manifest by slow but steady cell division. And if you notice the difference between adults and whole population, let's see-- yeah, sorry. You can see right here that these-- not very much dose. Let's say doses in the realm of 10 to the minus 2 sieverts can give you some increased risk for cancer or some other effect. So we're talking in the realm of 40 millisieverts or so would give you some additional cancer risk. That's not a lot of dose. That's up to about the limit of the occupational dose that you're allowed. So when we talk about how much is too much, I've taken some excerpts from this Committee on Radiation Protection document. This is from Turner, but the entire document, as I mentioned, is up on the learning module site. So you guys can see the actual verbiage where this is defined. So your lifetime dose in tens of millisieverts should never exceed your age in years. Which means if you get a little bit less dose in some years, you can get a little bit more in others and still be considered safe, or not have any appreciable increased risk. And while you're working, you should never get more than about 50 millisieverts in a given year. How is it for radiation workers?
So for you guys, what dose are you allowed per year working at the reactor? AUDIENCE: About five rem per year. MICHAEL SHORT: Five rem per year, which is 50 millisieverts. Awesome, so it's-- AUDIENCE: This is the breakdown to your eyes and to specific organs. We mostly care about the [INAUDIBLE].. MICHAEL SHORT: Oh sure, yeah. To jump back to that, if you're saying that there's actually tabulated differences for different organs, that's where they come from. Let's say you can take less radiation to the same organ and get the same dose in sieverts measured in equivalent risk. So I wouldn't be surprised if these are the organs that you're not allowed to irradiate as much. AUDIENCE: The eye is the only one they have restrictions on. MICHAEL SHORT: I'm surprised the eye isn't here. Does it say retina anywhere? Oh, OK. Well that's just one table. It's not necessarily the complete answer. Yeah, so 50 millisieverts isn't that much. Although if you think of it in the old xkcd units of banana equivalent dose, eating a banana gives you about 0.1 microsieverts. So you would have to eat 50,000 bananas in a year-- no, I'm sorry, 500,000 bananas in order to incur that. Well yeah, I was talking to someone at dinner last night about the banana burning experiment where we measured the activity in becquerels. And then we can calculate how much dose in sieverts it would take. And he said, how many bananas would it take to get some increased cancer risk? About double this. It would take about a million bananas to give you about 100 millisieverts. And I said, you know what else it would cost? And he goes, 100 millisieverts. No shit. And I said, that's right. Yeah. Yeah, he totally didn't plan that, but it just worked out that way. Yeah. And then how much is too much? Let's say for the general public for a background for excluding things like natural background and medical exposures, you're not supposed to get about more than one millisievert just walking about outside. 
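The banana arithmetic above works out like this; the 0.1 microsieverts per banana and the 50 millisievert occupational limit are the numbers quoted in lecture.

```python
# The occupational dose limit expressed in xkcd-style banana equivalent
# dose, using the lecture's figure of ~0.1 microsieverts per banana.
BANANA_SV = 0.1e-6          # sieverts per banana eaten
ANNUAL_LIMIT_SV = 50e-3     # occupational limit, sieverts per year

bananas_per_year = ANNUAL_LIMIT_SV / BANANA_SV   # ~500,000 bananas
bananas_for_100_mSv = 100e-3 / BANANA_SV         # ~1,000,000 bananas
```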
And you don't tend to get that much more. Why are medical exposures not included, despite them being pretty radioactive procedures? Yeah? AUDIENCE: Because they're very targeted. MICHAEL SHORT: They are targeted, and so they could give a lot of dose to certain organs. But the amount of dose isn't necessarily why we don't count medical procedures. Anyone have any idea? Yeah. AUDIENCE: Was it because usually you wear a lead vest if you're getting an X-ray? MICHAEL SHORT: In some cases, like if you go to the dentist, you'll get a lead apron. But let's say you get a chest X-ray. Why don't we care that a chest X-ray is way more than you'd otherwise get? Because these things tend to save lives. So you're absolutely willing to get extra radiation exposure that may have a delayed effect if the immediate effect is to save your life. You had a question? Or no. OK. Yeah, so we don't count medical things because chances are you're doing them to improve or save your life. So what's a little bit of radiation compared to, let's say, finding the blood clot or the aneurysm or whatever it would take? And then how much is enough? AUDIENCE: Yeah, that's the table. MICHAEL SHORT: This is the table that you're familiar with? Yeah. They actually talk about the lens of the eye. And that's a heftier dose. But also, the lens of the eye is not a very massive organ. So this would mean do not stick remaining eye in neutron beam, right? Or you've seen that sticker, do not stare into a laser with remaining eye. Yeah, the same goes for the neutron beam ports coming out of the reactor. But the lens of the eye can take a fair bit more dose per unit mass than the whole body. The lens of the eye is not a particularly fast developing tissue. It can cloud up with an insane amount of radiation exposure. That would take a lot more than 150 millisieverts to do, though. And then things like 500 millisieverts for skin, hands, feet.
Pretty much just groups of muscle, bone, and dead skin where not much is going on biologically. Blood's flowing through it, but that's about it. And notice that the regulations do differ a little bit, but on the whole, they're fairly similar. Same for the eye, same for the feet, same per year. Cumulative is a little different. This says 10 millisieverts times age. This allows you a little bit more. Whichever recommendations you follow, they're all pretty similar. And our knowledge of how much dose leads to how much risk hasn't changed a ton in the last decade or so. There's been all sorts of arguments for or against it. Has anyone heard of this LNT or Linear No Threshold model of dose versus risk? This is something we'll talk a lot about on the last day of class. This is the theory that the amount of risk versus the amount of dose is linear. And no threshold means that every little bit of dose gives you additional risk. This is not supported very much by science-- I'd say it's not supported by science. The converse argument is also not supported by science. We just don't have the statistics at super low doses to say what happens. But the official recommendation is that there is a unit of dose that we define as nothing, and it's 0.01 millisieverts-- about 100 bananas-- per event, let's say-- yeah, where does it say-- yeah, per source or practice. So eating 100 bananas in one sitting is considered to give you zero additional risk according to the official guidelines. So the guidelines put in place do not follow the linear no threshold model. But anyone that would claim that one or the other model is absolutely correct has either got a huge sample size of people that we don't know about, or is probably extrapolating beyond what the data will tell them. So you'll see this argument flaring up quite a bit.
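The contrast between a pure linear no-threshold model and the official "negligible dose" guideline can be sketched in a few lines. The 5%-per-sievert slope here is only an illustrative assumption (a commonly quoted nominal risk coefficient), not a claim about the true dose-response at low doses.

```python
# Sketch: LNT versus the negligible-individual-dose cutoff quoted above
# (0.01 mSv per source or practice counts as "nothing").
LNT_SLOPE = 0.05            # assumed excess risk per sievert (illustrative)
NEGLIGIBLE_SV = 0.01e-3     # 0.01 mSv, officially defined as zero risk

def lnt_risk(dose_sv):
    """Pure LNT: every increment of dose adds proportional risk."""
    return LNT_SLOPE * dose_sv

def regulatory_risk(dose_sv):
    """With the cutoff: doses at or below the negligible level count as zero."""
    return 0.0 if dose_sv <= NEGLIGIBLE_SV else LNT_SLOPE * dose_sv
```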
For the last day of class, we'll have you read some arguments for and against the linear no threshold model that aren't just blogs on the internet-- they're actual published articles that have passed peer review. And it's really hard to tell exactly what's going on at low doses. But meanwhile, let's focus on what you do get that we can measure. The actual contribution from background levels is about 50% radon. This is a natural decay product of radium. It's just everywhere. It's here right now. It's all through the atmosphere. This is why you want your basement to be rather well ventilated, because it's a heavy noble gas that accumulates down in unventilated basements. So you don't actually want your house to be sealed up super tight, because you can have radon accumulation. Especially if you happen to live near a granite deposit or on granite bedrock-- which everyone in New Hampshire does-- radon levels are a little higher. There's actually a story about a guy that used to work at a nuclear plant somewhere in Pennsylvania. I don't know where, but he lived on top of a pretty good granite deposit that had a few more parts per million of radium. So the radon levels in his house, or in his basement, were much, much higher than would normally be allowed for background. And this guy used to set off the radiation alarms coming into work. Then he would breathe the nice clean radiation-free air in the plant and go out without setting off the alarm. Eventually something had to be done. I don't actually know what was done, but if any of you can find that original story, that would be pretty cool. Cosmic rays, another source that you can't shield from, is about another 10%. And we'll talk a lot about where these things come from. Terrestrial radiation-- we'll count that as stuff in the soil, stuff in the cinder block.
Wood happens to be a fairly radioactive substance pound for pound, but it's not very dense compared to things like brick or-- well, banana ashes are probably about the same as wood. Internal, coming from you. You'll have this on problem set number eight. Because you all now know your internal radioactivity thanks to going to environmental health and safety. Did anyone see anything disconcerting in your spectra, other than a tiny little potassium peak? Anyone? I'd say, ah, it's too bad. But that's great, especially for you guys that work at the reactor. Medical X-rays. It's assumed that everyone gets a couple of these a year. You all go to the dentist and look for cavities. You break something, you may get an X-ray of your hand or foot. And this is, let's say, an average amount of medical X-rays. Then a little bit of stuff left over, consumer products. This isn't counting things like Fiestaware, those orange glazed plates and bowls that were painted with a uranium based neon orange paint. We also don't tend to drink from Revigators anymore. Has anyone ever seen or heard of a Revigator? AUDIENCE: Yeah, was this in the '50s or '60s? MICHAEL SHORT: Or even earlier, yeah. So back in the '20s, people would put radium ore in these containers and pour water in it and say, natural radioactivity gets in. It cures croup or the jimmy legs or astigmatoid rheumatism, or whatever other quack diseases there were in the '20s. You can still find them on eBay, and that's not accounted for in today's consumer products. But this all adds up to the amount of dose that you tend to get on your own. You might get a couple of millisieverts a year of dose just from background, especially depending on where you are. And the big one-- whose spectrum I think you're all familiar with by now-- is that of radon. Because we saw most of these peaks in the bananas. Anyone have any idea why you would find so many radon decay products in bananas?
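A rough version of this background budget can be tabulated directly. The ~50% radon and ~10% cosmic fractions are from lecture; the ~3 mSv/yr total is a typical US figure, and the split of the remainder among terrestrial, internal, medical, and consumer sources is an illustrative assumption.

```python
# Rough annual background dose budget, sketched from the fractions
# quoted in lecture; the remaining split is an illustrative assumption.
TOTAL_MSV = 3.0   # assumed typical annual background, millisieverts
FRACTIONS = {
    "radon": 0.50,
    "cosmic": 0.10,
    "terrestrial": 0.10,
    "internal": 0.10,
    "medical_xrays": 0.15,
    "consumer": 0.05,
}

# Millisieverts per year attributed to each source:
budget_msv = {src: f * TOTAL_MSV for src, f in FRACTIONS.items()}
```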
Given that radon's everywhere, we did notice elevated levels of specifically bismuth-214 and actinium-228, I think, was the isotope we saw. Where would those come from? The what? AUDIENCE: The soil. MICHAEL SHORT: Absolutely, yeah. Whatever radium's coming out from the bedrock, that radon has to come up through the soil. If that happens to decay in the soil, it makes lead, bismuth, polonium, other heavy metals that are taken up by the plant's tissues. In addition, radon daughter products can plate out on the leaves themselves. So this is one of those reasons that smoking is such a bad thing to do. Aside from the chemical effects, you have giant fields of high surface area tobacco that you then concentrate and dry up into these tiny little sticks. You have an enormous amount of leaf surface area and dry vegetation that has taken up all these radon daughter products. So most of the dose you get from smoking is lead, bismuth, polonium, actinium, radium. Alpha emitters. As we saw last time, to remind you guys of the quality factor for alpha particles, it's as big as it gets. Anyone remember why that is? Why are alphas so damaging if they get into your tissues? AUDIENCE: Because they're big. MICHAEL SHORT: They are big, so they have high mass. And? AUDIENCE: Short range. MICHAEL SHORT: Short range, coming from their high relative charge. They have quite high stopping power. And they will deposit a lot of energy very close to where they are emitted. So their range is very small. So the armor piercing bullet analogy becomes an armor piercing bullet that explodes right out of the barrel of the gun. And they do quite a bit of radiation damage. Let me jump back to the right slide without inducing a seizure. I think we've looked before at the radon decay chain. This is a simplified version, because there are different branches or different possibilities for decay, but some of them have extremely low percentages. So this one's simplified quite a bit.
But whenever radon decays, it gives off a bunch of alpha and beta emitters that last anywhere from seconds or minutes to days or so. For every radon atom that you absorb, you end up getting four or five alphas and betas, depending on how long it stays in your system. And then mapping out radon in the US, you get quite different amounts of radon dose depending on where you are. And I wanted to skip ahead and overlay a couple of maps of the US. So this is a terrestrial-- oh, I'm sorry, that's the wrong one. Let's just look at this one, yeah. Anyone notice any patterns here? Where do you tend to get the most radon? What sort of features would one live near when you get a lot of radon dose? AUDIENCE: Mountains. MICHAEL SHORT: Mountains, which tend to be made of? AUDIENCE: Rocks. MICHAEL SHORT: Rocks, which tend to contain a lot of radium. Especially granite and other such rocks. The Conway granite, named after Conway, New Hampshire, is about 52 parts per million uranium or radium. So it's a fairly toasty rock. You can actually tell with your Geiger counter, if you count long enough, that there is a little bit of radioactive ore in that Conway granite. Not nearly enough to matter at all, and certainly not enough to stop you from making fancy kitchen countertops. But I wonder if folks would buy those if they knew that there was 52 parts per million of something with a half life of 9 billion years. I somehow think it would matter to people, but it really doesn't. AUDIENCE: They put the radiation symbol on it [INAUDIBLE] MICHAEL SHORT: Yes, engrave that. If they made the radiation symbol a little induction stove, that would be pretty slick. Yeah. And then in terms of relative radon risk, how much actually matters? I like this graph. Despite being difficult to read, it actually shows how the average indoor level compares to all the different things you could do. Like getting 2,000 chest X-rays per year versus 100 times the average indoor level.
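The simplified radon chain described above can be written down directly. Half-lives are approximate tabulated values, and the count of alphas and betas per absorbed radon atom matches the "four or five" figure from lecture.

```python
# Simplified Rn-222 decay chain: each radon atom that decays in the body
# yields several alphas and betas over timescales from microseconds to days.
# (name, decay mode, approximate half-life in seconds)
CHAIN = [
    ("Rn-222", "alpha", 3.82 * 24 * 3600),   # 3.82 days
    ("Po-218", "alpha", 3.1 * 60),           # ~3.1 minutes
    ("Pb-214", "beta",  26.8 * 60),          # ~26.8 minutes
    ("Bi-214", "beta",  19.9 * 60),          # ~19.9 minutes
    ("Po-214", "alpha", 164e-6),             # 164 microseconds
]  # the chain continues, but Pb-210's 22-year half-life effectively stops it

alphas = sum(1 for _, mode, _ in CHAIN if mode == "alpha")
betas = sum(1 for _, mode, _ in CHAIN if mode == "beta")
# alphas + betas == 5, the "four or five" emissions per radon atom
```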
That's about what I heard that fellow in Pennsylvania had-- like 80 or 100 times the normal radon levels, from living on top of this giant granite deposit. It's like getting 2,000 chest X-rays per year, or something like smoking four packs a day. I know some people that do this. They don't tend to be that afraid of the radiation that they're taking in from smoking. But yeah, it's pretty insane how much radiation you get. People are afraid to get one chest X-ray, which is not even on this map. One chest X-ray worth of radiation gives you much, much less than living for a year in a house, which we all tend to do. Then if you live in a brick or cinder block house, you actually get a fair bit more radiation, because these are fairly radioactive building materials. And I'll show you those activity levels in just a sec. As far as exposure sources, again, to look at the relative amount of terrestrial gamma ray exposure, you can correlate that pretty well with the non-green regions on a topographic map. So look at the really low levels down here-- that correlates with low lying vegetative areas around here. Colors are a little more extreme on my screen, but you can see this is all low lying right here. From Louisiana, Florida, up the east coast, until you get to the Appalachian Mountains and such. So pretty striking correlations there. And it all comes from what we call the primordial nuclides. These are unstable nuclides that have such long half lives that they still exist, despite the universe being 15 billion years old, or whatever supernova formed our solar system being five plus billion years old. Things that we've already studied almost to death, like potassium-40. And you can see that about 0.011% of all potassium is radioactive potassium-40. About 10% of which gives off gammas in-- what is it, 90/10? I forget the split. But it can give off either betas and then a gamma, or positrons and then a gamma.
Then things like rubidium, in the same column as sodium, potassium, and cesium, so it behaves kind of like a salt-like element. There should be some for-- are there any for chlorine too? Some of the other important ones to note, oh, platinum. Does anyone have any platinum jewelry? Good answer, yeah. I don't either. I teach at a college. Also, I don't like wearing jewelry. But there'll be all sorts of these primordial nuclides that you can't really do anything about. They're just there. Note that the half lives are really, really long. And as you know now, the half life is going to be inversely proportional to the activity. Despite almost all indium-- look at that, 95% of the indium that you'll find emits betas. Doesn't stop people from using it as these awesome glass to metal seals for vacuum components. Because once in a while it might emit a beta ray-- like a whole gasket might emit a beta once every millennium or so. But these half lives actually are measurable. And that begs the question, are there elements with longer half lives that are just too long to measure? Think about what it means to have a half life of 10 to the 15 years, given that the universe is on the order of 10 to the 10 years old. Is it possible that all nuclei will decay till the end of time? I've seen some documentaries-- and I wouldn't call this settled science-- that say, oh, 10 to the 40 years from now, the last protons decay, or the last whatever elements decay into all protons and neutrons. I don't know if that's true, but it does make me wonder, do some of the other so-called stable elements just have ridiculously long half lives? It's something to think about. So let's take a look at the building materials, and see what's in the typical things around us right now. You can see how much uranium, radioactive thorium, and potassium are in these building materials. And the amount is usually measured in picocuries per gram.
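That inverse relation between half-life and activity (A = λN) is easy to check for potassium-40, using its 0.0117% natural abundance. This is a back-of-envelope sketch with rounded constants, not precision nuclear data.

```python
import math

# Specific activity of natural potassium from its K-40 content,
# illustrating A = lambda * N (short half-life -> high activity).
AVOGADRO = 6.022e23
YEAR_S = 3.156e7              # seconds per year

M_K = 39.1                    # g/mol, natural potassium
F_K40 = 1.17e-4               # K-40 isotopic abundance (0.0117%)
T_HALF_K40 = 1.25e9 * YEAR_S  # 1.25 billion years, in seconds

n_k40_per_gram = (AVOGADRO / M_K) * F_K40
lam = math.log(2) / T_HALF_K40
activity_bq_per_g = lam * n_k40_per_gram   # ~30 Bq per gram of natural K
```

That ~30 becquerels per gram is why potash and wood ash show up so clearly on a gamma spectrometer.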
A picocurie is already much less than a becquerel. Because a curie is, what, 3.7 times 10 to the 10 becquerels, and a pico is 10 to the minus 12, so a picocurie is 0.037 becquerels. So for things on the order of a few picocuries per gram, a gram of that material might emit one disintegration every 10 seconds or so. Not a lot of radiation. But take a look at how much potassium there is in granite. Nanocuries per gram-- that's getting into tens of becquerels per gram. If you look at wood, check that out. Anyone heard of potash before? It's one of the ways we get potassium. So if you take wood based things and you burn them in a fire, you drive off all the carbon and the water, and you're left with these kind of whitened salt and pepper ashes. That's potash. That's the ashes left over in the pot after you burn stuff in a fire. They're quite potassium rich. So I think what we'll do next year, instead of burning a bunch of food, is just burn a bunch of wood. We'll have a nice bonfire, collect the ashes. And we can see how much potassium there is in wood. Which, pound for pound, if you see on this list, is the most radioactive building material there is. It's just that wood happens to be pretty inexpensive, and consists mostly of water, lignin, and other carbonaceous materials. They don't have carbon-14 on this list, but let's take a look at some of the other ones. So sandstone cement has a pretty toasty signature to it. Gypsum drywall-- what is it, 13 parts per million uranium in all the drywall you tend to find. Anyone scared yet? Because you shouldn't be. Like I mentioned before, this is the slide I want to show to most folks in the general public. There is such a dose as nothing. And that's pretty much what you get from, let's say, a day's worth of being around these building materials. It's just about nothing. And after the building materials-- there we go-- seawater is another great source of radioactivity. Enough so that people have actually proposed harvesting uranium from seawater.
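The curie-to-becquerel arithmetic above, made explicit using the exact definition 1 Ci = 3.7 × 10^10 Bq:

```python
# Activity unit conversions: curies to becquerels (exact by definition).
CI_TO_BQ = 3.7e10

def pci_per_g_to_bq_per_g(pci_per_g):
    """Picocuries per gram -> becquerels per gram."""
    return pci_per_g * 1e-12 * CI_TO_BQ

def nci_per_g_to_bq_per_g(nci_per_g):
    """Nanocuries per gram -> becquerels per gram."""
    return nci_per_g * 1e-9 * CI_TO_BQ

# A few pCi/g is roughly a tenth of a disintegration per second per gram:
cement_like = pci_per_g_to_bq_per_g(3.0)   # ~0.11 Bq/g
```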
So the total amount of activity in the ocean-- there's something like 11 exabecquerels of radiation contained in the uranium in the seawater. Which means you should be able to have a gigantic trawler sailing around in the ocean, just floating through the seawater, picking up atoms of uranium here and there. Because technically, there's enough to power the world for like 2,000 years. The problem is the ocean is big. That doesn't stop people from actually working on it. So it's neat to think that there's a whole lot of carbon-14 and tritium and uranium in seawater. 300 picocuries per liter. Anyone have any idea why there's so much more potassium activity than anything else in seawater? It's because it's salty, yeah. And potassium's just like sodium. So there'll be a fair bit of potassium in the seawater, 0.0117% of which is potassium-40. And so last year, people asked, what, uranium from seawater? How is that even possible? So this is the part in the course where I'm going to pull out a lot more recent papers to show you some of the cooler innovations going on. So you could use this adsorbent-- adsorbent, with a D. Does anyone know what adsorption means, as opposed to absorption? Adsorption is when something sticks to the surface of something. Absorption with a B is when it actually gets incorporated into the bulk. So folks are thinking about making high surface area materials that can selectively adsorb atoms of uranium. So you send out this huge braided net, or a huge stack of adsorbing material, and just go around in the ocean. Attach them to tankers or cargo ships, and just have them pull in product as they go from coast to coast. And by actually changing around the chemistry and the geometry of this, you can enhance things specifically for uranium by about a factor of three. And this yellowcake right here next to an actual ruler-- that's 50 millimeters right there-- was obtained directly from seawater. So you can actually pull yellowcake out of the ocean.
Just not very much of it. And the way these work is there are interesting compounds that selectively absorb uranium into their structure, not by direct chemical bonding, but by something close to it. For example, my wife tends to study metals bonding to proteins. And it's not necessarily always a full chemical bond like you might think, but the protein can kind of wrap around a metal ion and transport it places. Similar thing going on here. I'm not going to pontificate anymore on it because the title has the word organic in it, and I am definitely not an organic chemist. I don't want to tell you anything wrong, but I do want to tell you that you can actually see this paper to find out what sort of compounds selectively grab onto uranium. And if there's seawater flowing through, and uranium happens to flow nearby, it can bond to it. You can then somehow squeeze out or burn off that adsorbent, and there you get uranium. Now let's talk about what's in the body and interpreting the spectra that you got from EHS, your full body spectra. If you take a look at how much uranium there is in the body-- wow, there's some. Every one of you has got about a becquerel of uranium in the body. But if you look at the relative amounts, the only things that really matter a ton are potassium-40, about 4.4 kilobecquerels per human. So each of you is giving off 4,400 potassium gammas per second. Most of them just flow right through all the other people. And then carbon-14, about 3.7 kilobecquerels. Pretty interesting to know just how much radioactive stuff you have in your own body. You will need this table for homework number eight when I ask you a pretty fun question. Then there's all the various medical procedures. The ones that aren't counted in your annual dose because they tend to save people. But now let's look at the dose in millirem. So if you want to figure out what this is in millisieverts, just divide by 100.
So a regular old chest X-ray, 0.1-- let's see, how does it go? 100 rem in a sievert. Yeah, so about 0.1 millisieverts, or 1,000 banana equivalent dose. Not terrible. Which is why, if we go back to that chart of all those relative risks, if you look at the average indoor level, it's like getting-- well, I would have to guess maybe 40 chest X-rays per year based on this rather crude scale. And you're allowed to get around a millisievert or a few millisieverts of radiation per year. That sounds like it checks out mathematically. Let's look at some of the other crazier ones. Dental bitewing. Yeah. So anyone ever bite down on something where you have to get an X-ray through the side of your mouth? Quite negligible amount of dose, yet they still put the lead apron on you. I'm guessing that's mostly for show, because that's very, very little dose. It's barely beyond the 0.01 millisieverts that counts as nothing. But there's not a lot of dose going into these things. Let's see what really does give you a whole lot of dose. 10 millisieverts for a CT scan, or a whole body CT screening. That's a pretty hefty amount of dose. Right there with one procedure you may get more than your normal annual background dose. But if you're going in to look for stuff, chances are you need to find whatever you're looking for. So we don't count it. And it may give you a slightly higher chance of developing cancer much later down the line when that cell that gets mutated divides. But it's probably going to save your life in the next hour, or the next day. So definitely worth it. Let's see, the worst-- what's the worst procedure we can find? Noncardiac embolization. I don't know enough biology words to know exactly what that means. AUDIENCE: I know what noncardiac means. MICHAEL SHORT: Yeah, I think we all know what noncardiac means. Good. This isn't 7.012. More medical procedures, I'm sorry. Where's the technetium scan?
Yeah, notice how many of these things involve technetium imaging, where you'll inject technetium into a certain organ or a certain vascular or lymph or whatever system. Some of these things can give you a fair bit of dose. Like, again, maybe a heart stress-rest test can give you double or triple your normal background dose. This is why you have to declare to airports if you've just had a medical imaging procedure, because this is well more than enough to pick up on any sort of airport radiation monitor. So again, if you ever get a medical procedure with any sort of radiation, anything, do declare it, because it's quite measurable. Then there's radiation from altitude. I may have mentioned already that the reason that pilots can't fly for a certain number of hours is not fatigue, but radiation exposure. When you start to look at how many microsieverts you get per hour on the ground, it's 0.03-- right down at the level where hanging around for an hour is a negligible dose. You go up to international air travel, and that goes up by a factor of a little more than 100. And so you get a fair bit of radiation exposure. If you take your annual allowable occupational limit of 50 millisieverts and divide by 3.7 microsieverts per hour, you're getting close to-- what is it? How many hours in a year? Let's see. There's three times 10 to the seven seconds in a year. So divide that by 3,600. That's getting on the realm of 10,000. That's about the conversion factor. So you can't spend your life in the air, because you'd get too much radiation according to the occupational risks. And so actually, in addition to some interesting measurements that have been published in papers, we've actually had students go out and build radiation altimeters based on the MIT Geiger counter-- removing the speaker, of course, because you don't want a clicking Geiger counter on a plane. That'd be kind of a stressful situation. Let me show you one example of these.
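The altitude arithmetic above can be made explicit; the 0.03 and 3.7 microsieverts per hour and the 50 millisievert occupational limit are the lecture numbers.

```python
# In-flight dose rate versus the occupational limit, using the figures
# quoted in lecture.
GROUND_USV_PER_HR = 0.03
FLIGHT_USV_PER_HR = 3.7
LIMIT_USV_PER_YEAR = 50_000.0   # 50 mSv occupational limit
HOURS_PER_YEAR = 8760

# Hours of flight that would use up the full occupational limit:
allowed_hours = LIMIT_USV_PER_YEAR / FLIGHT_USV_PER_HR   # ~13,500 hours

# Dose from flying literally nonstop for a year, in millisieverts:
nonstop_year_msv = FLIGHT_USV_PER_HR * HOURS_PER_YEAR / 1000   # ~32 mSv
```

Comparing `allowed_hours` to the 8,760 hours in a year shows why flight crew hours end up dose-limited: nonstop flying eats up a large fraction of the annual occupational allowance.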
These are also published from a paper, but we have pretty similar data-- if you want, go talk to Max Carlson, one of my graduate students, who hooked up his Geiger counter to an alarm clock case that had an Arduino in it. I think this was a poor choice of case for a plane, because it looks kind of like a time bomb. But luckily nobody found it, and he was able to get the data. But you can see just how much more dose you get. And you can correlate the height that you're at with the dose-- in this case, microroentgens per hour or microrads, depending on-- what is it-- ambiguous unit definition right there. But it's quite noticeable. So for those of you who have built Geiger counters and have cell phones, and don't want to have a fake time bomb, you can actually hook your Geiger counter into the microphone port of your cell phone. And with a few available radiation apps, you can actually monitor your dose in microsieverts. Assuming that it all comes from gamma rays, which is most of what a lot of cosmic rays will produce. So it's a pretty safe approximation that your dose in gray flying on the plane is also your dose in sieverts, because it's whole body, and it's gammas. So I'd say try this at home, kids. This is one of those things I recommend you try out. Speaking of these cosmic rays, where do they come from? Well, this is a question that's been under debate, and was more completely answered just a few years ago. They come from very high energy particles from somewhere. It had been argued, do they come mostly from solar flares, or do they come from elsewhere in the galaxy? Mostly we're talking about things like high energy protons or other charged particles. We're also talking neutrinos. Anyone know about how many neutrinos are theorized to pass through you every second? Trillions. Yeah, something like that. But they basically don't interact with matter.
As I showed you guys near the beginning, it takes a gigantic salt mine full of water and exploding photomultiplier tubes in order to catch two or three neutrinos a day if you're lucky. So let's say those don't really matter in terms of background dose. But when those high energy particles interact with the oxygen and nitrogen up here in the atmosphere, they produce a shower and cascade of additional ionization and high energy particles. So it's been said that solar flares and such will accelerate charged particles from the plasma in the sun towards Earth. They're deflected somewhat by the magnetic field of the Earth, but they tend to enter right here at the-- what is it? At the poles. I'm sorry, that's the simple word I was looking for. Until recently, in 2001, they were looking specifically at the evidence for or against the idea that coronal mass ejections-- which means large ejections of mass from the outer, sparser layers of the sun-- were responsible for most charged particles and cosmic rays. Skipping the stuff that's not in bold, it appears that the CME bow shock scenario has been overvalued. So for a while, folks were saying most cosmic rays come from the sun-- that's our nearest star. By making really, really careful measurements of the energy and lifetime of these cosmic rays, folks have actually somewhat disproven that this is the major source of cosmic rays, which is pretty cool. But let's talk about where they actually come from, with reactions that you can probably understand. So extremely high energy protons enter the atmosphere. They all start as high energy protons. And when protons are high enough energy-- and like I do probably in every class ever, I'm going to pull up Janis to show you something-- they can undergo what's called spallation. It's the same principle that the Spallation Neutron Source at Oak Ridge National Lab works on: shoot in extremely high energy protons, out come neutrons. So as usual, it didn't clone the screen right.
So just bear with me for a sec. I'd like to, for probably the first time in this course, switch databases to the incident proton data. Is that actually working? OK, good. So we'll leave the incident neutron data, we'll go to the incident proton data. We'll stick with the same library. Let's see how much they have. Not much, but enough to matter, because there's a lot of nitrogen-14 up in the air. Let's see what happens when protons hit nitrogen-14 all the way at high energy. So I don't quite know what a negative cross-section refers to. But at high energies, this is definitely a possible event. And let's see, there's not a lot of cross-sections to look at here. Let's try oxygen-16. Not much. We'll stick to the slides then. So when a high energy proton strikes a nucleus, it can eject neutrons. And those neutrons can then cause activation reactions, and emit things like protons or tritium, leading to-- that's where your carbon-14 comes from. It comes from nitrogen-14. So in comes a high energy proton, which releases a neutron. That neutron hits nitrogen-14, releases a proton, and out comes carbon-14. So this is why it's being constantly generated in the atmosphere. It's not like there's a certain amount that was there at the beginning of the universe and decays, because its half life is only 5,700 years. So this is part of why radiocarbon dating works, because we have cosmic rays. It's kind of a neat connection to make. If there weren't cosmic rays, all of the carbon-14 would decay pretty quickly on the universe's time scale. And we wouldn't have this form of radiocarbon dating. And then it's the same thing for tritium production in the atmosphere. This is where some of that tritium naturally comes from: a reaction makes carbon-12-- which is the normal form of carbon-- but out comes tritium. So there is going to be some constant isotopic fraction of tritium in all the world's hydrogen. Some of it's being generated in real time. And we do have these spallation sources on Earth.
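The radiocarbon-dating consequence of this constant production is the standard decay-law formula; a quick sketch with the 5,700-year half-life quoted above.

```python
import math

# Radiocarbon dating sketch: cosmic-ray neutrons replenish C-14 via
# n + N-14 -> C-14 + p, so living things hold a steady C-14 fraction;
# after death it decays with a ~5,700-year half-life.
T_HALF_C14 = 5700.0  # years

def age_years(activity_ratio):
    """Age from the ratio of remaining to original C-14 activity."""
    return -T_HALF_C14 / math.log(2) * math.log(activity_ratio)

# A sample with half its original C-14 is one half-life old;
# a quarter remaining means two half-lives.
one_half_life = age_years(0.5)
two_half_lives = age_years(0.25)
```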
Like I mentioned, the Spallation Neutron Source has a gigantic synchrotron. We've seen these before. This one injects protons, which circle around and around and around, accelerating until some of them are extracted and fired onto things like a liquid mercury target, some neutron rich liquid metal. So you want something that's very neutron rich. You want something that's very dense. You want something that's fluid so you can cool it better. And you want something with high thermal conductivity. That's where the metal comes in. So a liquid metal you can keep cool really well. Because when you're firing lots of 800 MeV protons into it, you generate a tremendous amount of heat. And this is what the actual thing looks like. You can get tours of this down in Tennessee at the Oak Ridge National Lab. Actually, I've driven up here before. I recognize this from the map. That's pretty cool. So where the actual neutron science stuff happens, where all the scientists sit with their targets, there's quite a bit of stuff going on behind it. So there is a gigantic-- you can see that's a parking lot for scale. There's a gigantic linear accelerator shooting into the synchrotron ring, which then fires the protons here into the target into one of any number of end stations, which creates a not quite push button, but turn on-able pulsed neutron source, which is pretty slick. And again, parking lot for scale. It takes a lot of magnets to bend an 800 MeV proton. That's what it actually looks like. Has anyone ever seen one of these synchrotrons? Like at Brookhaven or at Oak Ridge or anywhere? They're quite interesting things. The closest one to us is the NSLS, or the National Synchrotron Light Source version two at Brookhaven National Lab. It's like a 2 and 1/2 hour drive down in Long Island. I don't know if they're doing tours yet, but it's about a kilometer around.
And I was told they did bike races around it to see who could beat the protons, which of course everybody loses. But they are pretty insane collections of magnets, vacuum equipment. And once in a while a proton will pass through. And then there's the spallation source itself. So this is what the target looks like. There's liquid metal being pumped in and cooled. And then out of here come lots and lots and lots of neutrons. Enough neutrons that you still need hot cells to deal with things. They're still quite radioactive once activated. But it's not a reactor. Other ways of making neutrons. Speaking of, has anybody seen the pulsed fusion source that we have down in Northwest 13? No? We have a pulsed neutron source that you can come take a look at. It's an electrostatic fusion pulsed machine. There's a whole bunch of tritium and deuterium in this palladium sponge, which happens to hold hydrogen and its isotopes very well. And with a very quick pulse, you can have a tiny pulse of fusion and generate about 10 to the 8th neutrons. That actually is a push button neutron source. So if you want to see a neutron source beyond reactors, go down to the vault in Northwest 13 and ask to see that. We did a quick experiment before trying to activate cell phones to see what was in them. We did not generate enough neutrons to do so. But this cell phone has definitely seen a few fusion neutron pulses. And we checked later, it's not giving off any residual radioactivity. At least none that we can measure. That was a fun failed experiment. And then comes the craziness. Since it's about five of, I want to get into things that will definitely not be on the exam. So just sit back and enjoy and stop taking notes. Complete insanity can happen with super high energy electrons. We've already talked about Bremsstrahlung. We have talked about synchrotron radiation, where you have a charged particle going along a magnetic field line. It changes direction and gives off X-rays.
We haven't talked about inverse Compton scattering. Interesting process here. In comes a low energy photon, hits an electron, out comes a higher energy photon. Compton scattering we usually think of as the other way around, where a high energy photon comes in, scatters off an electron, and loses energy. In this case, a high energy electron colliding with a low energy photon can impart energy to the photon. And you can actually calculate-- or in this case, I've just taken from a paper-- the energy gain from inverse Compton scattering, as well as whatever cross-section there is. And even though this is a very infrequent process-- well, the universe is pretty big, and contains a lot of things that have magnetic fields, like stars and black-- whatever else happens to have magnetic fields. And you can identify radio sources by looking for these inverse Compton scattered X-rays. So the Chandra X-ray map, I believe a piece of which or a receiver for which is up in the building in Porter Square. If you guys go down three stops on the red line, you'll see this little area full of Japanese noodle shops, Lesley University, a bunch of galleries. And in a little-- I think it's still there-- there's a little sign that says, oh, and there's the Chandra X-ray Observatory. Whatever. They may or may not have moved, but I recommend you check it out. And then what happens to those electrons? Well, they can actually decay into pretty interesting things. And so some of this inverse Compton scattering has gone into the evidence for or against where cosmic rays come from, because you should see electrons of a certain energy after undergoing this process. I think I will skip ahead. Oh, I won't skip ahead. And so what these cosmic rays can produce is what's called positive, negative, or neutral pions. These are other subatomic particles with masses somewhere between protons and electrons that can themselves undergo different reactions, or different decays into muons.
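For a rough feel of the inverse Compton energy gain mentioned above, the standard Thomson-limit estimate is that the average upscattered photon energy is about (4/3) gamma squared times the seed photon energy. This is a textbook approximation, not the formula from the paper on the slide, and the numbers below are illustrative:

```python
def inverse_compton_avg_energy(gamma, photon_energy_ev):
    """Average upscattered photon energy in the Thomson limit: ~(4/3) * gamma^2 * E0."""
    return (4.0 / 3.0) * gamma ** 2 * photon_energy_ev

# A relativistic electron with gamma ~ 2000 (roughly a GeV) scattering a
# CMB-like ~1e-3 eV photon boosts it into the keV range -- an X-ray:
print(inverse_compton_avg_energy(2000, 1e-3))
```

That gamma-squared boost is why very soft ambient photons can show up as X-rays around radio sources.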
And don't worry, muons and pions and such are not part of the topic of this course. But they do have known lifetimes, they do have known masses and charges, they do have known stopping powers. We should be able to measure them. And there is a cosmic muon detector at Boston University. Or rather, it's a pair of detectors that looks for this coincidence of one muon scattering off, or interacting with, one detector, and another particle being sensed directly beneath. So we can actually sense these muons to confirm or refute the theories about where they come from in terms of cosmic rays. And these neutral pions are what end up creating these gamma rays. I think they were around the 70 something MeV range. So if these theories about them are correct, we should be able to sense these gamma rays, and sense how many of them there are as a function of energy. That's not what I wanted to do. Let me recreate presentation mode, because this is definitely delving into the kind of stuff that, well, I'm not an expert in. So a quick detour into subatomic physics. You guys have probably heard that protons and neutrons are not the smallest building blocks of matter. They themselves are composed of quarks and antiquarks with different charges and different masses. They're given different flavors. I don't know who came up with this terminology, but it's kind of fun. Things get kind of whimsical when you get into subatomic physics. And these quarks and the gluons between them are what compose protons, neutrons, and their antimatter counterparts. And these sorts of things can also undergo their own decays and reactions. So when beta decay occurs, it's actually one of these down quarks turning into an up quark and releasing an electron and an antineutrino. But again, we're not going to delve even deeper into this. There are other particles composed of other arrangements of quarks. So if you have just an up and an anti-down, you have a positive pion.
Which should have a charge-- the up is plus 2/3, and the anti-down is plus 1/3. So a plus pion should have a charge of plus 1, the same magnitude as the charge of an electron. So if you know the mass of some particle because you know the number of quarks, and you know the charge of it, you should be able to calculate its own stopping power. And figure out how many should get through the atmosphere and such, or how many get absorbed in your detector. And so when a very high energy proton collides with an atmospheric molecule, it creates neutrons, creates a shower of pions. These neutral pions-- much like an electron positron interaction-- can produce their own gammas. So they can spontaneously decay from particles that have mass into gamma rays of pure energy. Which then go on to create their own shower of electrons and positrons by pair production. Because as we saw, the higher in energy you go for photons, the more likely pair production becomes. And there you have it. This is part of the 22.01 stuff, but taken to the nth degree. And then the evidence for pion decay comes from extremely fine measurements of the number of these pions as a function of energy. Or in this case, look at that. They're-- what is it-- ergs per centimeter squared per second. What does that unit physically mean? Not going to get into that. But at any rate, there are different models for how many of these pions or their high energy gamma decay products should be observed. And by looking at those very carefully, you can tell where they came from. Should they have come from coronal mass ejections-- in which case we should know what energy those protons should be-- or some other source? But I am going to stop there, because it is five of. So after that crazy detour, I'll give you guys 10 minutes to degas and absorb some neutral pion gamma decay products. We'll meet upstairs in 10 minutes for an exam review.
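The quark charge arithmetic at the end there can be checked in a few lines. A minimal sketch (the dict and function names are mine, not from the slides): the positive pion is an up quark plus an anti-down quark, so its charge is 2/3 + 1/3 = +1.

```python
from fractions import Fraction

# Quark charges in units of the elementary charge; "bar" means antiquark.
CHARGE = {
    "u": Fraction(2, 3), "d": Fraction(-1, 3),
    "ubar": Fraction(-2, 3), "dbar": Fraction(1, 3),
}

def hadron_charge(quarks):
    """Total charge of a hadron from its quark content."""
    return sum(CHARGE[q] for q in quarks)

print(hadron_charge(["u", "u", "d"]))  # proton: +1
print(hadron_charge(["u", "d", "d"]))  # neutron: 0
print(hadron_charge(["u", "dbar"]))    # positive pion: +1
```

Using exact fractions avoids any floating point fuzz in the thirds.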
MIT 22.01 Introduction to Nuclear Engineering and Ionizing Radiation, Fall 2016 -- 8. Radioactive Decay Modes, Energetics, and Trends. The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: Today we launch into radioactive decay. And so this is kind of what makes us, us in this field, right? Now that you've learned the general Q equation we're going to look at some very simple, specific cases, and specifically all the different things that can come flying out of nuclei and the orbiting electrons around them. First I'd like to try and develop a generalized decay diagram. What are all the different ways that nuclei can decay? And I had written one of these up to show on the slides, and my one-year-old son fixed it with a bunch of markers and crayons, so I think we're going to have to redo this from scratch. So let's say you had a generalized unstable nucleus over here. And we're going to start drawing a generalized decay diagram. You'll see decay diagrams, well, much like these. I've already shown you a couple of these, like these decay diagrams for uranium 235, as soon as I clone my screen so you can see it. There are a couple of axes that aren't drawn on these decay diagrams that will help you interpret them. The first one, the imaginary y-axis, is in order of increasing energy. And the second, an imaginary x-axis, is z, atomic number. So this will help you determine how we read these and how to actually write them. Now what are some of the different ways you've heard of things that can radioactively decay, or that you might have read from the reading? Just yell them out. AUDIENCE: Alpha decay. MICHAEL SHORT: Alpha decay. So in alpha decay, what actually happens?
Let's say that we had a parent nucleus with atomic number z and mass number a. What does it change into? Anyone know what an alpha particle consists of? Yeah. AUDIENCE: A helium nucleus. MICHAEL SHORT: A helium nucleus. So let's just say helium. This will be a 4 and a 2. And there's going to be some daughter nucleus-- we don't know what-- with z minus 2 protons and a minus 4 total nucleons. So if we were to describe alpha decay on a decay diagram, where would we write the final state of this alpha decayed daughter nucleus? To the left or to the right? I know it's like 9:00 AM, but someone just shout it out. You don't have to raise your hand. AUDIENCE: To the left. MICHAEL SHORT: To the left. Yep. Something that's decreasing in z and also decreasing in energy, we would draw an alpha decay like this to the left. So let's say this would be something more stable with a z minus 2-- make that clear-- and an a minus 4. What are some other ways things can decay? I heard a whisper. AUDIENCE: Beta. MICHAEL SHORT: Beta decay. So what happens in-- usually by beta decay, we're referring to beta minus decay, which would be the emission of an electron from the nucleus. Again, what's the physical difference between a beta particle and an electron? Nothing. What's the nomenclature difference? The beta comes from the nucleus. Otherwise, when they come out, they're kind of indistinguishable. So what happens in beta decay? Let's say we have the same parent nucleus starting with z,a. We know it emits an electron with essentially no mass. And what else? This is just a matter of conservation of things here. AUDIENCE: Anti-neutrino. MICHAEL SHORT: There is an anti-neutrino which has pretty close to no mass and no charge. And what about this daughter nucleus? How many protons and total nucleons would it have? Yeah. AUDIENCE: Should have one more proton. MICHAEL SHORT: Should have one more proton and how many more total nucleons? The same. Yep, like that.
And so how would we draw beta decay on this generalized diagram, to the left or to the right? AUDIENCE: Right. MICHAEL SHORT: To the right. It's increasing in z. I haven't defined any scale, so let's just say that's a change of 0. That's minus 1. That's minus 2. And that's plus 1. That's plus 2. Hopefully we won't need to go further than that today. So a beta decay would proceed thusly. So you'd have some other stable nucleus with z plus 1 and mass number a. What are some other decays you might have heard of before? AUDIENCE: Electron capture. MICHAEL SHORT: Electron capture. So in electron capture, what actually happens? Start with the same parent nucleus. In this case, the nucleus actually captures an electron from one of the inner orbitals. And so that, in effect, like, neutralizes a proton, right, in terms of charge. So what do we end up with? Yep. So we'd have some daughter nucleus. If it neutralizes a proton, we'd have one fewer protons. And then how many total nucleons? The same. Yep. There we go. And so if we were to draw electron capture on this map, we would have one fewer proton. So we could have some sort of decay by electron capture. And anything else? What other particles can be emitted from a nucleus? Yeah. AUDIENCE: Positrons? MICHAEL SHORT: Positrons. So let's get this list going up. So if we start off with a parent, z and a, we know we emit a positron, which is the anti-matter equivalent of an electron. So same general characteristics except opposite charge. In this case, we'll give it 0 protons and 0 neutrons. And we end up with-- well, the same daughter nucleus. So we could say that this proceeds by positron emission or electron capture. It's a different process, but the same ending state. But can you have positrons in any possible decay? We actually went over this once. Anyone remember? Yeah, so you're shaking your head no. AUDIENCE: You have to have a certain energy, but I can't remember what the energy is. MICHAEL SHORT: We'll get back into that. You're right.
So I'll put a little box around this because you have to have a certain amount of energy in order to create the positron. And what else? What about the easiest one? What else can be emitted from a nucleus? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: I heard a couple of things. Neutrons. So certainly if you emit a neutron, there are some very unstable nuclei, like helium 5, which exists for what, 10 to the minus 26 seconds or something, that could emit a neutron. If we start off with z and a, then we'll end up with a neutron and a daughter with the same z and a minus 1 total. So what would that look like on a decay chain? You don't usually see this, but we'll draw it anyway. It would go straight down, right? So there'll be some other nucleus. So it'd be the same z, but an a minus 1. And it could decay by neutron emission. Yeah, that totally happens. If you look at the very, very right edge of the table of nuclides-- let's go back to the home page for that-- and look at the super neutron rich. Like helium 10. Who's ever heard of this? It decays by, let's see, two neutrons. So this is so unstable that it just immediately spits out two neutrons. So yeah, these things happen. You won't tend to see this decay in textbooks because it only happens for exceptionally unstable nuclei. But yeah, that's true. It does happen. What else could happen? Remember we've been talking about-- yeah. AUDIENCE: Gammas? MICHAEL SHORT: Right. Could be gammas. And so I'll make one little extra piece here for gamma decay, which is nothing more than a photon emitted from the nucleus. We start off with a parent z and a. And this becomes-- well, what? Should I even write daughter nucleus? I see some people shaking their heads no. Why not? Yeah? AUDIENCE: You have essentially the same atom. It's just one of its electrons should be at a lower energy state. MICHAEL SHORT: Yep. Very close.
You have the same atom, so let's say the same parent, with the same number of protons, the same number of total nucleons. And I'll just correct that to say one of its nucleons is at a lower energy state. But otherwise everything is completely correct. So why don't we put a little star here to say that that was at an excited state? Just like electrons can be promoted to outer shells, pick up a little bit of energy, so can nucleons. So can protons and neutrons. And this is going to be a subject of, well, great discussion in 22.02. For now all you have to know is that nucleons, like electrons, can occupy higher energy states. And when they fall down to lower energy states, they can release that energy in the form of a gamma ray. So you could also have, let's say, squiggly line gamma decay to something stable. And so this right here would be the generalized decay diagram. Anyone ever heard of one isotope that undergoes all these possible decay mechanisms? Glad no one's saying anything, because neither have I. There's one that comes close. Actually, if you look at-- no, that's not this part I want to show you. I want to show you the big one. If you look at potassium 40, the nuclide we probably talked about the most so far, it covers most of the space of this generalized decay diagram. And there was a question that came through-- at least, I think, through non-anonymous email-- what is it that makes these even, even versus odd, odd nuclei less or more stable? Anytime you have an odd, odd nucleus-- odd in both the number of protons and the number of neutrons-- these nuclear shells are not fully occupied, and they're not that stable compared to an even, even nucleus that has an even number of z and an even number of n. Just kind of like electrons, these things tend to travel in pairs. And not fully occupied energy levels will be less stable. Potassium 40 happens to be one of those odd, odd nuclei that is relatively unstable. And it can go either way.
Either you can lose a proton, by competing mechanisms like positron emission or electron capture, or you can gain a proton by beta emission. So this one I like a lot because it gives you almost every possible decay with the exception of alpha decay and spontaneous neutron emission. It's not that unstable. Then for the only one really missing from here-- alpha decay-- I found what I think is the simplest decay diagram ever, dysprosium-151. The only thing it can do is decay by alpha decay to its ground state. I want to point out a few of the features of these decay diagrams so you know what to look for. Up here is the parent nucleus. Down there is the daughter nucleus. And these energies are not absolute. They're relative to the ground state of whatever the daughter nucleus is. So a simple example helps show you that gadolinium-147 doesn't have a binding energy of 0. This is relative to the ground state of gadolinium-147. And that will tell you that the Q value for this reaction is 4.1796 MeV. These things are usually listed in MeV unless said otherwise. You also might notice a pattern that most alpha particles tend to come out around 4 MeV or larger. The answer to why is going to be given in 22.02. Yep. AUDIENCE: Where do these percentages come from? MICHAEL SHORT: These percentages tell you the probability that each decay will happen. AUDIENCE: Oh, yeah. Like how do we derive-- how do we find those out? MICHAEL SHORT: Ah, these are usually measured, because, let's say, things get quantum and difficult in terms of calculating these. And our knowledge of wave functions of, well, higher and higher a or z nuclei gets a little more tenuous. So a lot of these would be measured. You can look at the number of alpha particles of each energy that you observe, and then you get the average probabilities. For this one, it's simple. There's 100% probability, because this is the only decay that exists.
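Stepping back, the whole generalized decay diagram assembled over the past few boards reduces to how each mode changes (Z, A). A minimal sketch of that bookkeeping, with mode names of my own choosing:

```python
# (delta_Z, delta_A) for each mode on the generalized decay diagram.
DECAY_MODES = {
    "alpha":            (-2, -4),
    "beta_minus":       (+1,  0),
    "electron_capture": (-1,  0),
    "beta_plus":        (-1,  0),  # positron emission: same endpoint as EC
    "neutron_emission": ( 0, -1),
    "gamma":            ( 0,  0),  # isomeric transition: same nuclide, lower energy
}

def daughter(z, a, mode):
    """Return (Z, A) of the daughter for a given decay mode of parent (Z, A)."""
    dz, da = DECAY_MODES[mode]
    return (z + dz, a + da)

print(daughter(92, 235, "alpha"))            # U-235 -> Th-231: (90, 231)
print(daughter(19, 40, "beta_minus"))        # K-40  -> Ca-40:  (20, 40)
print(daughter(19, 40, "electron_capture"))  # K-40  -> Ar-40:  (18, 40)
```

Potassium 40 "going either way" is just beta_minus versus electron_capture or beta_plus applied to (19, 40).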
The other things to note, the half life will be given up here, in this case, at 17.9 minutes. So relatively long half life compared to helium 5. And we'll be going over what half lives are on Friday. And then the last thing are the spin states of the initial and final nuclei, which we will not cover in this class, but you will cover in 22.02. So don't worry about those now, but do know that when you need to go find the spin states of the initial and final nuclei to see if certain transitions are allowed, this is where you're going to go. Any questions on what you see here on how to read these decay diagrams? Cool. OK. Then let's move on to the simplest of them, which, in the table, can look the most complicated. So here you can see that there's a whole bunch of different probabilities for different alpha decay energies. This is one of those more complex examples where the easiest thing to do is just measure, see how many alphas you get at each energy, and this will give you the approximate probabilities that each decay happens. And you'll notice here that the final energy states for each of these alphas is not necessarily 0. This will tell you what they are relative to the ground state of, in this case, thorium 231. So you can emit an alpha from any combination of nuclear shell levels inside this nucleus. And you might end up with a new daughter nucleus whose protons or neutrons are in excited states. And the way you remove those excited states is gamma decay, like we talked about here. So a lot of alpha decays are immediately followed by a chain of gamma decays, or what we call ITs, or isomeric transitions. So you'll see a couple bits of notation. For example, gamma decay, you may hear it called isomeric transition. We'll try to give them all so that in the various readings you have, you know what's what. So notice here, you can have, with a probability so small that they didn't bother to draw it, an alpha decay to the 2.634 MeV level.
And then any series of gammas from, let's say, from this state to that state, and then from this state to one of those or one of those, and then another one down there. So an alpha decay may be followed by a whole bunch of gamma transitions, or as few as none. If you want to see what the alpha energies are, well, let's head to the table of nuclides and look at uranium 235. So if we look up U 235, you can see that it alpha decays to thorium 231. And I'll show you the part of the table that I didn't show you in the slides, which is that you've got a table of alpha decay energies as well as relative intensities and what's called a hindrance. This stuff right here comes from the fact that different alpha decay energies can happen with different probabilities at different times. So the half life of a particular alpha decay can be slightly different. And this is another one of those really kooky things, where certain energy alpha transitions will happen a little more often initially than finally. But we don't have to worry about that yet. I just want you to know that's why the hindrance is there. And so you can look up, from this table, what's the probability that each of these alphas will come out. And there's going to be some uncertainty associated with these. This is going to usually be some sort of measurement uncertainty. Then you might also ask, why is it that the highest energy alpha ray is not the same energy as the Q value? For this, it's a greatly simplified application of the Q equation that we learned last time. So for here, what are the two equations that we need to conserve if we have a system consisting of-- we have our initial nuclei going into our final nuclei, and they go off in equal and opposite directions? If it's alpha decay, then we have no little initial nucleus. We just had a large initial nucleus at rest.
And afterwards, you've got a small final nucleus, which we know is the alpha particle, and a large final nucleus, which we'll call the daughter product. And let's say this is the parent. It's a much, much simpler system than the general one we analyzed last week. So what are the equations that we'll use to find out what's the energy of this alpha particle? Anyone? Same three answers as always. Yep. AUDIENCE: Mass, energy, and momentum. MICHAEL SHORT: Yep. Mass, energy, and momentum. I'm going to lump these two together because they're kind of the same thing. So let's just go with energy and momentum. So the initial kinetic energy of this parent nucleus we can assume to be 0. What about the final kinetic energy of the system? Well, there's only two particles. There's going to be some kinetic energy of the alpha particle plus the recoil kinetic energy, because if the alpha goes in one direction, the daughter nucleus has to go off in the other direction. And the total energy comes out to Q. This Q value you can get by conserving mass, where we can say that the mass of the parent has to equal the mass of the alpha plus the mass of the daughter plus Q. So that's where we can get Q if we don't know it already. Luckily, we know it already. So there we've used mass. There we've used energy. And now what are the momenta of the initial and final states here? Anyone? Just shout it out. What's the initial momentum of the parent nucleus? AUDIENCE: 0. MICHAEL SHORT: 0 equals-- what's the momentum of the alpha? Anyone remember that trick, if we want to say p equals mv equals what more convenient form that contains the energy? Square root of 2mT. So let's go with that. So there'll be the square root of 2 times the mass of the alpha times the kinetic energy of the alpha, minus the square root of 2 times the mass of the daughter times the kinetic energy of the daughter, because these have to have equal and opposite momenta.
So all we have to do is move that one over here. This makes that equation easy. Everything's got a square root of 2. We can square both sides. And we end up with a pretty simple relation: mass of the alpha times the kinetic energy of the alpha is the mass of the daughter times the kinetic energy of the daughter. We don't usually care about the kinetic energy of the recoil nucleus or the daughter, because the range is so small that we usually don't get to measure it. But we are trying to measure what the actual alpha particle energies are so that we can reconstruct this table down here. So we can take our energy conservation equation and rearrange it to isolate td, the kinetic energy of the daughter, and say td equals Q minus t alpha. Substitute that in here. And let's rewrite what we've got. Mass of the alpha, t alpha, equals mass of the daughter times Q minus t alpha. If we multiply each term in here by md, we get mdQ minus md t alpha. Then we can take all of the t alphas on one side. So we'll just add md t alpha to each side. So we have m alpha t alpha plus md t alpha equals md Q. We can factor out the t alpha here. And then we can divide each side by m alpha plus m daughter. Cancel out the ma plus md. And there we have the answer. The kinetic energy of the alpha is just the Q value times the ratio of the daughter mass to the total mass. This should look awfully familiar. When we did this in the frame of neutron elastic scattering or any other reaction, we had the same equation with just different notation. So do you guys recognize this form, where we had t3 equals Q times m4 over m3 plus m4? It's the exact same result, just different notation. Last time we did it in the most complex way possible. This time we started off with the simplest possible equations for alpha decay. In the end it's the same Q equation. We just didn't bother with all the other terms and angles and things that we don't need. So is everyone clear where this came from? Cool.
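Both pieces of the calculation, getting Q from the mass balance and then splitting it between the alpha and the recoiling daughter, fit in a few lines. A sketch; the atomic masses are quoted from a standard mass table from memory, so treat the last digits as approximate:

```python
U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, in MeV

def q_value_alpha(m_parent, m_alpha, m_daughter):
    """Q = (m_parent - m_alpha - m_daughter) * c^2; masses in u, Q in MeV."""
    return (m_parent - m_alpha - m_daughter) * U_TO_MEV

def alpha_kinetic_energy(q_mev, m_alpha, m_daughter):
    """T_alpha = Q * m_d / (m_alpha + m_d), the result derived above."""
    return q_mev * m_daughter / (m_alpha + m_daughter)

# U-235 -> Th-231 + alpha, with approximate atomic masses in u:
q_u235 = q_value_alpha(235.043930, 4.002603, 231.036305)
print(round(q_u235, 3))  # a bit under 4.7 MeV, consistent with the diagram

# Dy-151 -> Gd-147 + alpha, Q = 4.1796 MeV from its decay diagram.
# Mass numbers stand in for exact masses, a good approximation here.
t_alpha = alpha_kinetic_energy(4.1796, 4, 147)
t_daughter = 4.1796 - t_alpha
print(round(t_alpha, 4), round(t_daughter, 4))
```

The momentum balance from the derivation, m alpha times t alpha equals m daughter times t daughter, holds for the two outputs, which is a quick way to catch algebra mistakes.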
And that's why you're never going to see an alpha particle that's got the same energy as the initial minus the final energy because the recoil nucleus, or the daughter nucleus, takes away some of that kinetic energy in order to conserve the momentum of the system that was initially at rest. Another way to say this, for those who like center of mass coordinates, is the center of mass of this system was just the parent nucleus. It was at rest. The center of mass of the final system has to remain at rest to conserve momentum. But again, I won't go much into center of mass because I find it a little unintuitive. I'll stick with a laboratory frame of reference. So any questions before I move on? Alpha, I think, is the simplest case of radioactive decay. And I think now you know all you need to know about it. Yes. AUDIENCE: So why do you get so many different types if we just calculated it? Like mb, in mass [INAUDIBLE] change? MICHAEL SHORT: Not ma and md. But ta and td would change. Yep. So in this case, for different alpha decays, they'll have different Q values. So the Q value of, let's say, this top alpha decay is this energy here, 4.676 MeV minus 0.634. So use a different Q and you'll get different ta's and td's. So don't worry. You'll get chances to try out these calculations on the homework, where I'll actually ask you to calculate some of these from this equation, make sure you get the same values as the table. Any other questions on alpha decay before moving on to beta? Just going in order of the Greek alphabet. So beta decay is a kind of funny one. You don't tend to get a beta particle out at the energy of this Q value. You actually end up getting a spectrum. And this measured spectrum of different beta kinetic energies is what led to the thoughts that there must be something else carrying away some of that extra mass or some of that extra energy. I say that like it's the same thing because it totally is. 
And this is what led to the thinking that there's got to be some other very difficult to detect particle. So the theorists here were saying, if we know the initial and final energies from beta decay, and we know that we get a spectrum of different beta energies, and the probability of finding a beta particle at energy Q drops to, like, 0-- you'll almost never see it-- there's got to be something else carrying away the energy. So this idea of the neutrino, or in this case, the anti-neutrino, was proposed a long time before it was confirmed. And finally we know why. And one of the questions I want you to think about, because it might be on an exam in exactly two weeks, is if this is the relative number of electrons from beta decay as a function of energy, what does the number of anti-neutrinos versus energy look like in order to maintain conservation of energy? So it's something I want you guys to think about, but I'm not going to tell you what it is until the solutions for that exam. In the meantime, another thing to note is that these beta decays can also be followed by any number of gamma transitions. I've given you a simple one. If you want to look up simple ones to test your knowledge, go with the light elements. They don't have that many nucleons and they won't have that many transitions. For example, if we pick a beta decay nucleus, something simple. Let's go with lithium, for which the stable isotopes are lithium 6 or lithium 7. So do you think that higher or lower mass number lithium will tend to go by beta decay based on this generalized decay diagram? It's what? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Lower. Lower proton number? Well, we've got to stick with the number of protons because we need to remain lithium. So in other words, do you expect lithium 4 and lithium 5 or lithium 8 and lithium 9 to go by beta decay? AUDIENCE: The higher ones. MICHAEL SHORT: The higher ones. OK.
If you guys remember the mass parabolas from a couple of weeks ago, we delineated where you'd expect beta decay in order to increase the proton number. So if you've got too many neutrons and not enough protons, chances are beta decay will help equalize you out. So as a guess-- I haven't even tried this at home. Let's see. Let's see what happens with lithium 8. Oh, look at that. Beta decay. It can also decay by beta plus 2 alpha, which is another word for the nucleus just blows apart. It's interesting, too, if you read Chadwick's paper again, the way he described a beryllium nucleus is consisting of a neutron plus two alpha particles. Interesting, huh? Lithium 9 could decay by-- or let's say lithium 8. What do we have? Beta plus 2 alpha. Yeah. So Chadwick described any nucleus as consisting of these elementary-ish particles that you could measure. And in this case, you kind of see a physical example. When this nucleus blows apart, it just becomes two alphas and a beta. Interesting. But let's look at the beta decay to beryllium 8. Pretty simple. You may ask why can't you have beta decay directly from the highest energy state to the ground state? That is a 22.02 question that I'll mention. There are allowed and forbidden transitions between spins and energy states. So if you're wondering why isn't every line drawn, in the case of really complex nuclei, there aren't enough pixels on the screen sometimes. But for the simple nuclei, there are actually selection rules to decide when you can make this transition. But a lot of beta decays will usually be something like a beta decay followed by a gamma. So let's see a couple of well-known examples. For example, carbon-14. This is the basis behind carbon dating, one of those rare instances when you have a beta decay directly to the ground state. It's about as simple as it gets.
And because the half life is 5,730 years, it's really useful for dating when an organism or piece of material died on the timescale of, let's say, tens to tens of thousands of years. Once you've gone past a few half lives and there's very little carbon-14 left, there aren't a lot of decays left and your counting statistics get crappy, and it gets harder and harder to carbon date things. The basis behind this is that all living organisms that are intaking and exhaling carbon by some means remain in isotopic equilibrium with the carbon surrounding them. And while most of the carbon in CO2 and food and whatever is carbon-12, you're going to have a little bit of carbon-14 production from the upper atmosphere. This is usually a cosmic ray phenomenon, which we'll get into when we get into cosmic rays. The moment you die you stop intaking carbon, and the little bit of carbon-14 in the cloth and the food and your body, whatever, starts to decay naturally with a very regular decay curve. And so this is the whole basis behind carbon dating. And in the next p-set, you'll actually see how this was used to debunk the Shroud of Turin, or the supposed burial cloth of Jesus of Nazareth, because the carbon dating data just didn't check out. As much as people really wanted to feel like we found it, no. Science. That's the answer. No. Another well-known one we've talked about before is molybdenum-99 decaying to technetium-99 metastable. Notice how here, any number of beta decays and any cascade of very fast gamma transitions almost all end right here at this state of about-- let's see, there's two numbers written over each other. But it's about 140 keV or 0.14 MeV. This transition from this state to the ground state is a slow transition. So you can actually build up technetium-99m in what's called series decay, which we're going to cover on Friday. And then you can use these 140 keV gamma rays to do medical imaging.
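Stepping back for a second to the carbon dating just discussed: the age calculation is nothing more than the exponential decay law inverted. A minimal sketch (the function name is made up; the half-life is the one quoted above):

```python
import math

T_HALF_C14 = 5730.0  # years, as quoted in the lecture

def carbon_age(activity_ratio):
    """Age in years, given (measured C-14 activity) / (living-equilibrium activity).
    Inverts N = N0 * exp(-lambda * t) with lambda = ln(2) / t_half."""
    lam = math.log(2) / T_HALF_C14
    return -math.log(activity_ratio) / lam

# A sample with half the living-tissue C-14 activity is one half-life old
age = carbon_age(0.5)
```

Note how the logarithm punishes old samples: by five or six half-lives the activity ratio is a percent or two, and any counting error gets amplified into a large age error, which is the "crappy statistics" problem mentioned above.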
So when you get a medical imaging procedure done, chances are this is how it's done. You get moly-99 out of a reactor or an accelerator, chemically isolate the technetium-99 metastable, which lasts on the order of six hours or so, very quickly get it to someone, inject it, and image where the gamma rays go, or where the gamma rays come from. One last notable one is responsible for a lot of, well, problems when folks go urban exploring in old dentists' offices. Nowadays they have electrostatic x-ray machines at dentists' offices. But back in the day, you could get a little button of cobalt-60, which would emit two very characteristic gamma rays in addition to its beta decays. So normally what happens is cobalt-60 decays quickly to an excited state and gives off two gamma rays in succession, which would be used for imaging. Problem is, that's a cobalt source. And if you don't know what it is, and you're like, oh, cool, what's this blue thing, I think I'll put it in my pocket and keep it-- that has been responsible for some injuries from some folks that didn't know any better. And then how do you detect the neutrinos? We talked about the theoretical reason why they exist. Let's actually see how they're measured. There is a hollowed-out mine of some sort called Kamiokande in Japan. It's a humongous hole in the ground filled with water, for a reason, and lined with tens of thousands of highly sensitive photo tubes that can pick up tiny, tiny amounts of light. The reason for this is because neutrinos, as you saw in problem set 1, are always traveling near the speed of light in a vacuum. So if the speed of light in a vacuum-- let's call that 1-- and the velocity of the neutrino-- wasn't it something like 0.9999c or something like that? It was pretty high. The speed of light in water is significantly less than the speed of light in a vacuum.
When you have a particle that goes faster than the speed of light in the medium that it's traveling in, then you can produce what's called Cherenkov radiation, which I think I've mentioned once before. It's kind of like a sonic boom in that you get a conical shockwave of energy radiated from that particle that tells you which direction it's coming from. But instead of a sound wave, you get light. And this whole detector is designed to look at the ellipses of Cherenkov radiation released by neutrinos and anti-neutrinos. So what happens is if a neutrino happens to interact with the water here, it produces Cherenkov radiation lighting up a ring of these detectors, so you can tell its energy and you can tell where it came from. So if you, let's say, can correlate a supernova or some sort of crazy galactic whatever with a slight burst of neutrinos, then you've got a pretty significant astronomical event. It also led to my favorite BBC headline ever, "Particle physics telescope explodes." You'd see this on, like, Fox News or something. No, this was the BBC. What happened here is one of these 30,000 or so tubes was slightly defective, couldn't hold the pressure, and it burst. And the resulting sound shock wave from one photo tube bursting blew up about 11,000 of them. So yeah, the particle physics telescope kind of did explode. They did rebuild it and it's still going. It was an expensive repair because all 11,300-something tubes had to be rebuilt. And if you notice, there's a guy on a boat there. How do you install them? Well, you float on a boat quietly, and put the photo tubes in, and raise the water level, and float to another part of the detector quietly, and continue installing the photo tubes until you're done. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yeah. You don't. Yeah. Don't sneeze. So yeah. Favorite BBC headline ever. Thanks again, science.
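The Cherenkov condition just described, v > c/n, can be turned into a threshold kinetic energy for a charged particle in water. A quick sketch, assuming a refractive index of about 1.33 for water (an assumed typical value, not from the lecture), applied to an electron:

```python
import math

M_E_C2 = 0.511  # electron rest mass energy, MeV
N_WATER = 1.33  # refractive index of water (assumed typical value)

# Cherenkov emission requires v > c/n, i.e. beta > 1/n.
beta_threshold = 1.0 / N_WATER

# Relativistic kinetic energy at that speed: T = m_e c^2 (gamma - 1)
gamma = 1.0 / math.sqrt(1.0 - beta_threshold**2)
T_threshold = M_E_C2 * (gamma - 1.0)  # ~0.26 MeV for an electron in water
```

So an electron needs only a quarter MeV or so of kinetic energy before it radiates Cherenkov light in water, which is why the recoil products of neutrino interactions light up the photo tubes.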
For positron decay-- OK, we've got about 10 minutes left-- for positron decay, this is the energy that you need in order to make a positron. It is exactly double the rest mass energy of an electron. And the question usually comes up, well, a positron has a rest mass energy of 0.511 MeV. Why do you need double that to make the positron? Because in order to conserve the charge of the system, you have to shed an orbital electron. So the system has got to be able to lose two electrons in the process, one positively charged and one negatively charged. And so that's why the Q for positron decay is just going to be-- remember, this symbol's the excess mass here-- excess mass of the parent minus excess mass of the daughter minus 2 times the rest mass of the electron times c squared. To refresh your memories a bit, find some empty space. The excess mass is nothing more than the mass minus the horrible approximation of the mass. So the excess mass and the real mass are directly related. And these are things that you can look up. Just to remind you guys that excess mass and mass and binding energy and kinetic energy are all related, again, by the Q equation. It's probably the last time I'll say it because I think that's about 100, by my count. Positrons can be used for some pretty awesome things. And in the last five minutes or so, I want to show you some work done by Professor Brian Wirth at the University of Tennessee, Knoxville on positron annihilation spectroscopy, using anti-matter to probe matter and find out what sort of defects exist. And as a nuclear material scientist, I'd be, well, terrible if I didn't inject a little bit of materials and how we use nuclear stuff in 22.01 in order to probe that thing. So the way that positron annihilation spectroscopy works is that, well, matter's mostly empty space. And then in a regular crystal lattice, where the atoms are arranged in a very regular array, let's say these atoms have their orbital electrons.
The empty space between is also arranged in a very regular array. And positrons annihilate with electrons to produce-- well, we'll find out in a second. But where in matter would they want to live, or where would they last longer? Not near an atom, but near the space in between. So you can map out the empty spaces in matter in a regular crystal and calculate an average positron lifetime. If you were to fire a positron into this matter, how long would it sit and bounce around before colliding with an electron and releasing that extra rest mass energy? It turns out if you have crystalline defects, the positrons tend to last a little longer. There's a little more empty space, which is to say there are more places with a slightly lower probability of finding an electron. And so they last longer. And you can measure the lifetime of positrons before they enter the material, and then how long before they produce their characteristic destruction gamma rays. So if you think about it, you have a positron coming in with a rest mass of 0.511 MeV. And it collides with an orbital electron of some nucleus that has the same rest mass. The positron and the electron annihilate, sending off gamma rays in opposite directions, where the energy of each gamma is the same thing, 0.511 MeV. So you can tell when a positron was destroyed because you instantly get a 1/2 MeV gamma ray. Or actually, you get two 1/2 MeV gamma rays. Then the question is, how do you tell its lifetime? Let's go back to something that I didn't quite point out, but I want to show you now, which is that this positron decay is immediately followed by a 1.27 MeV gamma ray, which in PAS, or Positron Annihilation Spectroscopy, we call the birth gamma ray. This gamma ray is emitted the instant this nucleus is born. And the positron takes a little bit of time to get destroyed. So you actually look at the difference in time between sensing the 1.27 MeV gamma ray and the 0.511 MeV annihilation photons.
And that is measured in, let's say, hundreds of picoseconds with resolution of around 5 picoseconds. And you can then tell, from the lifetime and how many survive, what sort of atomic defects might exist in the material. So if you want to count the number of missing atoms or vacancies in a material, which is extremely important to those of us in radiation damage, you can do so with positron annihilation spectroscopy. So I think I wanted to show you a little bit about how this works. You start off by making a radioactive salt sandwich. You take some sodium chloride, specifically of the isotope sodium-22, which is giving off positrons all the time. And you sandwich that radioactive jelly between the two slices of bread, better known as your sample. That way you catch every positron that gets out so you don't lose half of them to one side. You've got two detectors on either side waiting. So there's some probability that the photons emitted are going to go in the direction of the detector. So you miss most of the signal, but so what? Whenever you actually sense a 1.27 MeV gamma ray followed by two 511 keVs here, then you know you've had a positron annihilation event, and you can actually count the time between when those things happened. And you can see the number of counts and get the average positron lifetime from finding out how many counts you get every five picoseconds, for example. There's something to note about these counting spectra. Anybody know why they're so smooth up here and then they're so delineated down here? Anyone have an idea? You're going to see this a lot in 22.09, when you actually count beta particles or alpha particles and your counting statistics get a little crappier. This is a log scale of counts, or in this case, counts per five picoseconds. 10 to the 0 is better known as 1. So you're looking at one count or two or three. You're looking at the discrete event. You can't have one and 1/2 counts.
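The smooth-versus-jagged look of those counting spectra follows from Poisson statistics: the uncertainty on N counts is sqrt(N), so the relative uncertainty is 1/sqrt(N). A quick sketch (not from the lecture; the function name is made up):

```python
import math

def relative_uncertainty(counts):
    """Poisson counting: sigma = sqrt(N), so sigma/N = 1/sqrt(N)."""
    return 1.0 / math.sqrt(counts)

# Down in the weeds at ~4 counts per bin vs. up near 10^4 counts per bin
low = relative_uncertainty(4)       # 0.5  -- "statistics aren't that good"
high = relative_uncertainty(1.0e4)  # 0.01 -- smooth enough to fit a curve
```

That factor of 50 in relative error is exactly the difference between the clean exponential at the top of the log plot and the scattered single counts at the bottom.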
So you're going to see this kind of thing quite a lot when you're trying to count very rare events. And if you're down in the weeds like this, let's just say your statistics aren't that good. But since this is a logarithmic scale, 10 to the 4th is better known as 10,000, that's enough to get good statistics and fit a nice curve to this positron lifetime thing. This is what one of them actually looks like. And you can kind of tell. Inside there is where all the positrons are coming out. So that's probably lead shielding. Here's two detectors on either side. And here's another detector to detect that 1.27 MeV birth gamma ray. So if you get those three events happening all at the right time, you've got a valid event that you can count. And the last thing I'll mention is you can actually use this, like I said, to get not just the number of vacancies, but the number of different size defects. You might have two or three missing atoms next to each other, which will have different positron lifetimes. And you can actually count the number of each of these to get the diameter or the size of these atomic defects. And this is one of the ways of confirming our models of radiation damage, which is, like, all I do. That's half of our group. If you want to read anything more about positron annihilation spectroscopy, all the stuff in these slides was from these references, which you can look up easily on the MIT libraries. We have access to everything because that's MIT. We just buy everything there is. So I'd encourage you to look here if you want to see more details on how this works and why it works. So because it's exactly five of, I want to open it up to any questions on alpha decay, beta decay, positron decay, or the decay diagrams that we've developed today. Yes. AUDIENCE: What is the most dangerous kind of decay? MICHAEL SHORT: What is the most dangerous kind of decay to be exposed to?
So in this case, you'd want to say the energy of the particle is held constant, and the number of those particles is held constant. And actually, we're going to answer this question when we get to medical and biological effects. But let's do a little flash-forward now. Let's assume, if you want to see which one of these decays is most dangerous, we'll have to say constant-- constant-- energy of decay, constant activity, and what else can we hold constant? Well, constant you. Let's say the same number of particles end up hitting you. That depends on whether they're inside or outside your body. If you were to ingest material, then alphas would be your worst because alpha particles are massive and charged nuclei, which means they interact very strongly with matter around them. So if you ingest them and they end up incorporating into your cells, where they can just get next to DNA, they can just blast it apart. However, an alpha source of equal strength held in your hand would do nothing. The dead skin cells are enough to stop alpha particles. And we're going to find out exactly why when we look at the range and stopping power of different particles in matter. From the outside, alphas won't really get through your skin. Betas might get through a little bit of your skin, but not much. Gamma rays will mostly go right through you. It's neutrons that are the real killers. Those neutrons are heavy but uncharged. So they don't interact that often, but when they do hit, they pack a wallop and they do a lot of damage. And their mean free path in you is on the order of 10 centimeters. So a neutron source from the outside can do a lot of damage from the outside. The alphas and the betas would be stopped by your skin and clothes. The gamma rays, almost all of them will go right through you. And you guys will actually have to do this calculation to find out how many gamma rays would you absorb from a gamma ray emission, and how many go right through you.
The hint is most of them get out. So there's an exam question we used to ask in 22.01 that I was asked during the first exam, which is: you've got four cookies, an alpha emitter, a beta emitter, a gamma emitter, and a neutron emitter of constant energy and activity. You must do one of the following. You have to hold one in your hand at arm's length. You have to put one in your pocket. You have to eat one, and you have to give one to a friend. What do you do and why? Anyone have an idea? Pop quiz. Yeah? AUDIENCE: Probably give the neutron one to a friend. MICHAEL SHORT: That's right. I can tell this is the west, because when I asked a group of Singaporean students the same question, they would eat the neutron to save the friend because of Confucian ethics. Yeah, it doesn't fly here. Your answer is correct because this is America. What would you do with the other three? AUDIENCE: Eat the gamma. MICHAEL SHORT: Eat the gamma because most of the gammas will just go right through you, right? What about the alpha and the beta? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yeah. Hold the beta at arm's length because there's another aspect of shielding betas that we'll get into. When betas stop in a material, they produce some low energy x-rays called bremsstrahlung. So you'd want to get those far from you. And the alpha in your pocket will just be absorbed by the pocket. Yeah, so those are the right answers. So you're not going to see that one on the exam. But good news is you pretty much got the right answer because this is America. Probably time for one more question if anyone has one. Cool. If not, then I want to remind you Amelia will see you on Thursday, so do come to class Thursday. I'm going to change the syllabus to reflect that. And we'll have two hours of class on Friday to get through decay and activity and half life, followed by an hour of recitation. So I will see you guys Friday, and we'll see what mood I'm in depending on how the nano calorimetry goes. Could be a fun measurement.
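To close the loop on the positron-decay Q value from this lecture, here is the mass-excess bookkeeping in code. The sodium-22 and neon-22 mass excesses below are quoted from memory of a nuclide table, so treat them as illustrative and verify against a real table before relying on them:

```python
M_E_C2 = 0.511  # electron rest mass energy, MeV

def q_beta_plus(delta_parent_MeV, delta_daughter_MeV):
    """Q for positron decay from atomic mass excesses (MeV).
    Two electron rest masses are 'lost': the positron itself, plus the
    orbital electron the daughter atom must shed to stay neutral."""
    return delta_parent_MeV - delta_daughter_MeV - 2.0 * M_E_C2

# Na-22 -> Ne-22 (mass excesses from a nuclide table; verify before use)
Q = q_beta_plus(-5.182, -8.025)  # roughly 1.8 MeV available to the beta-plus
```

If the mass-excess difference is less than 1.022 MeV, Q goes negative and positron decay is energetically forbidden, although electron capture may still proceed.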
MIT 22.01 Introduction to Nuclear Engineering and Ionizing Radiation, Fall 2016. Lecture 15: Photon Interaction with Matter II: More Details, Shielding Calculations. The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: Today I want us to go more in-depth into the photon interactions with matter, and we're going to bring the theory back to something that we can actually start to use in doing shielding calculations. If you want to find out how much of this material do I need to shield this many gammas, we're going to answer that question today. First I want to start off, again, with Compton scattering because I messed up a couple of the energy things from last time. I got excited due to an energetic coincidence between the photo peak and the Compton edge of our banana spectrum and one of the examples in the book. So I'm going to correct that now, and we'll go through in more mathematical detail why that wasn't the case, and what the actual quirk of physics is, because there is a constant energy thing here that I want to highlight to you guys. So skipping ahead past the photoelectric effect, which I think was similar, to review Compton scattering. It's the same thing that we saw between two particles, except now one of them is a photon. And like I said, on the next homework, after quiz one, you guys will be doing the balance between energy and momentum, because the photons don't really have mass, in order to figure out what's the relationship between the incoming energy of the photon, the outgoing energy of the photon, and the recoil energy of the electron. And so these are the relationships we were showing last time.
It is an interesting quirk of physics that the wavelength shift itself does not depend on the energy of the photon coming in. As you can see, it just depends on the angle that it scatters at and a bunch of constants, where that m right there stands for the mass of the electron. Now, while that wavelength shift does not depend on the energy of the incoming photon, the recoil energy does. You can see it depends on both the energy and the angle. And the incoming energy of the photon equals h nu. And to give that quick primer on photon things, I want to show you guys here why that's the case. So even a constant wavelength shift might give you a non-constant energy shift. So even though in this Compton scattering formula, the wavelength shift only varies with the angle, the energy shift actually depends on the angle and the incoming energy of the photon. So now let's look at a couple of limiting cases. So as we have e of the photon equals h nu goes to 0, what does t approach? The recoil energy of the electron. Let's just do out the formula. This recoil energy equals h nu, which is the energy of the incoming gamma, times 1 minus cosine theta over mc squared over h nu plus 1 minus cosine theta. As h nu approaches 0, what happens here? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yeah. H nu goes to 0. This fraction goes to infinity. And this goes to 0. Hopefully that's an intuitive explanation. If the incoming photon has 0 energy, it can transfer 0 energy to the electron. Now the more interesting case. What happens now as e gamma approaches infinity, as the photon gets higher and higher in energy? AUDIENCE: T just approaches h nu. MICHAEL SHORT: T approaches almost h nu. I actually want to do a quick calculation without doing all the limit math. Let's say we had energy of the gamma of 1 GeV, an extremely high energy. So all we'd plug in-- and let's say we wanted to find out what's the maximum energy of this recoil electron.
And this is something I want to ask you guys. I can't remember. Yesterday did we say that t is a maximum at theta equals pi or pi over 2? What did we say yesterday? AUDIENCE: Pi over 2. MICHAEL SHORT: Interesting. It's pi, actually. The case where you have the largest energy transfer-- and sorry for not catching that-- is just like in a nuclear collision: if the photon were to back scatter, it transfers the maximum energy to the electron. So the analogy here is, like, perfect. Between two particles hitting and between a photon and a particle hitting, the maximum energy is when theta equals pi. And let's actually plug that in to find out why. If we say t max equals h nu-- the energy of the photon coming in-- times 1 minus cosine theta over mc squared over h nu plus 1 minus cosine theta. At theta equals pi, cosine theta goes to minus 1, so each 1 minus cosine theta becomes 2. And so this becomes 2h nu over mc squared over h nu plus 2. And so, without worrying about the numerator, especially in the limit of very high energy photons, you can see that that actually maximizes the recoil energy of the electron. The reason we're harping so much on this recoil energy of the electron is because that's what we measure. So when we look at our banana spectrum, you're not measuring the energy of the photon. You're measuring the recoil energy of the electron and the ionization cascade that happens as all those electrons smash into each other, creating electron-hole pairs, which are counted as current. If you guys remember from last class-- in fact, I'll just bring up the blackboard image because we can do that. So I took a picture of the board yesterday, photon interactions Part 1. There it is. We'll just use the screen as a bigger blackboard for now. So if you guys remember, let's say a gamma ray comes in and causes a Compton scatter event or a photoelectric emission, or whatever. It doesn't matter which process.
And it liberates an electron, either by scattering off of it or just getting absorbed and ejecting it. Or it doesn't really matter how, but it creates this electron-hole pair. That electron right here has this recoil energy, which depends on theta, the angle that it scatters, and the incoming energy. And that electron's going to keep moving in this material, knocking into other electrons very, very efficiently, so that most of the energy of that electron recoil actually gets counted as other electrons being freed. We're going to go over on-- well, next Friday-- electron nuclear interactions, including what's the probability and energy transfer when electrons slam into each other or when ions slam into electrons or each other. So let me go back to the slides. And so this maximum, as this approaches infinity, this actually approaches a value of h nu minus .255 MeV. Just to do the quick calculation to give a numerical example, I plug in theta equals pi and h nu equals 1 giga-electron volt, or 1,000 MeV. And can anyone remind me what is the mass of the electron times c squared? What's the rest mass of the electron? Sorry? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Right. While I'm usually against memorizing anything-- because that's what books and the internet are for-- this is one of those quantities that I want on the tip of your tongue as nuclear engineers. You should remember what the rest mass of the electron is because a lot of our quantities are calculated based upon it. For example, this ratio-- what was it, h nu over m e c squared-- gives you the energy of the photon in terms of the number of electron rest masses, which is a useful quantity in itself. So why don't I plug all this stuff in. So we have 2 times h nu over the quantity .511 over 1,000 plus 2. And we get-- so t becomes 999.745 MeV. Interestingly enough, 1,000 MeV, our incoming photon, minus that equals that right there.
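The 1 GeV example just worked out, and the limiting behavior behind it, can be checked in a few lines. This is a sketch (not from the lecture); it uses the back-scatter formula T_max = 2(h nu)^2 / (m_e c^2 + 2 h nu) derived on the board:

```python
M_E_C2 = 0.511  # electron rest mass energy, MeV

def compton_t_max(E_gamma):
    """Maximum Compton recoil energy (back scatter, theta = pi), in MeV."""
    return 2.0 * E_gamma**2 / (M_E_C2 + 2.0 * E_gamma)

# The photon's minimum leaving energy, E - T_max, creeps up toward
# half the electron rest mass (about 0.2555 MeV) as E grows.
leftovers = [E - compton_t_max(E) for E in (1.0, 10.0, 1000.0)]
```

At 1,000 MeV this reproduces the 999.745 MeV recoil from the board, and the leftover energy is already within a fraction of a keV of the m_e c^2 / 2 asymptote.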
So that's the interesting quirk of physics: as the photon increases in energy, the minimum amount of energy the photon can leave with approaches .255 MeV, and the maximum amount it can impart to an electron approaches the photon energy minus that. What do you guys notice about that number? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: That's right. It's exactly half the rest mass of the electron. So as your photons hit the GeV range and above, they all leave with about half the rest mass energy of the electron, which means you have more and more energy able to be transferred in a given Compton scatter as the photon gets higher and higher in energy. So in the limit of low energy, the photons basically bounce off without transferring much energy at all. And the higher in energy the photon gets, the higher a percentage of that maximum transfer can be. So that's what I wanted to clarify from last time. It was an interesting coincidence that our photo peak and our Compton edge were pretty close to that number apart. But one MeV isn't quite infinity. But it is pretty close. That difference right there, what does that come out to? Before I say something stupid, I'll just calculate it. 1.241. It's like 0.218 MeV, so we're already most of the way there. So once you reach, like, 10 or 100 MeV, you're pretty much at that limit. And so what that tells you is that the distance between a photo peak and its corresponding Compton edge for high energy photons is going to be half the rest mass of the electron. Once you get to lower and lower energies, that distance will start to shrink. And I have a few examples I want to show you. But first, in order to understand these, now I want to get into the part I told you we'd get to yesterday, which is what's the probability that a Compton scatter happens at a certain angle? And this polar plot explains it pretty well.
In the limit of really low energy photons, like .01 MeV or 10 keV photons, you can see that this forward scattering to an angle of 0, or back scattering to an angle of 180-- there's a 180 that's blocked by an axis right there-- they're almost the same. So between forward and back scattering, there's not really that much of a big difference in probability. So if we were going to start graphing theta of this Compton scatter as a function of the-- going to introduce this new quantity, this angularly dependent cross-section. Before, we were giving you cross-sections in the form of just sigmas, like sigma Compton. Now we're actually telling you what's the probability of that interaction happening at this certain angle. So it's called the differential cross-section, and you can have all sorts of differential cross-sections, like energy differential cross-sections, angle differential cross-sections, whatever have you. So if we try and graph what the shape of this polar plot looks like on a more understandable graph, we can see that the probability is pretty high. Let's just call that a relative probability of 1. And as we trace around this circle, that value gets lower and lower until we hit 90 degrees, or pi over 2, at which point it starts to pop back up almost to its original value. So this was for a 10 keV photon. Now let's take a different extreme example. We have a 3 MeV photon right here. And it's that long dashed curve, so that one here in the center. So we can see that the relative probability at 0 degrees is the same. And if we trace around to 180, we're almost at the origin, which tells us that, for 3 MeV, it starts off the same and quickly drops really far down. What that means is that what's called forward scattering is preferred. So up on this board, we were talking about what's the maximum energy that a photon can transfer, which is always in the back scattering case.
The other part to note is that that back scattering probability gets lower and lower as the photon gets higher and higher in energy. Yeah. AUDIENCE: So is that basically saying, since we said forward scattering [INAUDIBLE] as the energy gets higher, you just have a harder chance of it interacting. MICHAEL SHORT: Exactly. Yeah. The cross-section value, well, yeah, it goes down as you increase in energy. So a total forward scatter, if you had a true forward scatter where theta equals 0, I'll call that a miss. It means that-- because we saw on this formula up here, when theta equals 0, there is no energy transfer. Nothing. And so yeah, that would be to me like a miss. If that angle's ever so slightly above 0, then there is some scattering, but there's very little energy transfer. But that smaller energy transfer becomes more likely when the photon goes higher in energy. These are the sorts of cause and effect relationships I want you guys to be able to reason out. If I were to give you a polar plot of this differential cross-section with angle and energy, I'd want you to be able to reproduce this and tell me what's really going on. If we fill in one of the ones in between-- let's go with 0.2 MeV, this sort of single dashed line-- you can see that the probability of back scattering is somewhere between the 3 MeV and the 10 keV. Yeah, Luke? AUDIENCE: Back scattering refers to the photon going back. MICHAEL SHORT: That's right. Back scattering refers to the photon going back, which means this situation, where the difference in angle between the incoming and outgoing photon is pi, 180 degrees, which means it just turns around and moves the other way. Right. So that dashed line right there would follow something like this at-- what did we say? 0.2 MeV. The full form of this cross-section right here is referred to as the Klein-Nishina cross-section. And it can be derived from quantum electrodynamics, which I will not derive for you now.
But there are plenty of derivations online if that's your kind of thing. And I had to make a trade-off in this class of how deep do we go into each concept versus how many concepts do we teach? And I'm going for the latter because if there is any course that's supposed to give you an overview of nuclear, it's 22.01. There will be plenty of time for quantum in 22.02 and beyond, should you want. At any rate, this is the general form of it. And what this actually tells you is that as the energy of the photon increases, the effect of that angle will-- you guys'll have to work that out on a homework problem. I just remembered. I want to stop stealing your thunder and giving away half the homework. Yep. AUDIENCE: How does [INAUDIBLE]? MICHAEL SHORT: Yes. AUDIENCE: --the quantity D sigma D omega. MICHAEL SHORT: The quantity D sigma D omega says let's say you have a photon coming in at our x-axis. You've got an electron here. What's the probability that I'm going to scatter off into some small area d omega? So in some small d theta d phi, or some small sine theta, d theta d phi, into an element of solid angle. I should probably draw that smaller to be a little more differential looking. So gammas are going to scatter off in all directions. But this d sigma d omega tells you what's the probability that it goes through that little patch? AUDIENCE: And then that omega is also a function of [INAUDIBLE]?? MICHAEL SHORT: So that omega has some component of theta in it and some component of phi in it. Since it's a solid angle, it depends on both the angle of rotation and the angle of inclination, which we call theta and phi. Now to get from this to the regular cross-section you're used to, you can integrate over all angles omega of the differential cross-section, and you'll get the regular total cross-section, which is just what is the probability of Compton scattering, full stop. 
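That integration can be sketched numerically. The code below uses the standard textbook form of the Klein-Nishina differential cross-section and checks the numerical integral over all solid angle against the standard closed-form total cross-section; the constants and function names are my own choices, not from the lecture:

```python
import math

R_E = 2.8179403e-13  # classical electron radius in cm (standard value, assumed)
M_E = 0.511          # electron rest energy in MeV

def klein_nishina(e_gamma, theta):
    """Klein-Nishina d(sigma)/d(Omega) in cm^2/sr for a free electron, angle theta (radians)."""
    alpha = e_gamma / M_E
    p = 1.0 / (1.0 + alpha * (1.0 - math.cos(theta)))  # ratio E'/E from the Compton formula
    return 0.5 * R_E**2 * p**2 * (p + 1.0 / p - math.sin(theta)**2)

def total_cross_section_numeric(e_gamma, n=20001):
    """Integrate d(sigma)/d(Omega) * 2*pi*sin(theta) over theta in [0, pi] (trapezoid rule)."""
    h = math.pi / (n - 1)
    total = 0.0
    for i in range(n):
        theta = i * h
        weight = 0.5 if i in (0, n - 1) else 1.0
        total += weight * klein_nishina(e_gamma, theta) * 2.0 * math.pi * math.sin(theta)
    return total * h

def total_cross_section_analytic(e_gamma):
    """Closed-form Klein-Nishina total cross-section in cm^2."""
    a = e_gamma / M_E
    term1 = (1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - math.log(1 + 2 * a) / a)
    term2 = math.log(1 + 2 * a) / (2 * a)
    term3 = -(1 + 3 * a) / (1 + 2 * a)**2
    return 2 * math.pi * R_E**2 * (term1 + term2 + term3)

# forward scattering has the same height at any energy; back scattering fades as energy rises
assert klein_nishina(3.0, 0.0) == klein_nishina(0.01, 0.0)
assert klein_nishina(3.0, math.pi) < klein_nishina(0.01, math.pi)
```

The two asserts at the end reproduce the behavior of the polar plots: the value at 0 degrees is the same for every energy, while the 180-degree value drops away for the 3 MeV photon.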
If you wanted to know, then, what's the probability of Compton scattering into this angle, it sounds kind of boring, right? Why do we care about the angle? Anyone bored yet? You can raise your hands. Be honest. Interesting. OK. Well, I'm going to tell you why it's not boring because I don't think you're honest. You can actually, if you know the angle at which a Compton photon scatters into-- actually I want to leave that stuff up-- there is a pretty much one to one relation between the energy and the angle of scattering, which means that let's say you have a cargo container. I'm sorry. You don't have a cargo container. You have a cargo ship full of tons and tons of these stacked up cargo containers. Has anyone actually ever seen one of these before? OK. In case not, I'm going to do something dangerous and go to the internet. And hopefully the search for cargo container doesn't come up with something disgusting. Oh, look at that. How about cargo container ship? Yeah. OK. You got one of these, right? And your detector goes off, and it just says there is something radioactive that shouldn't be here. How do you find out which container it's in without taking the ship apart? Interesting problem, huh? Do you just kind of look-- yeah? AUDIENCE: Do you kind of like shield certain angles [INAUDIBLE]? MICHAEL SHORT: That's one way. You could mask off the ship and move your detector around, and then do it for the other two dimensions. So that will get you there, but slowly. How do you do it quickly? Well, you do it with the Klein Nishina cross-section. If you know the relationship between the energy and the angle of a Compton scatter, you take two detectors, Detector 1, Detector 2, and you form what's called a Compton camera. This is awesome because with two detectors, you can pinpoint the location of a radiation source by knowing-- let's say you had a gamma ray that entered Detector 1. So you have your initial e gamma. And you get a spectrum. Let's draw a couple of spectra. 
I'm going to use different colors so I can make them bigger. This will be intensity and this will be energy. So we have our blue Detector 1, and we have some spectrum for Detector 1 where we get the photo peak of the gamma at e gamma. And this time, you don't see every possible angle. You only see whatever angle you get that Compton scatter at. Or you might see a whole bunch of different angles. Never mind. So you're going to see the Compton edge in this whole continuum of things. And so you know that whatever energy this corresponds to means that theta equals pi. That energy corresponds to theta equals 0. And you know that there's a source somewhere in here. Now let's say that photon scatters out of Detector 1 and into Detector 2. In this case, you no longer just know that you have a source of some sort. You'll end up with a certain photo peak corresponding to this e gamma prime, the only energy that can Compton scatter in the direction from Detector 1 to Detector 2, because you have now determined the angles between the line between the detectors and the detector at the source. So then you get a photo peak and the corresponding Compton edge for your e gamma prime. Your e gamma prime, that tells you the angle that it came off of for your first interaction. So that is your source angle. So what that means is that by using these two angles, you've now pinpointed your source to lie somewhere on these two cones projected back-- on one of the two points where those cones at that angle intersect. This is something I want you to try and think about and work out on your own. But it's really cool to explain this because with one detector, you know that there's a source somewhere, and you know generally where to point. With two detectors and energy resolution, the energy of the photo peak of the second event tells you what the angle of the first event was.
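The key step in a Compton camera-- turning a measured photopeak energy back into a scattering angle-- is just the Compton formula run in reverse. A minimal sketch, where the 0.662 MeV example energy and all the names are my own illustrative choices, not from the lecture:

```python
import math

M_E = 0.511  # electron rest energy in MeV

def scattered_energy(e_gamma, theta):
    """Forward direction: photon energy after a Compton scatter through angle theta."""
    return e_gamma / (1.0 + (e_gamma / M_E) * (1.0 - math.cos(theta)))

def scattering_angle(e_gamma, e_prime):
    """Inverse direction: the angle that produced a measured scattered energy e_prime."""
    cos_theta = 1.0 - M_E * (1.0 / e_prime - 1.0 / e_gamma)
    return math.acos(cos_theta)

# round trip: a hypothetical 0.662 MeV photon scattering through 60 degrees
theta = math.radians(60.0)
e_prime = scattered_energy(0.662, theta)
assert abs(scattering_angle(0.662, e_prime) - theta) < 1e-12
```

In the camera, e_gamma comes from the first detector's photopeak (the source identity) and e_prime from the second detector's photopeak, so the recovered angle defines the cone of possible source locations.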
And this way, if you know what source you're looking for from the first photo peak, and you know what angle you're looking for from the Compton scattered photon's photo peak, because they have those, then you know not only what the source is, but where it is. And you know which container you should take apart to start looking. So this is why we care about angle, because there's actual, real ways of using this to your advantage to solve some pretty insane problems, like which container would that be in. Who's starting to get the general idea behind a Compton camera, or who would like another explanation? Anyone? I asked two questions. So who would like another explanation? Yeah, OK. The idea here is with just one detector, all you know is whether or not there is a source. The only information you're getting is its photo peak and Compton edge and bowl. And so you know the identity of the source. Maybe it's cobalt 60 or something. But you don't know where it is. By making a second measurement, you can then determine where on the Compton spectrum of the first measurement does the photon lie. So by saying OK, the photo peak corresponds to this energy, which corresponds to some certain angle that this had to scatter off of. So then you know what this angle is. You've determined theta. AUDIENCE: Is that the same theta that you just drew as theta angle? MICHAEL SHORT: Yes. OK. Angle. You've then determined this angle because you know the incoming path of the photon, and you now know the outgoing path, which means you know angle. So if you know the line between these two detectors, and your photo peak lines up with your first Compton spectrum's angle, you then look at that angle back. You also have to sweep around in the other direction, which means you end up with a cone of possible locations. Yeah. AUDIENCE: How do you know that the photons that get to the second scatter from inside the first one? MICHAEL SHORT: There can be only a couple of things that could happen, right?
You could have another direct photon just shoot into Detector 2, in which case you'll just get a little bit of photo peak. But we know to expect that, so we can ignore it. We're specifically looking for the photo peak coming from Detector 1. AUDIENCE: So they wouldn't just go in the air and do Compton scatter [INAUDIBLE]?? MICHAEL SHORT: They would, which gives the perfect pretext to bring up the cross-sections for these processes. So I'm going to skip ahead a little bit and start getting into what do the actual cross-sections look like for Compton scattering, photoelectric, and pair production? All of them have to do with the electron density of the material. The more electrons in the way, the more you get these events happening. So yes, you will get Compton scatters in the air because air contains electrons and they do Compton scatter. But air is not very dense, so you will get comparatively less. So let's say that adds a little bit of noise on the bottom, which is like any detector spectrum we've ever seen. There's always noise from all sorts of other processes we're not looking for. So now let's start to look at what the general energy and z ranges of each of these effects are. And we're going to recreate one of the plots that I showed you at the beginning of yesterday's class. So let's make a graph of energy of the photon and z of the material. And we want to try to map out where the following three processes are dominant. So tau, our photoelectric effect. C, which we'll call our Compton scattering. And kappa, which we'll call pair production. And these cross-sections do give us relative probabilities as a function of the energy of the photon and the medium they're going through, along with the actual density, that something's going to happen. So let's look at the form of these. The cross-section for the photoelectric effect scales with z to the 5th. Do you think that photoelectric effect will be more likely for low z or high z materials? AUDIENCE: High.
MICHAEL SHORT: High z materials, right. So we know we're going to be in the top half area. And it scales with one over energy to the 7/2. So do you think this will be most likely with low or high energy? AUDIENCE: Low. MICHAEL SHORT: So lower energy. So we're going to fill in our photoelectric somewhat. Oh, I'm sorry. So we know it's going to be in this part of the z. We know it's going to be in this part of the energy curve. So let's fill in that area as photoelectric. Let's look at the other extreme, pair production. It scales with z squared, so is that going to be in the low or the high z area? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: I think still high, just the bigger the z, the bigger the cross-section, right? So we know that pair production is going to be in the high z area. Now how about the energy? It scales with the log of energy, but the energy is on the top. So will a low or a high energy give you more pair production? High. So pair production is going to be here, leaving everything else for Compton scattering. And if we jump back to the start of last class, we've reproduced from the cross-sections the same sort of plot that we saw before just from looking at the relative probabilities of each effect, which I think is pretty cool. We can now do that with some basic physics knowledge. Last thing I want to fill in for Compton scattering is that the Klein-Nishina cross-section, which is your differential cross-section as a function of omega, combined with the probability that you scatter into a certain angle and a certain energy-- because there's a one-to-one relationship-- ends up giving you your d sigma c over de, which is your energy distribution of Compton recoil electrons. And that's directly what leads to the shape right here, where if you have, let's say, 0.51 MeV, relatively low energy, there's-- let's see. What do I want to say here? Yeah.
So as you go up in higher and higher energy, you get fewer and fewer Compton electrons, because like Jared was saying, the probability of a Compton interaction does go down with increasing energy, as we kind of reasoned out. But in addition, you get relatively more of these back scattered ones, which is kind of interesting. I have to think about that one. But this is the typical shape that you tend to see for Compton scattering. For high energy photons, you get a Compton peak and then a very long, shallow tail. And as you go lower and lower energy, it starts to sort of bounce back up. And yeah, Luke? AUDIENCE: When Compton scattering happens, are the electrons being knocked off the atom? MICHAEL SHORT: Oh, yeah. AUDIENCE: OK. So how is that different from the photoelectric effect? MICHAEL SHORT: The photoelectric effect is an absorption followed by an ejection. In Compton scattering, the photon is still intact. It just loses energy, gains wavelength. But the energy of Compton scattering is-- let's say, for MeV photons, it's on the order of hundreds of keV-- plenty to eject most of the electrons from an atom. So you always have to think about are you going to eject something? Are you above the work function? Work function for most atoms-- so we had a nice plot of that-- is in the eV range. So chances are, yeah, Compton scattering mostly is going to be ejecting electrons, too. That's the whole reason we can count them. If there wasn't an electron ejection, there would be no electron ionization cascade to count. So there's some sort of empirical or experimental proof that it does happen. And speaking of experimental proof, you can actually see that shape. In this case, this is a spectrum taken from two different gamma sources together, a-- what is it, 1.28 MeV and a 0.51 MeV. And you can see that the 1.28 MeV, first of all, has a way lower number of counts, showing that the cross-section does, indeed, decrease with higher energy.
And it doesn't have-- well, you can't really even see-- but it does not curve back up, whereas this lower energy Compton photon is scattering more often because it's much higher, and it's got that bump back up at the really low energies. So it's kind of neat when the math that we're looking at, if I jump ahead to the cross-sections, you can see that you'd expect the Compton scattering to decrease with energy. And you look at an experimental plot of two different energies, and there you have it, higher energy, less total Compton scattering, but different shape. So any questions before I move on to how you can use them to design shielding? So what we're getting here, these cross-sections, are probably better known to some of you as mass attenuation coefficients, which are simpler ways of describing how many photons in a narrow beam would undergo some sort of process and be removed from the beam. And they get removed exponentially for the same sort of reason that pretty much everything in this class ends up being an exponential, doesn't it? Where if you have some intensity of photons, some change in intensity that's proportional to a change in x and the initial intensity-- let's see-- and some-- what is it, constant of proportionality like the cross-section, or we'll call it now mu, this mass attenuation coefficient. The answer to that ends up being pretty much the same. And it's this simple exponential thing. And the nice thing is you don't have to calculate all these different cross-sections because they're tabulated for you by NIST. And that's one of the links on the website, on the learning modules website that I want to show to you guys now. You can actually look up these total summed cross-sections for gamma ray interactions as a function of energy versus their mass attenuation coefficient in centimeters squared per gram. The reason it's in that unit is it just tells you what the material does. It doesn't tell you how much you have in the way.
And that's why I've rewritten this exponential attenuation formula with a rho on the bottom and a rho on the top, that rho being the density of the material because usually you can just say, it's like i naught e to the minus mu x. But these are the things that you'll look up in tables. And this rho right here is whatever density you have of your material. So if you want to calculate how much better cold water is shielding than hot water because of its change in density, you can then look up the value for water, which I want to show you how to do right now. Back up to see the actual site. So if you guys looked right here, these-- what is it-- NIST tables of x-ray absorption coefficients. I'll show you how to read through this table now because you'll need it from everything from problem set 5 to the rest of your life. This is one of those places you're going to go constantly looking for nuclear data. So you can either look at elemental media or compounds and mixtures, water being one of them. So let's go down to water, liquid. If you notice, what's actually given here is this mu over rho. So the nasty, what is it, density specific mass attenuation coefficient. In other words, it's a cross-section, really. It's a microscopic cross-section. I don't know why, in a lot of these courses, they're introduced separately because they're the same thing. They're interaction probabilities. And then that other rho from the slides just tells you how much is in the way. That rho times x just tells you how much water's in the way in terms of how dense it is and how thick your water shield is. So using these tables, if you know, let's say, you're sending in one MeV photon, so we look up 10 to the 0 MeV-- you then have this value, this nice round value of 10 to the minus 1 centimeters squared per gram. You then multiply by the density of the water that you have multiplied by the thickness of your water shielding, and you get the change in intensity of the photons.
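The lookup-and-multiply recipe just described amounts to a one-line calculation. Here is a sketch using the value quoted above for water at 1 MeV (mu/rho = 0.1 cm^2/g) and assumed handbook densities for cold and hot liquid water; the function name is my own:

```python
import math

def transmitted_fraction(mu_over_rho, rho, x):
    """Narrow-beam attenuation: I/I0 = exp(-(mu/rho) * rho * x).
    mu_over_rho in cm^2/g, rho in g/cm^3, x in cm, so the exponent is unitless."""
    return math.exp(-mu_over_rho * rho * x)

# 1 MeV photons through 10 cm of water, cold (0 C) vs hot (100 C)
cold = transmitted_fraction(0.1, 1.000, 10.0)  # ~36.8% transmitted
hot = transmitted_fraction(0.1, 0.958, 10.0)   # ~38.4% transmitted
assert hot > cold  # less dense water shields slightly worse
```

Checking that the exponent comes out unitless before evaluating, as the lecture suggests, is the quickest way to catch a factor-of-a-billion mistake in these calculations.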
And let's do an example calculation just to make it a little more real. So let's say we have a beam of photons of intensity i naught. And we're sending it through a tank of water. And the question is, do you want to keep this water at 0 Celsius or at 100 Celsius? What's the difference in shielding between freezing and boiling liquid water? Well, we can look that up. And let's say we have to specify an energy of the photons. We'll call it 1 MeV photons. So we can look up and say at 10 to the 0 MeV, we go over. We get just about 0.1 mu over rho. So mu over rho equals 0.1 centimeters squared per gram. And now we can set up two equations, one for 0 Celsius and one for 100 Celsius. So we'll say i at 0 C equals i naught e to the minus 0.1 times rho at 0 Celsius times x. Let's say we have, I don't know, 10 centimeters of water. So we'll just put a 10 in there. That works out pretty well. And then our i at 100C is the same i naught times e to the minus 0.1 times rho of water at 100 times 10 centimeters. And keep in mind here, I made sure that since my mu over rho units are in centimeters squared per gram, I'm putting in x as 10 centimeters because whatever is up here in the exponential has to be unitless. That's a good check to see why are my calculations off by a factor of a billion? Just check the units in the exponential. Yeah. AUDIENCE: Wouldn't the value of 0.1 change for your cross-section [INAUDIBLE]?? MICHAEL SHORT: The value of 0.1 will change depending on the energy of the incoming photons. AUDIENCE: I mean, for the density. MICHAEL SHORT: Nope. AUDIENCE: Density changes, right? MICHAEL SHORT: Density changes. And that's what we account for here with this rho. So next up we have to look up the densities of water at 0 and 100 Celsius because I don't actually know them. So density of water at 0C-- oh, surprise, surprise-- it's we'll call it 1 gram per centimeter cubed. Now what is it at 100C? Too close to tell. Actually. That's interesting.
At 0 it's a little lower. AUDIENCE: No, I think that-- MICHAEL SHORT: I think that site's wrong. AUDIENCE: It's definitely lower at 100C. MICHAEL SHORT: Yeah. AUDIENCE: I don't think that second Google result is talking about 100 degrees C. MICHAEL SHORT: Let's look at some steam tables because this is a real place to look for them. So water is 100. Celsius atmosphere is-- OK, we'll just see that. I think they're down here. Oh, the chalk's not letting me use the touchpad. That's kind of cool. AUDIENCE: [INAUDIBLE] water at 100 degrees Celsius? MICHAEL SHORT: Oh, there we go. Yeah, I think I got it. So what's the density of saturated liquid? 0.958. 0.958 grams per centimeter cubed. Wow, I actually used the last corner today. Awesome. So you can actually, then, go ahead and calculate because-- I love how this worked out, right? 0.1 in 10, cancel. 0.1 in 10, cancel. So we'll just do e to the minus 1. And we get that this i is about 36.79% of the gammas will be transmitted through 10 centimeters of water. And here, e to the power of negative 0.958, we get 38.37% transmission. So actually, about 2% more of the gammas will be transmitted if the water is hot. It's a neat little calculation that you can do. Now we've looked at a really fine, or very small magnitude example. Folks came to me yesterday saying we want to design a new type of medical x-ray apron because we're worried that people carrying around all this lead, their backs are hurting and it's making surgeons' lives difficult if they're doing radioactive procedures. Can we do any better? Can they do any better? What do you guys think? Can you beat physics when it comes to mass attenuation? It's going to be awfully difficult. And the best weapon you have is these mass attenuation coefficients to look at their relative values. Now these, again, are in centimeters squared per gram. So this actually ranks aluminum to uranium in a sort of like per atom basis.
It has nothing to do with their higher densities, which only help things. This just tells you how effective each of these elements is relatively at blocking gammas of different energies. Then, to get the total amount of attenuation, you multiply by the density. Aluminum is pretty sparse. Lead and uranium are pretty dense. There's not too many ways around this problem. In fact, I wouldn't say that there's a way around this problem. The best thing you can do is look at the really only interesting features on these curves. Does anybody know why there's those jagged edges there? Well, let's take a look at some trends. For uranium, the jagged edge is at about 110 keV. For lead, it's like 80 keV. For tin, it's probably more like 50 keV. It's decreasing with z. Anyone remember what sort of magnitude? We've looked at things like this before. And if we're talking about photon electron interactions, what could be responsible for those sudden jagged edges? Well, we have talked before about all sorts of different decay methods, including those that can eject electrons from different energy shells. You're looking at the same electron energy shells. If you have a, let's say, photoelectric-capable photon entering a calcium atom-- and let's go look at calcium as an example. So I'll go to the tables of coefficients. I'm going to back up to elemental media. And I'm going to go to calcium for a simple example. Calcium has got this jagged edge right here. And if we draw a line down, it is precisely 4 keV. 4 keV, I bet, is going to be the k edge energy, the energy of the most inner bound calcium electron. To find out, we can go to the other NIST page that I linked you guys to, the NIST X-ray Transition Energy Table. And let's look at calcium. Wow, this really doesn't work with chalk on your fingers. And let's look at the k edge to check. Lo and behold. 4.05 keV. So what you're seeing here is the photoelectric peak k edge absorption.
What this says is at energies below 4 keV, you can't eject the innermost electron. You just don't have enough energy. As soon as you hit 4 keV, those inner shell electrons become accessible to you. So the cross-section suddenly jumps up because you have more electrons that you can eject photoelectrically. Beyond 100 keV or so, there's no more jagged edges because any photon above 100 keV can access pretty much any electron in any element, except maybe the super heavy ones, and we don't have data for them yet. So then you might ask, well, there's going to be an L edge for calcium. Where would that be? Probably off this chart. But you can look up where it would be. So we'll go to the NIST-- yeah. So you had the right idea, Dan. To the left, right? Yeah. Exactly. So now let's look up the L edge. So if I were to ask you-- wow, it really doesn't work with chalk. OK, that's better. So the L1 edge is down at 438 eV, which is indeed off the scale for this graph. This bottoms out at 1 keV. But if I were to ask you to draw the full mass attenuation coefficient for uranium, I'd expect to see a k edge, an L edge, an m edge, and an n edge corresponding to shell levels 1, 2, 3, and 4. And where do you get that data? You get it from here, from these NIST databases. Or you calculate it one at a time using that Rydberg formula, where that n final goes to infinity. So you can either calculate them if you don't know. Or if NIST doesn't have them in the table-- and I don't think they have the n edge-- wow, they go all the way up to fermium. Let's do uranium. What do they go to? Yep. They don't have the m or the n edge. But you do know how to calculate them, is with that formula. And so if we were to construct any old mass-- what is it, mass attenuation coefficient-- good, we have a little space left. It's going to look generally like this. There's going to be a photoelectric region. Let's say that's going to correspond to our photoelectric cross-section, which goes way up with lower energies.
There's going to be a pair production part, which goes up with higher energies. And there's going to be this kind of decreasing Compton cross-section. And if you kind of dance these curves, you end up with a shape like that, which is just like all the other mass attenuation coefficients that you see. So this is why they take the shape that they do if you add up the cross-sections for photoelectric effect, Compton scattering, and pair production, and you just kind of bounce on top of those, you end up with the mass attenuation coefficient. And the part that's not shown here is that this photoelectric effect will have some jagged edges whenever you hit an electron energy transition level. So it's five of. Want to see if you guys have any questions on photon interactions with matter. I know it's a lot to throw at you at once, but I'm going to be giving you guys lots to calculate, to try it out and to learn what's going on from a more hands on point of view. Yeah. AUDIENCE: So you can't, I guess, beat physics by increasing density of materials. Is there a way to slow down gammas? MICHAEL SHORT: Is there a way to slow down gammas? AUDIENCE: Besides relying on [INAUDIBLE]?? MICHAEL SHORT: Yeah. So the question is, can you slow down gammas without putting stuff in the way? Well, then, what are you doing? You've got a vacuum, right? So-- hmm. That's probably a deeper question than I think. So gammas, for example, do have indices of refraction and materials. Gammas are just photons. They're just really high energy. And they do have indices of refraction that are usually around one part per million, or like 1.000001 or so. So you can refract or bend gamma. Just not very well. So the question is, could you do something to stop the gammas that were maybe 10 feet away? The answer is physics. Not much you can do. 
But if they're, like, planetary levels away, it's possible that you could bend them away from an object, just like you can bend visible light away from something closer up because it's got a much higher index of refraction. Pretty crazy stuff. Did you ever think of gamma rays as having indices of refraction and behaving like regular light? It's just regular light. It's just really high energy light. So any other questions on the photon interactions with matter? Cool.
MIT 22.01 Introduction to Nuclear Engineering and Ionizing Radiation, Fall 2016. Lecture 1: Radiation History to the Present -- Understanding the Discovery of the Neutron. NARRATOR: The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MIKE SHORT: OK, guys. Welcome to the first filmed and hands-on installment of 22.01, Introduction to Ionizing Radiation. I'm Mike Short. I'm the department's undergrad coordinator. I'm also your 22.01 instructor. But I also want to introduce you to Amelia Trainer in the back, who is one of the three TAs for the course. She took it last year. Everything is still very fresh in your head, I bet. AUDIENCE: More or less. MIKE SHORT: Cool. So she'll be-- she and Ka-Yen Yau and Caitlin Fisher will be with us all throughout the term. So if there's something that you don't like my explanation for, you've got three people who just took the course, and struggled through my own explanations, and can say it in a different way. So let's start off by taking your knowledge of physics from the roughly 1800s education of the GIRs, the General Institute Requirements, up till 1932 when the neutron was discovered. And I would argue that this particle is what makes us nuclear engineers. It's the basis behind reactors. It's what differentiates us from the high energy physics folks and everything, because we've studied these and use them quite a lot. And so we want to retrace Chadwick's steps in discovering the neutron. And this is the only time you're ever going to see me have a bunch of words on a slide. It's not a presentation technique I like, but this paper is awesome in the clarity and expressiveness of him saying I ran this experiment and found something unknown.
I'll use basic conservation of energy and things you learned in 8.01 and 8.02 to prove that it has to be a neutron, that a neutron must exist. It's elegant and brilliant, and I want to walk you guys through it. Did any of you get a chance to read the Chadwick article yet? OK. I'll show you where that is, because hopefully by now you're all aware that we have a learning module site. It's where I'm going to post everything. It's where you're going to submit everything for the class. But I'll save going through the actual syllabus for the end of this class because I want to get into the physics. So let's bring your knowledge from classical mechanics and E&M up till about 1895 when Wilhelm Roentgen discovered X-rays and used them to, well, image something for the first time ever. Showing the contrast between bone and tissue, he was able to illuminate the bones in a hand. And then about a year later, the X-rays got a whole lot better. So by then, it was known that there were high-energy photons that had differential contrast between different types of material. A year later, after the nicer X-ray, J.J. Thomson conclusively proved that there is an electron by taking these cathode rays, as they were called at one point, and sending them through two charged plates. And he was able to show a slight deflection. So these cathode rays, as they pass through an electric field, change direction a little bit. And from the change in direction, you may not know the mass or the charge, but you can get the mass to charge ratio. Because if you guys remember from 8.02, from electricity and magnetism, as a charged particle passes through an electric field, it's deflected. And the amount of that deflection, or the curvature, is based on the mass to charge ratio. So Chadwick knew that electrons existed. This was a known thing, as well as alpha, beta, and gamma rays. So the electrons that came out of the nucleus were later renamed beta rays.
And at around the same time, Ernest Rutherford and Paul Villard, working in Canada and France, discovered that there are some heavy charge particles that have very little penetrating power, while Paul Villard discovered that there are some other radiations-- I think he called it produced by disintegration of nuclei-- that have very high penetrating power. And they named them alpha, beta, gamma in order of their penetrating power or their range. And so it was later figured out that these were also high-energy photons. So this is something to note is that gamma rays, x-rays, light, whatever, it's all photons. However, once this pops back up, gamma rays emanate from the nucleus. So when we refer to a gamma ray, we mean a photon that came out of a nuclear interaction or a nuclear disintegration, not an electron transition. So this is one-- this is what makes a gamma ray a gamma ray, is where it comes from. Otherwise it's a photon. It behaves just like any photon. So what did Chadwick see in 1932? This is the first one-page article that he sent out to Nature to say, I found something weird. So he found out that when you take alpha particles from polonium-- so let's say we had a source of polonium sending off alpha particles, which I haven't told you what they are yet. It emits a radiation of great penetrating power when it hits a foil of beryllium. And it was not known what these things were. So in goes the alphas to beryllium. Something happens, and something comes streaming out that couldn't be explained by current theories. It was also noticed that when hydrogen was placed in front of it, when a piece of hydrogen in the form of wax, which contains a lot of hydrogen, was put in front of it, the amount of ionization increased, as measured by what's called an ionization chamber and an oscillograph, nothing more than an almost-sealed chamber, a piston with some charge on it that would then deflect.
As it were to pick up positive or negative charges, it would move inwards or outwards and send an electrical signal to something like an oscilloscope. So this was a way that you could figure out how many ions were created by this highly penetrating radiation interacting in the ionization chamber. And they estimated that with the old theories, if this highly penetrating thing were a photon or a gamma ray, it would have to have an energy of 50 times 10 to the 6 electron volts, or 50 MeV. He said, OK. Well, if that's to be basically the experimental observation, say, a 50 MeV photon must be responsible for the ionizations that we saw. And so again, this is what the experiment looks like where you've got a polonium source naturally emitting alpha rays. They hit a foil, a beryllium. They produce what he did not know at the time was neutrons. We actually do know that beryllium produces neutrons pretty well. Beryllium is an interesting neutron multiplier. It undergoes what's called an n 2n reaction where one neutron comes in, two neutrons can come out, and it transmutes into something else. And we'll go over what this notation means, what these nuclear reactions mean. If you don't understand it, don't worry. The whole point of today is to open up questions that we'll spend the rest of the semester closing and answering. So again, if you're lost, don't worry. It's the first day of class, and it's your first day of Modern Physics. So not to worry. And this is an actual picture of what it looked like in the paper, a simple polonium source on a disk that was made by the natural decomposition of radium into polonium, a piece of beryllium, a vacuum chamber. Because it was already known that the alpha particles coming from polonium have an extremely short range. We're going to figure out why as part of this class. But without that vacuum there, the alpha particles wouldn't make it to the beryllium. So that much was known. 
What wasn't known was why are we getting so many ionizations. They attributed it to what they called a process similar to the Compton effect. To tell you what that is, in 1923, Arthur Compton figured out, among other things, Compton scattering, where a photon can strike an electron. The photon changes energy. The electron picks up some energy. They exit at very well-known angles, and they transfer very well-known amounts of energy. So this is how they knew how much energy the photon, if it were to exist, should have. And they said the process was analogous to Compton scattering because they said in this case, a proton would be ejected. It would take a lot of energy to eject a proton using a photon. And Chadwick saw this and said, well, if we ascribe this phenomenon to a Compton recoil, we should see about 10,000 ions. We actually saw about 30,000. So there was more ionization going on than can be explained by what's going on. In addition, those protons should have a range in air of about 1.3 millimeters, and they saw much more. So this is something simple-- theory and experiment don't match. There's got to be a different theoretical explanation if the experiment was correct. And so finally, what I love-- the last sentence in this-- the quantum hypothesis-- a quantum was the way they referred to a photon. It was called a quantum back then, a little packet of energy. Can only be upheld if we forget about conservation of energy and momentum. Now, I'll ask you guys from 8.01 to 8.02. So Sean, when can you throw out energy and momentum conservation? AUDIENCE: [INAUDIBLE] MIKE SHORT: That's pretty much right. You can't. A situation probably wasn't given to you where you can just throw away conservation of momentum and energy. In fact, nature gives us three quantities that we can measure and conserve-- mass, momentum, and energy. And throughout this course, if something is not conserved, you've probably got the math or the physics wrong. 
So this is something to remember throughout the course, in our derivations and in your problem sets: conserve mass, conserve momentum, conserve energy, just like what was taught in 8.01 and 8.02. So I'll call your answer correct. You don't remember a situation because, well, it didn't exist. And that's what Chadwick noted. He said theory and experiment don't work unless we throw out conservation of energy and momentum. Whether this was a kind of passive-aggressive thing to say-- well, this clearly can't exist-- or he was suggesting maybe it doesn't work, I don't know. I wasn't there. But later on, about a year later, he published a follow-on paper confirming the existence of a neutron by reconciling these differences in theory and experiment. So he restated what he saw before. This was the first paragraph of it. And again, it said that the radiation excited in beryllium-- whatever came out after the alpha particle hit-- had a penetrating power distinctly greater than that of any gamma radiation found from radioactive elements. Something is different. And I want us to take a sec to digest this. This is the part I actually want you guys to read, so take a minute and read through some of this stuff. And then we'll begin explaining his argument. Let me know when you guys are done reading. OK. I see some folks starting to look down. So let's take this apart and figure out what was Chadwick saying. He was saying that if a quantum was responsible for this energy, a photon, then we can write a nuclear reaction. I'll write it in the notation that we use now, which would be beryllium-9, the only naturally occurring isotope of beryllium, plus an alpha particle would lead to carbon-13 plus a gamma ray. And that gamma ray would take away the energy from this reaction. So now we can start to figure out, is energy conserved? Could this gamma ray actually exist? And if it does, does it account for the ionizations that Chadwick saw?
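We can actually run Chadwick's energy check on the photon hypothesis ourselves. This sketch uses modern atomic masses (assumed AME values, which differ from Chadwick's 1932 numbers only in the later digits) and a typical polonium alpha energy of about 5.3 MeV:

```python
# Energy check on the photon hypothesis: Be-9 + alpha -> C-13 + gamma.
# Atomic masses in amu are modern AME values, assumed for illustration.
U_TO_MEV = 931.494   # MeV per amu

m_be9 = 9.0121831
m_he4 = 4.0026032
m_c13 = 13.0033548

q = (m_be9 + m_he4 - m_c13) * U_TO_MEV  # mass converted to energy, MeV
t_alpha = 5.3                           # polonium alpha kinetic energy, MeV

e_gamma_max = q + t_alpha               # upper bound, ignoring C-13 recoil
print(f"q = {q:.2f} MeV; max photon energy ~ {e_gamma_max:.1f} MeV")
# roughly 16 MeV is all this reaction can supply -- nowhere near the
# ~50 MeV photon that the ionization measurements demanded
```

That mismatch, a factor of three or so between what the reaction can provide and what the hypothetical photon would need, is the quantitative heart of Chadwick's objection.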
So for each of these isotopes, we know a few different quantities. We know what's called its rest mass energy, which is this. It's rest mass times speed of light squared. This should look familiar to everyone. I've seen it on t-shirts all over campus. And it may take two or three weeks to really wrap your head around what Einstein's equation really means. It is that mass and energy are equivalent. You can express mass in terms of energy, and vice versa. And you will be doing so to conserve energy and mass in nuclear reactions, one of which is written right here. So if each of these things has a given rest mass energy, let's say a rest mass energy of beryllium and a rest mass energy of an alpha particle, and this alpha particle maybe had some kinetic energy-- it was moving pretty fast, so we'll give that the symbol t for kinetic energy, because that's what you're going to see in your notes and in the reading and everywhere. And then this carbon-13 nucleus has got to have a rest mass and a kinetic energy, and then this gamma ray, it's going to have some e gamma energy. Now, the question is, is the mass and energy conserved in this equation? What we're actually starting to write is what's called the q equation, or the universal mass and energy balance for any kind of nuclear reaction. So let's say we have a large initial nucleus and some small initial particle moving at it with some great speed. And after some reaction occurs, you have a small, final particle leaving and a different, large final particle leaving. They don't necessarily have to be the same. Let's give these particles designations 1, 2, 3, and 4. In the end, we should be able to write the difference in either total energy or total mass of the system as this value q. q is, let's say, the amount of energy that turns into mass, or vice versa. So let's say energy transfer.
And so if we start writing some mass conservation equation, we can say that the mass of nucleus 1 plus the mass of nucleus 2 should equal the mass of nucleus 3 plus the mass of nucleus 4 plus however much energy from nuclei 1 and 2 turned into energy into 3 and 4. We could also write the same thing for their kinetic energies. In this case, the finals are on the end. So I'm sorry. I should use t for kinetic energy. So what this is saying is that if some mass has turned into energy at the end, that energy had to come from somewhere. It had to come from the initial kinetic energy or conversion of mass to energy from this reaction. And so notice that now, you can actually express the masses of the nuclei in terms of their energy, of their initial and final kinetic energies. And this right here is what we're going to be spending the first two or three weeks deriving, using, and exploring in order to balance nuclear reactions and explain why they are the way they are. So let's make sure-- we'll keep this nuclear reaction up here, because Chadwick proposed a different one to explain what he saw. And some of the evidence for this was that he put some aluminum foil in between the beryllium where things were being liberated and the ionization chamber and oscilloscope, or oscillograph, as he liked to call it. And that way, by putting more and more pieces of foil in there, you can deduce what's called the range, or the distance that the radiation will travel before it stops by losing energy through a whole host of different processes that we'll be working through together. If this were to be ascribed to a proton, then it should have had a certain range in air by this curve b right here. Instead, he found this curve a where things moved about three times farther than could have been explained if that were a proton to be liberated by all this stuff. So he's saying, OK, something has got more penetrating power. 
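The bookkeeping being set up on the board condenses into one line, the standard form of the q equation, written here with the same numbering of the particles (1 and 2 in, 3 and 4 out) and the same symbols T for kinetic energy:

```latex
% q equation for the reaction 1 + 2 -> 3 + 4,
% with rest masses m_i and kinetic energies T_i:
q \;=\; (m_1 + m_2 - m_3 - m_4)\,c^2 \;=\; (T_3 + T_4) - (T_1 + T_2)
```

A positive q means some rest mass turned into kinetic energy (an exothermic reaction); a negative q means the incoming kinetic energy had to supply the extra rest mass.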
We know now that part of the reason for this is if there's a neutron, and there's no charge on it, then it's not going to interact with the electrons in matter. It won't even see them. Whereas protons or any other charged particles will see the electrons in matter and will interact with the electrons and the nuclei. So a little flash forward to say, we can explain this pretty simply with what we generally know. But this was the first time somebody had to come up with [INAUDIBLE] explanation, and it was quite hard. And so moving on, he can say, well, I know what protons should be ejected from paraffin. I know a formula to describe what quantum or photon energy had to create them. And then instead, he says-- this is where his major hypothesis is-- either we relinquish conservation of energy or adopt another hypothesis. And this was already put forth by Rutherford back in the '20s that there may be a neutron, but there wasn't any proof. And this is what provided the proof. He gave an alternate nuclear reaction if there were to be a neutron which had roughly the mass of a proton. Then let's write a second one down here. I'm going to erase this extra notation, and we'll write the competing nuclear reaction below. And he said that-- let's say we start with beryllium-9 plus an alpha particle could instead become carbon-12 and a neutron. So I'd like to ask you guys right now to work this out. Are both of these reactions balanced in terms of mass? Are there the same number of protons, neutrons, and electrons at either side? And just to let you know, an alpha particle is better known as a helium nucleus. So that means that there's two protons. There's four protons plus neutrons, and beryllium-9 has four protons and nine protons plus neutrons. And carbon-12 has six protons. A neutron has zero protons. So in each of these-- and I'll fill in the other ones here. So that's a 4, 4, 2, and 6.
Do we have the same number of protons and neutrons on both sides of both equations? I see a number of heads and one person saying yes, we do. So both of these reactions are balanced in terms of mass. The next thing to do is balance them in terms of energy. Now, they can both be balanced in terms of energy because you could attribute the change in the amount of mass from here to there and attribute that to the energy of the photon. That's when you'd have to have a photon of energy around 50 MeV. But if a proton-- I'm sorry-- a photon of energy around 50 MeV can't explain what we saw. Instead, if there is something like a neutron which also has its own rest mass and its own kinetic energy, and that neutron were highly penetrating, it could explain what Chadwick saw. And so the masses and things of these nuclei were fairly well known back then to, well, six significant digits based on some very careful experimentation. And all he did is say, all right. Let's take all of the energies in this reaction. Remember how I told you over here, you can write any nuclear reaction in terms of its kinetic energies, and the difference will give you the q value, which you can attribute to the conversion of mass to energy? That's what Chadwick did right here. He took the full reaction, saying here's the mass of beryllium, the mass of the alpha particle, the kinetic energy of the alpha particle. Note that he assumed that the kinetic energy of beryllium was zero. It was just sitting at room temperature. Does anyone know the approximate kinetic energy of atoms at room temperature? Order of magnitude, even? It's around 1/100 to 1/1,000 of an EV, or an electron volt. So when we're talking about beryllium, whose kinetic energy, we'll say, is around 0.01 EV, and the alpha particle whose kinetic energy was around 4 times 10 to the 6th EV, you can see why it's neglected. And you can do that too. You do not have to account for the initial kinetic energy of a nucleus at rest. 
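The balance check we just did by hand, plus the q value for Chadwick's neutron reaction, can be sketched in a few lines. Masses are modern AME atomic masses (assumed for illustration; Chadwick's 1932 values differed slightly in the later digits):

```python
U_TO_MEV = 931.494  # MeV per atomic mass unit -- don't round!

# Each nuclide is (Z, A, atomic mass in amu), modern AME values (assumed).
be9 = (4, 9, 9.0121831)
he4 = (2, 4, 4.0026032)
c12 = (6, 12, 12.0000000)
n   = (0, 1, 1.0086649)

def balanced(inputs, outputs):
    """A reaction balances if charge (Z) and nucleon number (A) match."""
    z_in = sum(p[0] for p in inputs)
    a_in = sum(p[1] for p in inputs)
    z_out = sum(p[0] for p in outputs)
    a_out = sum(p[1] for p in outputs)
    return (z_in, a_in) == (z_out, a_out)

def q_value(inputs, outputs):
    """q = (mass in - mass out) * c^2, expressed in MeV."""
    dm = sum(p[2] for p in inputs) - sum(p[2] for p in outputs)
    return dm * U_TO_MEV

q = q_value([be9, he4], [c12, n])
print(balanced([be9, he4], [c12, n]), f"q = {q:.2f} MeV")
# prints: True q = 5.70 MeV

# Aside: the target's thermal kinetic energy, kT ~ 0.025 eV at room
# temperature, sits roughly 8 orders of magnitude below the ~4 MeV alpha,
# which is why it's safely dropped from the q equation.
```

The positive q of about 5.7 MeV is the energy the Be-9(alpha,n)C-12 reaction can hand to the outgoing neutron and carbon nucleus.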
This is the first approximation that we tend to make to the q equation to just have fewer variables. And don't worry if you don't remember this now, because we have a whole lecture on the q equation. And so finally, he said, all right. We'll subtract all the masses. We're left with the kinetic energies and a little bit of excess rest mass. That's got to be-- this has got to exist. And so this inequality has to be satisfied, which indeed it was. Using this inequality, he said that the velocity of the neutron, if it carried all of that energy, had to be less than 3.9 times 10 to the 9 centimeters per second. Indeed it was lower-- not by that much, but it still satisfied this criterion. So things are checking out. That's pretty cool. He looked at another nuclear reaction that was known at the time. If you were to bombard boron-11 with helium, you end up with either nitrogen-15 and a photon or nitrogen-14 and a neutron, explaining another reaction that wasn't as well known before. So I'd like to write this nuclear reaction, because I want you all to get very familiar with writing nuclear reactions. Let's say boron-11 plus an alpha particle-- we'll say it has a mass of 4-- becomes nitrogen-14 and a neutron. We also have a shorthand of writing this nuclear reaction which I'll use on the board for speed's sake. Usually if you put the initial nucleus and the initial incoming radiation, comma, the exiting radiation, and the final nucleus, these two right here are equivalent. This is just a shorthand for nuclear reactions. This is what you'll tend to see because it's a lot easier to write this shorthand and parse it visually than it is to parse a whole nuclear reaction. So I just want you to know if you don't know what these are, just remember to stick the arrow here, stick plus signs in for the parentheses, and you've got the same thing. And using these, you should be able to very quickly determine is this reaction balanced.
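The boron reaction in that shorthand works the same way. A sketch of its q value, again with modern AME atomic masses (assumed for illustration):

```python
# The shorthand B-11(a,n)N-14 is the same reaction as
# B-11 + He-4 -> N-14 + n, just written compactly.
# q-value check with modern AME atomic masses (assumed), in amu:
U_TO_MEV = 931.494

m_b11 = 11.0093054
m_he4 = 4.0026032
m_n14 = 14.0030740
m_n   = 1.0086649

q = (m_b11 + m_he4 - m_n14 - m_n) * U_TO_MEV
print(f"q[B-11(a,n)N-14] = {q:.3f} MeV")  # small but positive, ~0.16 MeV
```

Notice how much q-value survives the rounding rule: the whole answer lives in the fourth decimal place of the masses, which is exactly why rounding to 11 or 4.00 destroys the calculation.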
What's actually going on? And there will be tabulated values of q values or energy amounts for these sorts of reactions in all sorts of tables I'll be showing you. And so finally, he figured out what the energy or the mass defect of the neutron should be. Does anyone know what a mass defect is? This is another core concept. Let's say you were to want to make an atom of helium. So you would have to take two protons whose masses are very well known, and two neutrons, and bring them together. So if you were to have-- let's say the initial mass would be 2 times the mass of a neutron plus 2 times the mass of a proton. And the final mass is just the mass of a helium nucleus. You'll actually find that the initial mass does not equal the final mass. In bringing nuclei or nucleons together, they actually release what's called their binding energy. It's what keeps the nucleus bound together. There's a little bit of mass turned into energy. And so you know how we like to say the whole is usually more than the sum of its parts? In nuclear engineering, the whole is a little less than the sum of its parts. It definitely is not equal. And Chadwick was proposing that a neutron should actually be made up of a proton and an electron in very close proximity. And since the masses of the proton and the electron were known, he said, well, if we bring the proton and electron very close together to have an overall neutral neutron particle, it should have roughly that mass defect or that difference between the energy of its constituent nucleons and the energy of the assembled nucleus. And you'll hear the words "mass defect" and "binding energy" used. Mass defect is in terms of mass-- you can give it in kilograms, or in atomic mass units, or AMU, or in, let's say, MeV per c squared. And you'll also hear of the binding energy just given in things like MeV. I want to show you where you can find these things now. I'll give you the single most useful website that you'll be referring to.
And I've posted it up on the Learning Module site, so now is a good time for me to show you that the site exists. And let me just clone my screen real quick. It's a wireless HDMI thing, so it takes a sec to pop back up. Great. Has anyone not been to the site yet? It's OK. You don't have to be embarrassed. OK. About half of you. I recommend tonight that you start looking through the site. One, make sure that you can log in, because you'll need to log in to see some of the copyrighted materials that I've posted, and two, because this is where you'll be posting all your homeworks, getting the assignments, checking due dates. Especially if I postpone a problem set, I'll put out an announcement and post it here. So this is the place to look for everything. And in addition, I've posted a lot of useful materials for you guys. They're all at the bottom, and the top one is the [INAUDIBLE] table of nuclides. Anyone seen this kind of thing before? We have posters of it down on all the first-floor classrooms in Building 24. This is our go-to chart. When you want to find out all of the nuclear half-life, radioactive decay and decay of energy, probability of certain direction, whatever, this is where you go. So let's take a look at, well, helium-4 since we've been talking about it, better known as an alpha particle. And you'll notice a few different quantities visible. The atomic mass, 4.0026032 AMU. And this is another tip I want to give you guys right now. Don't round these numbers. That's one of the major trip up points. If you say that's approximately 4 or 4.003, you probably won't get the p-set questions right, because 1/1,000 of an AMU can still represent almost an MeV of lost energy. So let's say you have a nuclear reaction that liberates a 1 mega electron volt or one MeV gamma ray, and you get the fourth digit wrong in one of your mass calculations. It's like that gamma ray didn't exist, and you won't get the answer right. So again, word to the wise-- do not round. 
You'll also see what's known as the excess mass or the binding energy. So this binding energy right here, if you were to take two protons and two neutrons and bring them together and look at the difference in masses from, let's say, the same old formula as before, you would get a difference of 28,295 keV, or about 28.295673 MeV. Again, don't round. Let's figure this out right here. So we have 28.295673 MeV. And there is a conversion factor that you should either memorize or write down. Either way, it's good. It's about 931.49 MeV per atomic mass unit. This is your mass energy equivalence that you'll be using over and over and over again. And again, don't round. Those last two digits are important. So by taking this energy and dividing by this conversion factor, you can figure out how many atomic mass units are lost in terms of actual mass when you assemble an alpha particle from its constituent pieces. And the rest of the stuff we will get into later. It's not really relevant to today's discussion, but it's definitely relevant to today's course. Cool. OK. And then on to-- one of the last things that he mentioned is some predictions to say, OK, let's say this neutron exists. It doesn't have charge. Most matter interacts with other matter by virtue of Coulombic or charge interactions. If the neutron has no charge, it shouldn't really see matter except for nuclei. This is exactly what he said, is an electrical field of a neutron will be extremely small except at small distances, because he proposed that a neutron is a proton plus an electron. So once you get to around the radius of the neutron, you might start to see some charge, but not before. And so most other matter, unless you have a head-on collision with a nucleus, neutrons won't see it. And that helps explain why the neutrons had such high penetrating power or high range-- because they just went streaming through most materials, invisible to the electrons. 
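That division the lecture just walked through can be sketched end to end. One subtlety: using atomic masses throughout (a hydrogen-1 atom standing in for each proton) lets the electron masses cancel on both sides. Mass values are assumed modern ones, matching the chart's digits:

```python
# Binding energy of helium-4 from its mass defect, using ATOMIC masses
# (assumed modern values) so the electrons cancel on both sides.
U_TO_MEV = 931.494   # MeV per amu -- don't round those last digits

m_h1  = 1.0078250    # amu, hydrogen-1 atom (proton + electron)
m_n   = 1.0086649    # amu, neutron
m_he4 = 4.0026032    # amu, helium-4 atom

delta_m = 2*m_h1 + 2*m_n - m_he4   # mass defect, amu
be = delta_m * U_TO_MEV            # binding energy, MeV
print(f"mass defect = {delta_m:.7f} amu, binding energy = {be:.3f} MeV")
# close to the tabulated ~28.296 MeV for helium-4
```

Dividing instead of multiplying runs the conversion the other way: 28.295673 MeV / 931.494 MeV per amu gives back the roughly 0.03 amu of mass that disappears when the alpha particle is assembled.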
So very forward thinking, and it turned out to be very correct. And then finally, as a kind of mic-drop conclusion, he came up with the final concluding statements. OK, we know there's a neutron. We know its mass. The actual mass of the neutron is about 1.0087 AMU, so within 0.1% of Chadwick's calculations and predictions based on 1930s equipment, which is strikingly awesome. And there you have it. That's the discovery of the neutron using most of the concepts that we're going to be teaching you here in 22.01. So right now, I'd say your scientific knowledge, if we don't count what you read on the news, is roughly around 1850 when all the E&M stuff was being figured out. We are going to bring you screaming into the 1930s. And by about month 1, we'll hit the present day when we can start to talk about the super heavy elements like the ones that were discovered last year. I think there were even some this year. But we'll look at the Physics Today article from last year to get to the point of explaining why super heavy elements might be stable. Why are we even looking for them? Where do cosmic rays come from? How do we know that they're cosmic rays and not something else? How can you tell a reactor turns on anywhere in the world by measuring different bits of radiation, which is an active defense project that folks are pursuing right now? Lots of really fun questions. And speaking of questions, do you guys have any questions about what we've explained here, how we've retraced Chadwick's discovery of the neutron from basic nuclear science principles? So who here has seen these nuclear reactions before? Cool. This is something that I hope folks would cover in high school. But with a general trend of watering down science education, I didn't want to make any assumptions. I'm glad to hear this was covered. Was this covered at MIT? Are you guys relying on high school knowledge? OK, good. Not good. Good that I know where you are.
Not good that MIT doesn't teach anything nuclear until year two. That's OK. You guys, along with the Physics Department, will get at least a 20th-century knowledge of physics and 21st by the end of month 1. So I want to come back to the Stellar site, and specifically the syllabus. I've taken a lot of care to write a very detailed syllabus of what we're going to do, what I expect of you guys, what you can expect of me, and what we'll be doing every single day. So if you want to know what we're going to be doing, if you have a class that you miss, and you want to know what notes you're going to miss, it's all written up here. I want to get right into assignments, because everyone wants to know what am I responsible for. Well, not too much-- nine problem sets, three quizzes. The final exam is just a quiz. It's only worth 24% of your grade instead of 20 to get the math to work out, because I eliminated one problem set to avoid running afoul of MIT regulations by not assigning things in the last week of class. But there are three quizzes, so the final exam is just another quiz. It's not a super high-stress, crazy thing, because I don't see a point in doing that. You can make your assignments however you want. I don't care as long as I can read them. But I do ask that in the end, you submit a PDF file on the Stellar site. And the reason for this is the first courses I ever taught at MIT as a professor were the graduate modules, 22.13, Intro to Nuclear Systems, and 22.14, Intro to Nuclear Materials. I accepted paper submissions. And by week 3, I had to microwave them. Because three or four times, I definitely saw blood. And there were also some weird stains that I didn't want to explain. So I got into the habit of unstapling, microwaving, and re-stapling the p-sets before grading. So in the digital world, it's sterile. I'm not a germophobe. I just don't like blood in my house, especially if it's not mine. So I ask that you guys submit PDFs on the Stellar site.
They're due at 5:00 PM to make sure that you're done and you can go home and relax or work on something else. I used to have some things due at midnight, and every submission came in at 11:59 PM. I'm not going to do that anymore. Do make sure to submit 15 minutes early. So if your computer or the Stellar site has trouble, send me an email or a text or whatever saying, I'm trying to submit, and it's not working. Here is a backup, or I'm leaving something under your door. And if you want my cell phone number, that's also my office number. It's also my only number. It's in the MIT directory. So if there is some emergency you need to make me aware of, please do communicate. I'd rather you tell me than be worried about not telling me and then find out later. So are we all clear on that? As far as what the assignments are, each assignment is going to be about 50% basic calculations, working out things like these to make sure you've mastered the material, that you understand writing nuclear reactions, you can balance a q equation, you can tell me about what your cancer risk would be from a certain dose of material. So this is like when you go out in the real world, the sort of calculations everyone would expect you as a nuclear engineer to be able to do. And then 50% of each problem set is either going to be analytical questions of considerable difficulty. This is MIT. We're not just here to give you the basics so that you can regurgitate a textbook onto the first person who asks. We're here to make sure that you can go farther. Because you guys are the future of this very small and diminishing field at the moment if you look at the nuclear power in the US. I would say growing in terms of the world, but not in terms of the US. And you guys are going to be in charge of leading this field and determining where it's going to go. So you've got to be up at the cutting edge, and we're going to take you to the edge of your abilities.
My favorite kind of problem is to give one sentence for the question, five or 10 pages for the answer if you don't get the trick. Now, that's OK. It's perfectly fine not to figure out the answer in the end. In fact, I'll usually give you the answer for the analytical questions because I want to see your approach. I'm not interested in you nailing the answer. I'm interested in seeing how you think. And copious partial credit will be given for the way you think. So if you have a missing step, and you say, I don't know the step, I'm going to assume variable a and keep going, you will get credit for the subsequent steps. I want to see how you think from start to finish and how you cover for holes that you can't get through. So everyone clear on that? Partial credit, yes. Use it to your fullest ability. The other half of the problem sets will have take-home laboratory assignments. It's not just enough for me to tell you about nuclear engineering. You have to see it for yourself, and you have to feel it for yourself. And once in a while, you'll get a mild electric shock by yourself if things go wrong, but that's OK. It happens to the best of us. I got zapped by our-- you guys have all made Geiger counters, right? Has anyone not made a Geiger counter yet? Oh, OK. It sounds like we need to run another workshop. Well, our Geiger counters rely on a neat little boost converter power supply that takes 9 volts and steps it up to 400 volts via some switching things. That means you have 400 volts on a big metal tube. And if you're working on your circuit and you happen to brush against it, you get zero current, so it doesn't hurt you in the medical sense, but it hurts. I also have a dance I call the 60 Hertz shuffle. It's the high speed shaking that you do when you're connected to 60 Hertz somewhere from the wall outlet. None of you guys will be exposed to this, but I've done it enough times that I have a name for the dance. If you get 400 volts, you'll just kind of scream. 
And I don't care how manly men you guys are. Everyone makes the same pitch scream with 400 volts. We're all equal in the eyes of electricity. For these laboratory questions, I'm going to ask you to both complete an assignment where you'll, for example, measure the half-life of uranium, measure the radioactivity of one banana, confirm or refute the linear no threshold hypothesis of the dose. And the experiment itself won't take that long, but I want you to write it up in proper documented format using these sections. So I'm going to be teaching you guys how to write scientific articles. So actually, this is kind of a good time to ask you guys. How would you define the word "science?" Luke, what do you think? AUDIENCE: It's a process of getting knowledge by fitting theories to empirical evidence. MIKE SHORT: Gaining knowledge by fitting theories to empirical evidence. OK. So I hear knowledge gaining by some sort of well-justified and accepted means, right? Monica, what do you think? AUDIENCE: Science is the study of the natural world through patterns and mathematics, I suppose. MIKE SHORT: Cool, yeah. Let's say the studying, modeling, and abstraction of the natural world into ways we can understand. Jared, what would you say? AUDIENCE: Which one? MIKE SHORT: Oh, there's two Jareds. I want to hear both, and then I'll-- yeah. AUDIENCE: Science is-- I'd probably go with it's the same thing Luke said, gaining knowledge through experimentation and trial. MIKE SHORT: Cool. And other Jared? AUDIENCE: I think what Luke said about fitting theories to empirical evidence and testing them that way. MIKE SHORT: OK, cool. I like these. And these are the generally accepted theories and descriptions I've heard of science. And I want to pose a question to you guys. If a tree falls in the woods and nobody is around to hear it, can it win the Nobel Prize? It's kind of an expression. 
So if somebody discovered the neutron, and they wrote up their findings, and proved that it exists, and they put it in their desk, and the house burned down, and the person died, was the neutron discovered? What does discovered mean? So to me, science is equal parts everything you guys said and communication. If you discover something and you don't tell anyone, the information technically doesn't exist. It dies with you. And you don't want that to happen. So I want to make sure that you guys both understand the science and understand the importance of communicating it effectively to people. Because that's the other thing you're going to be doing as leaders in this field is explaining things. You better believe when Fukushima happened-- I was a postdoc at the time. I was not a person, I guess, in the academic sense. People here treated me very well, but I was also very aware that I was not one of the greats. Still I am not old enough yet. I was getting calls all day, all night from news agencies saying, you're at MIT. I saw your name on the directory. Do a radio interview and tell us all if we're going to die. And you can only imagine what the professors on this hall were dealing with. So folks were traveling around, answering things left and right. I ended up doing some weird podcast on a Brazilian news channel that I don't think ever got aired and stopped doing it after that. You as undergrads even might be called if somebody wants to know something. And so it's best that you not only know the material, but you can convey it effectively, briefly, and in a way that your audience can understand. The audience for these articles is any undergraduate in any engineering program anywhere. That's your lowest common denominator-- not to say that that's a bad thing, but it is the audience that you want to aim your writing at. So what I want you to be able to do is say what you did, why you did it, and what it means. 
In communication terms, this means a less-than-100-word abstract, a very brief synopsis of what you did and why it's important. That's the teaser. This is the trailer to make somebody read what you actually did and see why they care. Articles of this type are the main method and currency through which scientists communicate. An introduction and background which says why are we studying this problem. And the answer is not because I told you to, and your grade depends on it. I want you to think about why this problem is important, and put it into context, and give any of the scientific background to understand what's going to come next, like the experimental section. Describe what you did in nitty-gritty scientific detail. This is usually the easy part. I put this gamma ray in this bucket, and it made this color, and I made this noise, whatever. A results section where you show all of your data and a discussion section-- notice that these are different. You want to separate your actual results from your interpretation of your results, because someone else may have a very different interpretation of results-- for example, Chadwick. Somebody found that beryllium bombarded by alpha particles emitted radiation of great penetrating power. That's the result. The interpretation or discussion said it's probably a Compton-like effect from a photon. By separating your results and your discussion, you allow people to mentally say, OK, I get your results. I believe that you found these numbers. I have a different explanation. And you all may have different explanations for what you see in your own labs, because you're also probably going to get different results. And then finally, a conclusion where you quickly re-summarize your major contributions. Your abstract is the teaser. Your conclusion is like your re-abstract with the context that people now believe-- or don't-- what you did. And think about how you guys read articles.
So who here has read scientific articles before? More than half of you. Let's see. Alex, what do you read first? AUDIENCE: If it's a journal, probably the abstract. But given that I'm mostly interested in the topic, I tend to go to the conclusion section. MIKE SHORT: That's right. OK. I'm glad you said that. That was my next question. You read the abstract. The next thing you read is the conclusion. The next thing you usually read is you skim through the results and the figures and see if it's worth looking at. Then if you're like, OK, this is worth my time, then you slog through and read everything to make sure you understand it all. So when you're writing these articles, think about who's reading them and how they read them. Because if you guys don't tend to read an article from top to bottom, neither will your audience. And that's true. Most scientists skim things because we have a lot to read. So that's OK. And I am very interested in you guys completely documenting your experiment. Pictures are also awesome to use. Accuracy of results and analysis-- so did you round when you weren't supposed to? Did you have a clear numerical typo that you can't explain? And the readability of the report-- I want you to spend time making this readable. I expect that this part of the assignment will take roughly five hours, whereas the basic questions will take roughly three to four hours depending on how well you're doing. And that leaves three hours of class time and a couple hours for whatever else happens in life, let's call it. Since you've never written these before-- wait a minute. I shouldn't say that. Who's written these kinds of things before? Has anyone here written a scientific article? Two. OK, three. Cool. So most of you haven't, and that's where I assumed you'd all be. We have a whole lab dedicated to scientific communication called the Comm Lab, run by someone who happens to be my wife, four doors down. We live and work next to each other. It's pretty cool.
And you get an automatic three-day extension on the lab assignment if you go to the Comm Lab. There are three reasons for this. One, I want you to get better grades, so I want you to learn how to communicate. Two, I don't want to spend time trying to figure out what you were trying to say. So better articles mean less grading time for me. And three-- OK, let's just say it's two reasons. That's enough. And for everything except for the quizzes, it's perfectly OK to work together as long as you attribute who did what, you write your own articles, don't Xerox anything, and say who took the data. So if the whole class wants to get together and take one set of data and work from that, fine. If you all want to do the labs yourselves, which I highly recommend, fine. But I'm not going to dictate how you do the lab assignment, as long as you say what you did. And I want all of you, if you haven't yet, to head to integrity.mit.edu to see our official policies on what is considered plagiarism, what is considered working together, what's considered academic honesty. I will assume, because it's on the syllabus and I'm telling you now, that you've all read this, and that there will be no cheating. It's just not something that's part of my job description, and I don't want to deal with it, which means I won't deal with it, which means the consequences will be severe. So I don't think I'll have to worry about that. And then for the late policy, it's just 10% of the value of the assignment for each calendar day, not each business day. So if you're running really late and you haven't started an assignment the day it's due, better to take the 10% penalty and do really well than hand in nothing on time. So keep in mind how you can maximize the points in this course. I'd rather you hand in something good late than terrible on time. So if you really need that extra day if MIT gets crazy, take it. 10% of a problem set is 0.4 points on your grade. It's not that big a deal.
Then as far as the syllabus, I want to show you very quickly. We've got when things are due. I'm going to change these dates to basically just shift them all forward by one day to account for the new Tuesday, Thursday classes. So I've got when the problem sets are due. And Friday is recitation activities. If there aren't too many questions on a particular Friday, I have a lot of fun stuff in store for you. For example, tomorrow we're going to be talking about radiation-utilizing technology, including plasma sputter coaters, one of which we have set up in my lab, and I'd like to show you. Because it's a way that you can coat materials in other materials, and you have to generate this beautiful, glowing purple plasma in order to do so. So you ionize nitrogen. You induce sputtering, which is a radiation damage process which we'll be going over, to coat things in other things. Once in a while, I'll have to shift a class into recitation because I'll be at Westinghouse or in Russia. And I think that's only twice during the whole year. So you won't miss any classes. We'll just use the recitation time. And then other times, we'll be measuring the radioactivity of banana ashes. Or once we talk about electron interactions, we're going to go use a scanning electron microscope. The carrot at the end of the stick to make sure that you guys do well-- the top two people performing on the quizzes get to pilot and choose the samples for the SEM and elemental analysis and the focused ion beam demonstration. So you guys get to pilot something that's, let's say, as complicated as a space shuttle but deals with things much, much smaller. So I'll put you in the driver's seat in the machines of our lab, and you get to bring whatever you want to analyze and find the elemental analysis of and use the world's smallest machining instrument that can cut 5-nanometer slices of things using processes that we're going to discuss in this class.
So the better you do, the more you get to use it. And at the end, we'll have a nice debate. I call it arguing with Greenpeace, where we'll talk about-- now that you'll know all of the nuclear science and engineering and can speak scientifically about topics, we're going to go after a lot of societal misconceptions. Do cell phones cause cancer? Does living near a nuclear power plant cause cancer? Does arguing with Greenpeace cause cancer, whatever it's going to be? So I want to make sure that you're well-equipped and confident enough to go out there and hold your own in a vigorous debate with an angry, emotional environmentalist. You guys will be calm, peaceful, and informed environmentalists. After all, that's why a lot of us are here, is we want nuclear energy to happen because we care about the environment. There's other people that don't want nuclear energy to happen because they care about the environment. To each their own, I guess, motivations. But I want to make sure you're well equipped to also tackle things like is food irradiation bad. There's all sorts of websites with dancing babies and weird Geocities-like graphics saying food irradiation is evil. You won't find a lot of scientific articles saying that's the case. And to see if you'll put your, let's say, cancer risk where your mouth is, the last day of class, we'll have an irradiated fruit party where I'll be buying only the kinds of fruit that can be imported into the US because food irradiation was done. Otherwise, the USDA would not let them into the country. And this is mostly things like mangosteens from Thailand, pineapple from Costa Rica. And interestingly enough, Hawaii is considered a different country agriculturally. It is so far away that they have different agricultural pests. And without irradiation, we couldn't import some of the produce from Hawaii, because it could decimate some of the crops in the continental US. Pretty crazy, huh? Yeah.
It's the-- what, the 50th state, but agriculturally, a different country. So it's about 5 till, so I'm going to stop here. And we will start with radiation-utilizing technology on Friday, tomorrow, downstairs in Room 24-121. And then we'll move over to my lab at 2 o'clock to see the plasma sputter coater.
MIT 22.01 Introduction to Nuclear Engineering and Ionizing Radiation, Fall 2016. Lecture 6: The Q-Equation, The Most General Nuclear Reaction. The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, guys. Welcome back. As you can see, we're not using the screen today. This is going to be one of those fill-the-board lectures. But I am going to work you through every single step. We're going to go through the Q equation and derive its most general form together; for the rest of this class, we'll be using simplified or reduced forms of it to explain a lot of the ion or electron-nuclear interactions as well as things like neutron scattering and all sorts of other stuff. We'll do one example. For any of you that have looked at neutrons slowing down before, how much energy can a neutron lose when it hits something? We'll be answering that question today in a generally mathematical form. And then a few lectures later, we'll be going over some of the more intuitive aspects to help explain it for everybody. So I'm going to show you the same situation that we've been describing sort of intuitively so far, but we're going to hit it mathematically today. Let's say there's a small nucleus, 1, that's firing at a large nucleus, 2, and afterwards, a different small nucleus, 3, and a different large nucleus, 4, come flying out. And so we're going to keep this as general as possible. So let's say if we draw angles from their original paths, particle 3 went off at angle theta and particle 4 went off at angle phi. So hopefully those are distinguishable enough. And if we were to write the overall Q equation showing the balance between mass and energy here, we would simply have the mass 1 c squared plus kinetic energy of 1.
So in this case, we're just saying that the mass and the kinetic energy of all particles on the left side and the right side has to be conserved. So let's add mass 2 c squared plus T2 has to equal mass 3 c squared plus T3 plus mass 4 c squared plus T4, where, just for symbols, M refers to a mass, T refers to a kinetic energy. And so this conservation of total mass or total energy has got to be conserved. And we'll use it again. Because, again, we can describe the Q, or the energy consumed or released by the reaction, as either the change in masses or the change in energies. So in this case, we can write that Q-- let's just group all of the c squareds together for easier writing. If we take the initial masses minus the final masses, then we get a picture of how much mass was converted to energy, therefore, how much energy is available for the reaction, or Q, to turn it into kinetic energy. So in this case, we can put the kinetic energy of the final products minus the kinetic energies-- I'm going to keep with 1-- of the initial products. And so we'll use this a little later on. One simplification that we'll make now is we'll assume that if we're firing particles at anything, that anything starts off at rest. So we can start by saying there's no T2. That's just a simplification that we'll make right now. And so then the question is, what quantities of this situation are we likely to know, which ones are we not likely to know, and which ones are left to relate together? So let's just go through one by one. Would we typically know the mass of the initial particle coming in? We probably know what we're shooting at stuff, right? So we'd know M1. What about T1, the initial kinetic energy? Sure. Let's say we have a reactor whose energy we know, or an accelerator, or something that we're controlling the energy, like in problem set one. We'd probably know that. We'd probably know what things we're firing at. 
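Since Q is fixed entirely by the masses, it's the kind of thing you can compute before ever running the experiment. Here is a minimal Python sketch for one stand-in reaction, D + T going to a neutron plus helium-4 (the atomic masses in u below are approximate reference values, not from this lecture):

```python
# Q value from the mass balance: Q = (M1 + M2 - M3 - M4) * c^2
# Example reaction: D + T -> n + He-4 (masses in atomic mass units, approximate)
AMU_TO_MEV = 931.494  # c^2 conversion factor, MeV per atomic mass unit

m1, m2 = 2.014102, 3.016049   # deuteron (projectile), triton (target)
m3, m4 = 1.008665, 4.002602   # neutron and helium-4 (the outgoing particles)

# Positive Q: mass was converted into kinetic energy, so the reaction is exothermic
Q = (m1 + m2 - m3 - m4) * AMU_TO_MEV
print(f"Q = {Q:.2f} MeV")  # D-T fusion releases about 17.6 MeV
```

The same four-mass bookkeeping works for any reaction in this form; only the masses change.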
And we would probably know what the masses of the final products are, because you guys have been doing nuclear reaction analysis and calculating binding energies and everything for the last couple of weeks. But we might not know the kinetic energies of what's coming out. Let's say we didn't actually even know the masses yet. We'd have to figure out a way to get both the kinetic energies. And what about these angles here? This is the new variable that we're introducing, is the kinetic energy of particles 3 and 4 is going to depend on what angles they fire off at. Let me give you a limiting case. Let's say theta was 0. What would that mean, physically? What would be happening to particles 1, 2, 3, and 4 if theta and phi were 0, if they kept on moving in the exact same path? Yeah? AUDIENCE: Is it a fusion event, or [INAUDIBLE] PROFESSOR: We don't know. Well, let's see. Yeah. If it was a fusion event-- let's say there was one here and one standing still-- then the whole center of mass of the system would have to move that way. So one example could be a fusion event. A second example could be absolutely nothing. It's perfectly valid to say if, let's say, particle 1 scatters off particle 2 at an angle of 0 degrees, that's what's known as forward scattering, which is to say that theta equals 0. So this is another quantity that we might not know. We might not know what theta and phi are. And the problem here is we've got, like, three or four unknowns and only one equation to relate them. So what other-- yeah? Question? AUDIENCE: For forward scattering, when you say theta equals 0, do you mean they just sort of move together forward, kind of like an inelastic collision, and they just keep moving in the same direction? PROFESSOR: An inelastic collision would be one. And since we haven't gone through what inelastic means, that would mean some sort of collision where-- let's see. How would I explain this? 
I'd say an inelastic collision would be like if particles 1 and 2 were to fuse, like a capture event, for example, or a capture and then a re-emission, let's say, of a neutron. Yeah. If it was re-emitted in the forward direction, then that could be an inelastic scattering event-- AUDIENCE: Oh, OK. PROFESSOR: --but still in the same direction. Or an elastic scatter at an angle of theta equals 0 could be like there wasn't any scattering at all. Because really, in the end-- let's say if you have a neutron firing at a nucleus-- it depends on what angle it bounces off of, in the billiard ball sense. If it bounces off at an angle of 0, that means it missed. We would consider that theta equals 0. But the point here is that we now have more quantities unknown than we have equations to define them. So how else can we start relating some of these quantities? What else can we conserve, since we've already got mass and energy? What's that third quantity I always yell out? AUDIENCE: Momentum. PROFESSOR: Momentum. Right. So let's start writing some of the momentum conservation equations so we can try and nail these things down. So I'm going to write each step one at a time. We'll start by conserving momentum. That's what we'll do right here. And we can write the x and the y equations separately. So what's the momentum of particle 1? How do we express that? AUDIENCE: Mass times velocity. PROFESSOR: Yep. So it would be like M1 V1. So we'll have a little box right here for momentum. We could say mass times velocity-- or, how do we express that in terms of the variables that we have here, like we did last week? What about in terms of kinetic energies? Well, another way of writing mass times velocity would be root 2MT. Because in this case, we would have root 2 times M1 times 1/2 M1 V1 squared. The 2s here cancel. Let's see. You have M1 squared. You have a V squared. And the square root of M squared V squared is just MV.
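That substitution is worth a quick numeric sanity check: for any nonrelativistic particle, m times v and the square root of 2mT are the same number whenever T = mv^2/2. A throwaway sketch (the masses and speeds are arbitrary made-up values):

```python
import math

# Verify that momentum written as sqrt(2*M*T) matches the familiar M*v,
# where T = 0.5*M*v**2 is the nonrelativistic kinetic energy.
checks = []
for m, v in [(1.0, 0.3), (12.0, 2.5), (4.0, 7.0)]:
    T = 0.5 * m * v**2
    checks.append(abs(m * v - math.sqrt(2.0 * m * T)))

max_error = max(checks)
print(f"largest mismatch: {max_error:.1e}")  # should be at round-off level
```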
So this is an equivalent way of writing the momentum in the variables that we're working in already. And so since that doesn't introduce another variable like velocity-- which we do know, but it's kind of confusing to add more symbols-- let's keep as few as possible. So what's the x momentum of particle 1? Just what I've got up there. Root 2 M1 T1. What's the x momentum in this frame of particle 2? 0. We're assuming that it's at rest. And now, what's the x momentum of particle 3? AUDIENCE: [INAUDIBLE] PROFESSOR: I heard a couple of things. Can you say them louder? AUDIENCE: Square root of 2 M3 T3? PROFESSOR: Yep. Root 2 M3 T3. But in this case, if we're defining, let's say, our x-axis here, it also matters what this angle is. So you've got to multiply by cosine theta in this case. And that's the x momentum of particle 3. And we've also got to account for particle 4. So we'll add root 2 M4 T4 cosine phi. Now let's do the same thing for the y momentum. What's the y momentum of particles 1 and 2? 0. They're not moving in the y direction to start. And how about particle 3? I hear whispers, but nothing vocalized. AUDIENCE: Sine? PROFESSOR: Yep. Same thing, but root 2 M3 T3 sine theta. And the particle 4 term-- I almost wrote the wrong sign there-- has got to be a minus. If the momentum of the initial particle system in the y direction is 0, so must the final momentum in the y direction. So these two momenta have to be equal and opposite. And that's minus root 2 M4 T4 sine phi. So now we actually have sets of equations that relate all of our unknown quantities. We have the mass conservation equation, we have the Q equation, we have the x momentum, and we have the y momentum. And from this point on, it's a matter of algebra to express some of these quantities in terms of some of the others. So let's get started with that. Because angles are kind of messy, and theta should uniquely define phi, let's try and get things in terms of just one angle.
So I'm going to start by separating the thetas and the phis on either side of the equals sign, so that hopefully later on we can eliminate one in a system of equations. So all I'm going to do is I'm going to subtract or add the theta terms to the other side of the equation. So let's say we'll separate angles. So we'll have root 2 M1 T1 minus root 2 M3 T3 cosine theta. I'll be depending on you guys to check for sign errors here because those will be messy. I do have notes in case, but I'm hoping I won't have to look at them. And all we have left on this side is root 2 M4 T4 cosine phi. So that's the x momentum equation. Let's do the same thing with the y momentum equation. So all we'll do is take the theta term and stick it to the left of the equals sign. So that would give us minus root 2 M3 T3 sine theta equals minus root 2 M4 T4 sine phi. Right away, we can see that the minus signs can cancel out, just for simplicity. And what else is common to these that we can get rid of? Yep? AUDIENCE: Square root of 2. PROFESSOR: Everything here has a square root of 2. So we'll just get rid of all of the square root of 2s to simplify as much as possible. And now we look a little stuck. But now is the time to remember those trigonometric identities back from high school that I don't think-- has anyone used these since? In 18.01 or 18.02, anyone used a trig identity? A little bit? OK. I would hope so. But I don't know what other people are teaching nowadays. At least this way I'll make sure you remember the high school stuff. We're going to rely on the fact that we already have got a cosine and a sine. We have a set of simultaneous equations. If we can add them together and destroy the angles somehow, that will make things a lot easier. So for the thetas, we have a cosine, a sine, and an unangled term that looks kind of messy. Here we have a cosine and a sine. Anyone have any idea where we could go next to destroy one of these angles?
Anyone remember any handy cosine or sine trig identities? AUDIENCE: If you squared both terms, you could get square root of cosine squared, square root of-- sorry. You get cosine squared and sine squared and then you factor out the square root of M4T4 and then cosine squared plus sine squared equals 1. PROFESSOR: Exactly. So we can rely on the fact that if we can square both sides of both equations and add them up, we would have a cosine squared of phi plus a sine squared of phi, which also equals 1. So we can destroy this phi angle and make things a lot simpler. So we'll start by squaring both sides. Let's start with the x momentum equation. So if we have-- let's see-- root M1T1. So we're going to take that stuff squared. And that squared is not too hard. Neither are those. So we'll have root M1 T1 squared, which just gives us M1 T1, minus root M1 T1 times root M3 T3 cosine theta. Let's just lump those terms together as root M1 M3 T1 T3 cosine theta. Also, anyone, raise your hand or let me know if I'm going too fast. I'm trying to hit every single step. But in case I skip one, please slow me down. That's what class is for. OK. And then we've got another one. Let's just stick a 2 in front of there and plus that term squared. So we'll have M3 T3. Let's see. Yeah. Looks like cosine squared of theta. Yep. Equals-- this one's easier-- M4 T4 cosine squared of phi. OK. Now we'll do the same thing for the y momentum equation. Much easier because there's no addition anywhere. And we have M3 T3 sine squared theta-- over here-- equals M4 T4 sine squared phi. So this is quite nice. Now if we add these equations together, we get rid of all of the cosine and sine squared terms. So let's add them up. Let's see. We'll add the two equations. Add equations. And let's try and group all the terms together. So we have M1 T1 minus 2 root M1 M3 T1 T3-- it's getting hard to write over the lip of the chalkboard here-- cosine theta.
And we have M3 T3 cosine squared theta plus M3 T3 sine squared theta. Equals M4 T4 cosine squared theta plus sine squared theta-- or phi. I'm sorry. Cosine squared phi plus sine squared phi. OK. Hopefully that's as low as I'll have to write. And like we saw before, cosine squared plus sine squared equals 1. So that goes away. That goes away. And let's keep going over on this side of the board. I told you this would be a fill-the-board day. Let's see if we actually get all six instead of just the four visible. But I think we'll finish this derivation in four boards. So let's write what we've got left. Let's see. Remaining. So we have M1 T1 minus 2 root M1 M3 T1 T3-- so much easier to write standing up-- cosine theta equals M4 T4. Quite a bit simpler. AUDIENCE: [INAUDIBLE] PROFESSOR: Did I miss a term? AUDIENCE: The M3 T3. PROFESSOR: Ah. Thank you. You're right. You're right. And we had a plus M3 T3. Yeah, that would be important. Thank you. Equals M4 T4. So we now have a relation between the masses, the energies, and one angle, which is getting a lot better. We still have one more variable than we can deal with. So let's say if we're-- let's see. Which of these variables do you think we can eliminate using any of the equations you see, let's go with, on that top board over there? Well, what other quantities are we likely to know about this nuclear reaction? Let's bring this back down. Are we likely to know the Q value? AUDIENCE: Yeah. PROFESSOR: Probably. Because like you guys have been doing on problem sets one and two, if you know, let's say, the binding energies, or the masses, or the excess masses, or the kinetic energies of all your products, any combination of those can get you the Q value of that reaction. And if you just look up those reactions like, let's say, radioactive decay reactions, on the table of nuclides, it just gives you the Q value. So chances are we can express some of these kinetic energies in terms of Q. And all we've got left is T1, T3, and T4.
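The combined equation just derived is really the law of cosines in disguise: squaring and adding the two momentum components says p4^2 = p1^2 - 2*p1*p3*cos(theta) + p3^2, and since p^2 = 2MT, dividing through by 2 gives exactly the MT form on the board. A quick numeric check, with p1, p3, and theta picked arbitrarily and particle 4's momentum built from conservation:

```python
import math

# Arbitrary incoming momentum, outgoing momentum of particle 3, and angle
p1, p3, theta = 5.0, 2.0, 0.7   # here each p stands for sqrt(2*M*T)

# Momentum conservation fixes particle 4's components...
p4x = p1 - p3 * math.cos(theta)  # x: whatever particle 3 didn't keep
p4y = p3 * math.sin(theta)       # y: equal and opposite to particle 3's p_y
p4_squared = p4x**2 + p4y**2

# ...and the squared-and-added equation predicts the same magnitude.
# Dividing both sides by 2 recovers M1*T1 - 2*sqrt(M1*M3*T1*T3)*cos(theta)
# + M3*T3 = M4*T4, the equation on the board.
law_of_cosines = p1**2 - 2.0 * p1 * p3 * math.cos(theta) + p3**2
mismatch = abs(law_of_cosines - p4_squared)
print(f"mismatch: {mismatch:.1e}")
```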
So which of these are we most likely to be able to know or measure? T1, we probably fixed it by cranking up our particle accelerator to a certain energy. T3 or T4, what do you guys think? Let's say we had a very small nucleus firing at a very big one. Which one do you think would be more likely to escape this system and get detected by us standing a couple feet away with a detector? Yep? AUDIENCE: T3. PROFESSOR: Probably T3, the smaller particle. We've just arbitrarily chosen that. But for intuitive sake, let's say, yeah. Why don't we try and get T4 in terms of Q, T1, and T3? That's not too hard, since it's addition. So our next step will be substitute. And we'll say that Q equals-- I'm just going to copy it up from there-- T3 plus T4 minus T1. So we can isolate T4 and say T4 equals Q plus T1 minus T3. And continue substituting. I usually don't like to have my back to the class this much. But when you're writing this much, it can be a little hard. So let's stick this T4 in right here and rewrite the equation as we've got it. M1 T1 minus 2 root M1 M3 T1 T3 cosine theta plus M3 T3 equals M4 times Q plus T1 minus T3. I anticipate us needing to see this side of the board soon. I also apologize for the amount of time it takes to write these things. There's another strategy one can use at the board which is defining intermediate symbols. And here's why I'm not doing that. When I was a freshman, back in-- whoa-- 2001. Who here was born after 2001? Nobody. OK. Thank god. I don't feel so old. I was in 18.023, which was math with applications, which was better known as math with extra theory. And in one class, not only did we fill nine boards, but we ran out of English letters-- symbols-- and we ran out of Greek letter symbols, and we moved on to Hebrew. Because they were distinct enough from English and Greek. And being, I think, the only Hebrew speaker in the class, I was the only one that could follow the symbols, but I couldn't follow the math anymore.
So I am not going to define intermediate symbols for this and just keep it understandable, even if it takes longer to write. OK. So let's start off by dividing by M4. Our goal now is to try to isolate Q. Because this is something that we would know or measure. And it will relate all of the other quantities, only one of which we won't really know yet. So let's divide everything by M4. So we have T1 times M1 over M4 minus 2 over M4 times root of all that stuff plus T3 times M3 over M4 equals Q plus T1 minus T3. And we've almost isolated Q. I'll call this step just add and subtract. And I'm going to group the terms together. So let's, for example, group all the T1s together and group all the T3s together. So if I subtract T1, I get T1 times M1 over M4 minus 1, minus 2 over M4 root M1 M3 T1 T3 cosine theta, plus-- and if I add T3, then I would get T3 times M3 over M4 plus 1-- equals Q. So this is a good place to stop, turn around, and see you guys, and now ask you, which of the remaining quantities do we probably not know? So let's just go through them one by one, just to remind ourselves. Are we likely to know what T1 is? Probably. How about the masses M1 and M4? If we know what particles are reacting, we can just look those up, or measure them, or whatever. We know M4. We know our masses. We know T1. What about T3? We don't necessarily know yet. So T3 is a question mark. How about cosine theta or theta? We haven't said yet. And T3 we don't know. And the masses we know. And the Q we know. So finally, to solve for-- well, we only have two variables left, T3 and theta. So this here-- this is actually called the Q equation in its most complete form-- describes the relationship between the kinetic energy of the outgoing particle and the angle at which it comes off. How do we solve this? How do we get one in terms of the other? Anyone recognize what kind of equation we have here? It's a little obscure. Well, it's not obscure. But it's a little bit hidden.
But it should be a very familiar one. Think back to high school again. Yes. AUDIENCE: Is it the cosine angle for the triangle [INAUDIBLE] PROFESSOR: Let's see. Certainly, there's probably some trig involved in here, in terms of, yeah, if you know the cosine, then you know, let's say, the x or the y component of the momentum. But there's something simpler, something that doesn't require trigonometry. Yep. AUDIENCE: Is it quadratic? PROFESSOR: It is. It's a quadratic-- so who saw that? It's actually a quadratic equation, where the variable is the square root of T3. That's the trick here, is you have something without T3, you have something with square root T3, and you have something with T3, better known as root T3 squared. And there. So this is actually a quadratic equation. Despite the fact that it may not have looked that way in the first place, there we go. So now, someone who remembers from high school, tell me, what are the roots of a quadratic equation? Let's say if we have the form y equals ax squared plus bx plus c, what does x equal? Just call it out. AUDIENCE: Negative b-- PROFESSOR: Yeah. AUDIENCE: [INAUDIBLE] square root-- PROFESSOR: Yep. AUDIENCE: --b squared minus 4ac-- PROFESSOR: Over-- AUDIENCE: 2a. PROFESSOR: 2a. And in this case, a is that stuff. b is that stuff without the T3. And c is that stuff. Because we have, like, 15 minutes before I want to open it up to questions and I don't think we have to repeat the quadratic formula stuff, I will skip ahead. Skip ahead. This is when I'd normally say it's an exercise to the reader. But no. It's not the phrase I like to use. It's boring. And I can just tell you guys what it ends up as. It ends up with root T3 equals-- and this is the one time I am going to define new symbols because it's just easier to parse-- ends up being, we'll call it s, plus or minus root s squared plus t, where s-- let's see if I can remember this without looking it up. No. I have to look at my notes. 
I don't want to get it wrong and have you all write it down incorrectly because of me. There we go. So s equals the square root of M1 M3 E1, times cosine theta, all over M3 plus M4. And t equals M4 Q plus, M4 minus M1, times E1-- is it a minus? It is a plus. All over M3 plus M4. So these are the roots of this equation. This is how you can actually relate the kinetic energy of the outgoing particle directly to the angle. So I want to let that sink in just for a minute, stop here, and check to see if there's any questions on the derivation before we start to use it to do something a little more concrete. Yep. AUDIENCE: Where did the E come from? PROFESSOR: The E. Oh, I'm sorry. That's a T. Thank you. Kinetic energy. Again, we should be consistent with symbols. And let me fix the other one, too-- both of those E1s should be T1s. Good. Thank you. So any other questions on the derivation as we've done it? We managed to do it in less than four boards. There we go. OK. Since I don't see any questions, let's get into a couple of the implications of this. So let's now look at what defines an exothermic reaction where we say if Q is greater than 0-- which is to say that some of the mass becomes kinetic energy-- if an exothermic reaction is energetically possible, then what is the minimum T1? Ah. That's why I brought it. What's the minimum T1 up here to make that exothermic reaction happen? We'll put a condition on T1. So if the reaction's exothermic, which means it will happen spontaneously, how much extra kinetic energy do you have to give to the system to make the reaction happen? Let's think of it in the chemical sense. If you have an exothermic chemical reaction, is it spontaneous or is it not? It is spontaneous. Same thing in the nuclear world. If you have an exothermic nuclear reaction, do you need any kinetic energy to start with to make it happen? No. OK. There we go. So that's kind of the analogy. So T1 has to be greater than or equal to 0. It's pretty much not a condition, right? It happens all the time.
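Before specializing to those cases, the roots can be coded up directly. Here is a sketch using the standard textbook form of the two intermediate quantities, s = sqrt(M1*M3*T1)*cos(theta)/(M3 + M4) and t = (M4*Q + (M4 - M1)*T1)/(M3 + M4); note that the (M4 - M1)*T1 piece of t drops out whenever T1 = 0. As a check, an elastic (Q = 0) 180-degree backscatter of a 2 MeV neutron off carbon-12 should give back the classic ((M1 - M4)/(M1 + M4))^2 * T1:

```python
import math

def outgoing_energies(Q, T1, theta, M1, M3, M4):
    """All physical T3 values from sqrt(T3) = s +/- sqrt(s^2 + t).

    s = sqrt(M1*M3*T1)*cos(theta)/(M3 + M4)
    t = (M4*Q + (M4 - M1)*T1)/(M3 + M4)
    An empty list means this energy/angle combination is kinematically
    forbidden (the roots went imaginary or negative).
    """
    s = math.sqrt(M1 * M3 * T1) * math.cos(theta) / (M3 + M4)
    t = (M4 * Q + (M4 - M1) * T1) / (M3 + M4)
    disc = s * s + t
    if disc < 0.0:
        return []
    roots = (s + math.sqrt(disc), s - math.sqrt(disc))
    return [r * r for r in roots if r >= 0.0]

# Elastic backscatter: 2 MeV neutron (M1 = M3 = 1) off carbon-12 (M4 = 12)
T3_list = outgoing_energies(Q=0.0, T1=2.0, theta=math.pi, M1=1.0, M3=1.0, M4=12.0)
print(T3_list)  # one physical root: ((1 - 12)/(1 + 12))**2 * 2, about 1.43 MeV
```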
So if we were to say T1 were to equal 0-- let me get my crossing out color again. If T1 were to equal 0, then s equals 0. And T1 is 0 here. And then you just get-- that's an s-- t equals M4 Q over M3 plus M4. And this just kind of gives you a relation between the relative kinetic energies of the two particles. Another way of writing this relation would just be that E3 plus E4 has to be greater than or equal to E1. AUDIENCE: T? PROFESSOR: All this-- hmm? AUDIENCE: T? PROFESSOR: Ah. Thank you. Because Es will be used in a different point of this class. So we'll stick with T for kinetic energy. Thank you. So all that this condition says is that if mass has been converted to energy, then that kinetic energy at the end has to be greater than at the beginning. And that's all it is. So it makes this equation quite a lot easier to solve for an exothermic reaction. You can also start to look to say, well, what happens as we vary this angle theta? What does the kinetic energy do? Let's take the case of an endothermic reaction. Now we are running out of space. For an endothermic reaction where Q is less than 0, you would have to have T1 to be greater than 0. Otherwise the reaction can't occur. So you have to impart additional energy into the system to get it going. And it also means that not every angle of emission is possible. You might wonder, why do we care about the angle, because the reaction still happens anyway? Well, it doesn't happen at every angle. And reactions have different probabilities of occurring depending on the angle at which the things come out. So you could see here that as you vary T1 and as you vary cosine theta, you still have to make sure that this quantity on the inside here-- so, s squared plus t-- always has to be greater than or equal to zero or else the roots of this are imaginary and you don't have a solution. So it's kind of nice that this came out quadratic.
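(Editor's sketch.) The T1 = 0 exothermic case just described makes s vanish, so T3 = t and the Q value splits between the two products in inverse proportion to their masses. A quick check; the Q value and masses below are hypothetical, chosen only to illustrate:

```python
def split_Q(Q, M3, M4):
    """Energy split for an exothermic reaction starting at rest (T1 = 0, s = 0)."""
    T3 = M4 * Q / (M3 + M4)  # light product gets the lion's share
    T4 = M3 * Q / (M3 + M4)  # heavy recoil gets the rest
    return T3, T4

# Hypothetical numbers: a 5 MeV Q value shared by a mass-4 and a mass-218 product.
T3, T4 = split_Q(Q=5.0, M3=4, M4=218)
# T3 + T4 = Q, and T3/T4 = M4/M3, just like the alpha decay recoil case later on.
```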
Because it lets you take some of the knowledge you already know and now apply it to say, when or when are nuclear reactions not or are they allowed? Wait. Let me rephrase that. When are nuclear reactions allowed or not allowed? You can now tell, depending on the angle of emission and the incoming energy and the masses, which are all things that you would tend to know. So is everyone clear on the implications here? If not, let me know. Because that's what this class is for. AUDIENCE: Yeah. Can you just go over it one more time? PROFESSOR: Yes. So, for exothermic reactions where Q is greater than 0, all that says from our initial part of the Q equation, if Q is greater than 0, then we have this thing right here, where the final kinetic energies have to be larger than the initial one. Which is to say that some mass has turned into extra kinetic energy. And the solution to these is pretty easy because you don't need any kinetic energy to make an exothermic reaction happen. So you can just set T1 equal to 0, which makes s equal to 0, because they're all multiplied here. And then it simplifies lowercase t as just a ratio of those masses times the Q equation, which will tell you pretty much how much kinetic energy is going to be sent off to particle 3 right here. Up there. Particle 3. Because then we have this condition, if root T3 equals s plus square root of s squared plus t, and we've decided that s equals 0, that just means that T3 equals lowercase t, which equals that. So then you've uniquely defined the kinetic energy for an exothermic reaction, as long as you have no incoming kinetic energy. For the case of an endothermic reaction, first of all, we know that the incoming kinetic energy has to be greater than 0. It's like the excess energy that you need to get a chemical reaction going. Has anyone here ever played with-- what's the one, a striking one here? Well, has anyone ever lit anything on-- no, that's-- yeah. Of course you have. 
And that's not a good explanation. Hmm. What's a good, striking endothermic chemical reaction? Can anyone think of one? Yeah? AUDIENCE: When you put tin foil in Liquid-Plumr and it releases-- PROFESSOR: And it's a hydrogen generator? AUDIENCE: Let's see. I guess that's an explosion. PROFESSOR: I think that happen-- yeah. That's more like an explosion. That's, like, the intuitive definition of exothermic. Yeah. Actually, there's a fun one you can do, too. This is great that it's on video. You do that plus put manganese dioxide in hydrogen peroxide and you have an oxygen generator. And then you have the purest, beyond glacially pure, spring water. You just mix H and O directly. Just don't get near it. Because it tends to be pretty loud. We do this for our RTC or reactor technology course, where I've got to teach a bunch of CEOs enough basic high school chemistry so they can understand reactor water chemistry. And the way I make sure that they're paying attention is with a tremendous explosion. So folks come here, pay about $25 grand apiece for me to fire water-powered bottle rockets at them. It's a pretty sweet job. So if you guys are interested in academia, you know, these things happen in life. It's pretty cool. Yeah. All right. Since I can't think of any endothermic chemical reactions off the top of my head, I'll have to keep it general and abstract and say, if you have an endothermic reaction, you have to add energy in the form of heat to get the reaction going. In an endothermic nuclear reaction, heating up the material does not impart very much kinetic energy. You might raise it from a fraction of an electron volt to maybe a couple of electron volts if things are so hot that they're glowing in the ultraviolet. That doesn't cut it for nuclear. So you have to impart kinetic energy to the incoming particle such that the kinetic energy plus the rest masses is enough to create the rest masses of the final particles. And that's the general explanation I'd give. 
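(Editor's sketch.) The allowed-or-not condition from a couple of boards back, s squared plus t greater than or equal to zero, turns directly into a yes/no check. This assumes the same s and t as on the board; the masses, angle, and Q below are hypothetical illustration values:

```python
import math

def reaction_allowed(T1, theta, M1, M3, M4, Q):
    """True when the roots of the quadratic in sqrt(T3) are real (s**2 + t >= 0)."""
    s = math.sqrt(M1 * M3 * T1) * math.cos(theta) / (M3 + M4)
    t = (M4 * Q + (M4 - M1) * T1) / (M3 + M4)
    return s * s + t >= 0

# Endothermic reaction (Q = -1) on a mass-10 target: forbidden with too little
# incoming kinetic energy, allowed once T1 is raised enough.
reaction_allowed(T1=0.5, theta=0.0, M1=1, M3=1, M4=10, Q=-1.0)  # -> False
reaction_allowed(T1=2.0, theta=0.0, M1=1, M3=1, M4=10, Q=-1.0)  # -> True
```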
I forget who had asked the question. But does that help explain it a bit? AUDIENCE: [INAUDIBLE] PROFESSOR: Cool. OK. I'll take five minutes. And let's do a severely reduced case of this, the case of elastic neutron scattering. It's kind of a flash forward to what we'll be doing in the next month or so. Does everyone have what's behind this board here? I know that was, like, three boards ago. So I hope so. So let's take the case of elastic neutron scattering. Remember I told you that after we developed this highly general solution to the Q equation, everything else that we're going to study is just a reduction of that. And this is about as reduced as it gets. So in elastic neutron scattering, we can say that M1-- well, what's the mass of a neutron in AMU? And let's forgive our six decimal points' precision for now. What's it about? AUDIENCE: 1. PROFESSOR: 1. So we can say that M1 equals 1. And in the case of elastic scattering, the particles bounce into each other and leave with their original identities. So that also equals M3. If we're shooting neutrons at an arbitrary nucleus, what's M2? Yep? AUDIENCE: A? PROFESSOR: Just A, the mass number. Same as M4. Now, we don't have M2 in this equation. Whatever. But the point is, yeah. We're going to use these two. We're going to use these two. So let's substitute that in. Oh, and one last other thing I mentioned. What is the Q value for elastic scattering? AUDIENCE: [INAUDIBLE] PROFESSOR: Right. 0. Because the Q value is the difference in the rest masses of the ingoing and outgoing particles. If the ingoing and outgoing particles are the same, M1 equals M3, M2 equals M4, that sum equals 0. Therefore, Q equals 0. So let's use these three things right here and rewrite the general Q equation in those terms. Which board is it on? Right there. So let's copy that down. So let's say we have T1 times M1 is 1 over M4 is A minus 1 minus 2 over M4 is A. This is where it gets nice and easy. M1 and M3 are just 1. So 1 times 1 times T1. 
We don't know what that is yet. So let's call it the Tn, T of the neutron coming in. How about this? We'll call it T in and T out for ease of understanding. Cosine theta. What do we have left? Plus T out. And let's make T1 into an in right there. Times M3 over M4. M3 was 1. M4 is A. Plus 1 equals Q, equals 0. This is quite a simpler equation to solve. So let's group this all together. There's a couple of tricks that I'm going to apply right now to make sure that everything has A in the denominator to make stuff easier. We can call 1 A over A here. We can call 1 A over A there. That lets us combine our denominators and stick the sine right there. That becomes an A. Same thing here. I'll just connect the dashes and stick the minus sign there, leaving an A right there. Now we can just multiply everything by A, both sides of the equation. So the As go away there. We have a much simpler equation. 0 equals T in of-- let's see-- 1 minus A over 1. OK. We'll just call it 1 minus A. Minus 2 root T in T out cosine theta plus T out A plus 1. And, OK, it's 10 minutes of, or it's five minutes of five minutes of. So I'm going to stop this right here at a fairly simple equation. We'll pick it up on Thursday. And I want to open the last five minutes to any questions you guys may have. Since that request came in on the anonymous rant forum, which hopefully you all know now exists. Yep. AUDIENCE: So what exactly is forward scattering? I didn't really get that before. PROFESSOR: So let's look at elastic scattering as an example. So in elastic scattering, two particles bounce off each other like billiard balls. In forward elastic scattering, the neutron, after interacting somehow with particle 2, keeps moving forward unscathed. So in the elastic scattering sense, forward scattering is also known as missing. 
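(Editor's sketch.) The simplified elastic equation left on the board, 0 = T_in(1 - A) - 2*sqrt(T_in*T_out)*cos(theta) + T_out(A + 1), is itself a quadratic in sqrt(T_out), and solving it (which is where Thursday's lecture picks up) can be written out directly. The function name is mine:

```python
import math

def T_out_elastic(T_in, A, theta):
    """Outgoing neutron energy after elastic scattering off a nucleus of mass A.

    Solves (A+1)*x**2 - 2*sqrt(T_in)*cos(theta)*x + (1-A)*T_in = 0
    for x = sqrt(T_out), keeping the physical (non-negative) root.
    """
    c = math.cos(theta)
    x = math.sqrt(T_in) * (c + math.sqrt(c * c + A * A - 1)) / (A + 1)
    return x * x

T_out_elastic(2.0, 12, 0.0)     # forward scattering: no energy lost -> 2.0
T_out_elastic(2.0, 1, math.pi)  # head-on hit on hydrogen: everything lost -> 0.0
```

The theta = pi case gives T_out = T_in*((A-1)/(A+1))**2, the maximum energy loss per collision that underlies neutron moderation.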
AUDIENCE: [INAUDIBLE] PROFESSOR: You can have other reactions, let's say, where you have a particle at rest, another particle slams into it, and the whole center of mass moves together. I don't know if you'd call that forward scattering as much as, let's say, capture or fusion or something. But in this case, scattering means that two particles go in, two particles leave. Whether it's elastically, which means with no transfer of energy into rest mass, or inelastically, where, let's say, a neutron is absorbed and then re-emitted from a different energy level. And that's something we'll get into in, like, a month. So you can have forward elastic or inelastic scattering. In this case, I'm talking about elastic scattering, which is the simple case of, like, the billiard balls miss each other. Which is technically a case that can be treated by this. Because all you have to do is plug in theta equals 0 and you have the case for how much energy do you think the neutron would lose if it misses particle 2. AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah. It wouldn't lose any energy. Right? It would have the same energy. So that's the case for forward scattering. A neutron, when it interacts somehow with another particle, can lose as little as none of its energy. If it misses, no one said it had to lose any energy. And by solving this equation here, which we'll do on Thursday, we'll see what the maximum amount of energy that neutron can lose is, which is the basis for neutrons slowing down or moderation in reactors. Yeah. AUDIENCE: Are T in and T out equal there, in which case that equation is used to solve for theta? PROFESSOR: T in and T out are not always equal. But in the case of forward elastic scattering, they would be. Because the neutron comes in with energy T in and it leaves with energy T in. 
For any other case in which the neutron comes off of particle 2 at a different angle, it will have bounced off of particle 2, moving particle 2 at some other angle phi, and giving it some of its energy elastically. The total amount of that kinetic energy will be conserved. So let's say-- what did we call it? What is it? Yeah. So T1 would have to be the same as T3 and T4 together for this Q equation where Q equals 0 to be satisfied. So what you said can happen. But it's only the case for forward scattering. Any other questions? Yep. AUDIENCE: In the case of an exothermic reaction, we assume that T1 equals 0. Can you re-explain why we made that assumption? PROFESSOR: So the question was, in an exothermic reaction, why did we say T1 equals 0? It's not always the case. But it provides the simplest case for us to analyze. So an exothermic reaction can happen when T1 equals 0. It can also happen when T1 is greater than 0. So we're not putting any restrictions on that. But in the case that T1 equals 0, s is destroyed and the harder part of T is destroyed, making the solution to this equation very simple and intuitive. Which is to say that if you just have two particles that are kind of at rest and they just merge and fire off two different pieces in opposite directions, their energies are proportional to the ratio of their single mass to the total mass. So that's like a center of mass problem. You'll notice also I'm not using center of mass coordinates. Center of mass coor-- who here has used those in 801 or 802? And who here enjoyed the experience? Oh. Wow. No hands whatsoever. So center of mass coordinates and laboratory coordinates are different ways of expressing the same thing. Usually you can write simpler equations in center of mass coordinates. But for most people-- and I'm going to go with all of you, since none of you raised your hand-- it's not that intuitive. That's the same way for me. 
So that's why I've made a decision to show things in laboratory coordinates, so you have a fixed frame of reference and not a moving frame of reference of the center of mass of the two particles. But the center of mass idea does kind of make sense here. If you have two particles that are almost touching and then they touch and they break into pieces and fly off, the total amount of momentum of that center of mass was 0. And it has to remain 0. And so each of these particles will take a differing ratio of their masses away. We already looked at this for the case of alpha decay, where if you have one nucleus just sitting here-- let's say there was no T1. There was just some unstable T2 that was about to explode and then it did. Remember how we talked about how the Q value of an alpha reaction is not the same energy that you see the alpha decay at? Same thing right here. So this Q equation describes that same situation. Notice there's no hint of M1. There was really no M1 in the end. We don't care what the initial mass of the particle that made alpha decay is. All we care about is what are the mass ratios and energy ratios of the alpha particle and its recoil nucleus. So it all does tie together. That's the neat thing, is this universal Q equation can be used to describe almost everything we're going to talk about. So this is as complex as it gets. And from now on, we'll be looking at simpler reductions and specific cases of each one. So it's five of. I want to actually make sure to get you to your next class on time. And I'll see you guys on Thursday. |
MIT_2201_Introduction_to_Nuclear_Engineering_and_Ionizing_Radiation_Fall_2016 | 35_Food_Irradiation_and_Its_Safety.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: All right, guys. Welcome to the last official day of 22.01. Can't believe we've actually made it here. And you guys have learned a ton about how ionizing radiation does and doesn't affect different types of matter. If you've noticed that the whole class has been kind of taking a slant towards looking at issues in the public sphere from things like hormesis to dose, to risk, to power plants, today we're going to talk about food irradiation. One of the main reasons that we have all sorts of incredibly safe food today thanks to ionizing radiation, a lot of the myths and misconceptions and science behind what does and what doesn't happen when you irradiate food. And you guys are fully equipped to determine not just is food irradiation safe, but how should it be conducted? How should it not be conducted and what would the effects really be? And with stuff that we did the last few classes, actually looking up research papers and discerning which sources are real and which ones are, let's just say, not, for lack of a bunch of four letter words, you can tell for yourselves what sources are actually worth looking at. So just quickly go over the basics of food irradiation and maybe five minutes before we hit the primary sources. So the general idea here for anyone that got into the reading, which I'll pull up right here, is that we irradiate food to do a whole bunch of different things. Can anyone tell me, what are some of the reasons one might irradiate food? AUDIENCE: Bacteria like E. 
coli MICHAEL SHORT: Yeah, gets rid of bacteria, other harmful organisms. It just kills other organisms. Like what? Like bacteria. Anyone else? AUDIENCE: Bugs. MICHAEL SHORT: Yes, insects. It's actually used to either kill or sterilize insects so that they don't breed and let's say-- oh yeah, totally, because the dose required to sterilize something is usually a lot lower than the dose required to kill something. Why do you guys think that is? Yeah? AUDIENCE: To sterilize [INAUDIBLE] all you have to do is kill off your reproductive cells. MICHAEL SHORT: That's right. And anyone remember what those quality factors look like for those reproductive cells? AUDIENCE: Much higher. MICHAEL SHORT: That's right. Let me bring it up. Let me go into the dose dosimetry and background. I think it'll help to actually see the numbers here. Let's go to the tissue quality factors. There we go. Right at the top of the list, reproductive organs. So it takes a whole lot less dose to sterilize something like insects or other bacteria or things, or I don't know if you can say sterilize viruses. It's still kind of under debate whether viruses are technically alive. They're right on that edge of what people would consider alive or not. And there's still a debate going on about what does alive really mean. Yeah. You can sterilize the insect population. For example, who here's bought a bag of rice before, and who here has found some little rice bugs in that bag of rice before? Are you serious? I must shop at cheap store, I find them all the time. But the nice thing is if you irradiate them then they won't breed and continue to eat and breed in the bag and then you open the bag and you've got a swarm of, I don't know what those things are, rice weevils or something. I think I remember looking them up once because I was like what is this thing in the rice? Yeah, it's gross. But that's why we do food irradiation. Now, what else?
What other reasons might want to irradiate food other than to kill or sterilize things in it? Yeah? AUDIENCE: Doesn't it preserve like shelf lifes? MICHAEL SHORT: It does preserve shelf life, especially of things like fresh fruits, vegetables, seeds, legumes. Anyone have any idea why? Yeah? AUDIENCE: Well, it seemed like that would be just because it kills things that eat it. MICHAEL SHORT: Yep. One would be it kills things that eat it, but the other thing is those plants have reproductive cells as well. Let's pick a good example. Anyone ever had really well sprouted bean sprouts before? Like not the fresh ones, but the really old ones? Not old like they've sat behind the fridge or something. OK. AUDIENCE: Like super sprouted? MICHAEL SHORT: Yeah, super crazy sprouted. AUDIENCE: The super crazy sprouted. MICHAEL SHORT: Anyone remember what they taste like? They are kind of bitter and rather unpleasant. So one of the main reasons for irradiating food is to prevent sprouting and prevent germination because as soon as the plant sprouts and starts to say like, all right, let's grow, starts to consume its store of sugars, water, nutrients and everything else and start generating other flavors, which tend to be fairly off flavors. So another way that you can keep things tasting good, even if they don't have bugs or weevils or insects or viruses in them, is prevent them from changing at all through their normal metabolic processes. So it kind of freezes the food in place. Well, since I mentioned freezing, there's another little caveat to that. At what temperature do you think would be the best temperature to irradiate food? AUDIENCE: Cold temperatures. MICHAEL SHORT: You say cold, and why do you say so? AUDIENCE: When you did that one thing, the temperature'd grow. MICHAEL SHORT: Yeah, let's bring it up. We talked about G-values and temperature, and I think it's a little further in. It's in the chemical effects. 
So I'll jog your guy's memory a bit from just a week ago where we were talking about G-values or the number of different radiolytic byproducts of radiolysis byproducts that are formed as a function of temperature. I'm not going to full screen that because that's going to get all freaky. But you can see that there's a whole lot more of them formed as a function of temperature. What's not shown here is what happens when this reaches 0 degrees Celsius? What happens to water at 0 degrees Celsius? Just look outside, what do we got? It freezes, OK. All these G-values are calculated assuming the free species are diffusing in liquid water. When you freeze the food, you pretty much shut down diffusion. When you shut down diffusion, do these chemicals react or not in a different way? Can they move to find each other? No, they're frozen in place. You're stuck at the diffusion coefficient in ice, which is a whole lot slower than in free flowing water. So in this way, you can kill whatever organisms are there even if they can survive freezing by directly damaging their DNA, without altering as many of the foods normal tastes, flavors, colors, whatever. And speaking of taste flavors colors, I want to bring up some of the pseudoscience. It didn't take long to find a couple of things on the internet talking about how we shouldn't eat irradiated food. And to that I say, well next time you spend five weeks straight on the toilet you may be thankful for food irradiation. So let's look to see what sort of things could or couldn't happen with food irradiation based on random internet article. Again, not a useful source, but let's see what they say. Damages food by breaking up molecules and creating free radicals. That's true. What's the question we should be asking though is not does it damage molecules? But what do you think? Yeah? AUDIENCE: Do the damaged molecules have any effect? MICHAEL SHORT: Yeah. So one, do the damaged molecules have any effect? 
Were those same damaged molecules there to begin with? And the other question I want to ask is how much? How much is always the question you want to come back with when somebody says isn't it true that radiation does binary effect? Causes cancer, kills babies, whatever you want? Sure, enough. But the question is how much? So there's a second source top 10 reasons for opposing food irradiation. Let's go through these a little bit one by one and look at some of the references, which they're not all bad. So let's see. In legalizing food irradiation we did not determine a level which food can be exposed and still be safe for human consumption. So let's think about what that actually means. We never hit that LD 50 or LD 10 or whatever dose we found that will cause ill effects in humans. I would argue this is a good thing. If we haven't had any documented cases of people getting sick from food irradiation and we already get the effects that we want like stopping germination, sprouting, killing bad things, then great, let's not go higher. Mission accomplished. Let's look at the references again. Reference one, they actually give a US code of federal regulations. Reference two, I dare you to find what they were talking about. Various filings over a period of 17 years. AUDIENCE: Isn't that dating publication Federal Register? MICHAEL SHORT: I'm not sure. I think it might be something more legit, but there may also be a publication called the Federal Register. I doubt the Federal Register publication has what's called filings, but these would be more official filings. But yeah, they're somewhere in the archives of somewhere. But at any rate-- AUDIENCE: [INAUDIBLE] from class. MICHAEL SHORT: Again, I would argue that this is actually a good thing. We don't know to what level food irradiation can harm people. Now, let's think about some of the ways in which if done incorrectly it could harm people. 
Without me telling you ahead of time-- and this we'll see who did a little bit of the reading. --what sort of types of radiation do we use to irradiate food, and why? We have our choice of alphas, betas, positrons, gammas, neutrons, heavy ions, et cetera. What would you use, and why? Yeah? AUDIENCE: Presumably ionizing so you don't have to activate any of the things [INAUDIBLE].. MICHAEL SHORT: So you said presumably ionizing. AUDIENCE: Yeah, so you don't activate them and you get radioactive thiamine. MICHAEL SHORT: OK. Which of these types of radiation are ionizing? AUDIENCE: Sulphur, [INAUDIBLE],, gamma. MICHAEL SHORT: What do you mean by ionizing? AUDIENCE: So they can lead to ionizations in the material. MICHAEL SHORT: Oh. I would argue that every one of these types of radiation can ionize. As we've seen, they can all knock out electrons or other nuclei. But you're getting at an important point. Can anyone help Alex clarify? because you're getting onto the right track. Anyone? Yeah? AUDIENCE: [INAUDIBLE]. MICHAEL SHORT: OK. So you may not want to use heavy ions or alphas because you don't want to plant anything in the material. Are you worried about implanting helium? What about iron? I mean let's say you want to use carbon ions and the food is Carboniferous. That's OK. But what about some of these types of radiation are quite different from the others when we were talking about everything from gamma to charged particle to neutron interactions? What sort of things might we look at as criteria to determine whether or not we want to use this for food irradiation? Stopping power. So stopping power, which is related directly to range. So let's pick an energy of around 1 MeV. How do you find the range of 1 MeV alphas in food? Great. That's what I was going to say too. You can either do the calculations, or like most of us actually do, we just run a quick simulation that integrates all those calculations. So let's do that right now. 
So we will take one MeV helium or 1,000 keV helium. We will approximate food as water with a stoichiometry of 2 to 1, a density of 1 gram per cubic centimeter, and let's allow for-- I think I already know what the answer is going to be. --10 microns. What are we getting for a range? 5, 6 microns for alpha particles in water. How well would alpha particles sterilize food? Not. So let's say you have a chicken with diameter around 40 centimeters. The alpha particles would penetrate the chicken to a depth of approximately 6 microns. So alpha particles are probably out. What else is out based on range alone? Heavy ions. How far do you think iron ions would penetrate into a chicken? Not far at all. Let's find out. Iron ions, I'm going to shrink the scale to 1 micron. We actually kind of did this on the homework, didn't we? I had you guys look at the range of lighter and heavier ions. What I'm hoping is that you'll have some sort of an intuition for how far ions tend to go in material. So as you go heavier, does the range go up or down? Range goes down. As we expect we're getting just about 2 microns of chicken with heavy iron ions. So what could you do to get those to penetrate farther? Yeah, so you could boost the energy. So if you were to boost the energy of the alphas-- let's see. First of all, with our typical range relation where a range is roughly proportional to kinetic energy squared, how much more energy would we have to impart to the alphas in order to get them to the center of the chicken? Well let's try and work it out. The range was about 6 microns and an energy of 1 MeV and we want to get to about 20 centimeters. Let's call that 6 centimeters. We're just going to use this approximation for now. And if we want to go from 6 microns to 6 centimeters, that is four orders of magnitude. So how many orders of magnitude would we need to increase the energy of the alphas to get through? OK. So we'd have to go to, let's say, 100 MeV you said. Let's find out.
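(Editor's sketch.) The back-of-envelope scaling in this exchange, range roughly proportional to T squared at these energies, can be written out. This is only the lecture's rough approximation, not a replacement for the SRIM-style simulation being run on screen:

```python
import math

def energy_for_range(T0, R0, R_target):
    """Energy needed to stretch range R0 (at energy T0) to R_target, if R ~ T**2."""
    return T0 * math.sqrt(R_target / R0)

# 1 MeV alphas stop in ~6 microns of water; reaching ~6 cm (center of the chicken)
# is 10^4 times the range, i.e. 10^2 times the energy under the T-squared scaling.
energy_for_range(T0=1.0, R0=6e-6, R_target=6e-2)  # -> ~100 MeV
```

As the professor notes right after, the T-squared scaling breaks down at high energy, where range grows roughly linearly with T, which is why the actual 100 MeV simulation only reaches about 1.5 centimeters.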
We can just check this. So we'll switch back to alphas at 100 MeV. This is already looking to be fairly uneconomical. But this is physics, so let's find out. OK. So we're at about 1.5 centimeters. And that's again because at high energies this range relation changes to about proportional to t for really high energies. So we know we're going to have to get to around a GeV alphas in order to food irradiate chicken. That sounds really expensive. Yeah. The proton cyclotron at MGH is about 250 MeV, and we're talking an eight or nine figure installation. Not worth it to irradiate food. What else might go wrong with GeV alpha particles? What may you induce? Yeah? AUDIENCE: Bremsstrahlung radiation. MICHAEL SHORT: Lot of Bremsstrahlung radiation. We pretty much ignored Bremsstrahlung of things like alpha particles and protons. But once you get into the GeV or tera electron volt range, you are going to get a lot of Bremsstrahlung and shielding those x-rays is going to be a mess. OK. So let's say we've excluded alphas and ions for the purposes of physics and we are left with betas, positrons, gamma and neutrons. What next? AUDIENCE: Neutrons are also expensive. MICHAEL SHORT: Neutrons are expensive. You either need a reactor, a spallation source, or a little pulsed fusion device, two of which we have down the street. Let's say you had a cheap source of neutrons. One of those, if you build it and there's money to be made then they will-- no. If there's money to be made they will build it, and if you build it, they will come with their chickens for irradiation. Why else physically would you not want to use neutrons to irradiate food? AUDIENCE: They damage everything. MICHAEL SHORT: They damage everything. That's OK, right? If they're damaging living organisms more than the food itself, that's OK. But what do neutrons tend to do when they interact with matter? And that's as specific as I can make the question so far. The correct answer is everything. Like what? Chris?
AUDIENCE: Well, like Alex said, they could activate-- [AUDIO OUT] MICHAEL SHORT: Yes. So that's when I said you're getting on the right track. So thanks you guys for doing a team tag answer. Yes, they can activate things at pretty much any energy. For this I'm going to jump to Janis. Like I said, I like to do things live, so show you guys that we could do it on the fly and respond to your comments in real time. It is showing, good. OK. Though this is a skill I want to make sure that by the end of this class, which is about 35 minutes from now, you guys will know to jump to Janis and start looking for the right cross-sections because you're going to need it for the rest of your life career time at MIT. Pick probably two out of the three if you guys are going to be nuclear somethings. I'm definitely in a different database right now. I was in incident proton data for other things that will become apparent soon. Let's go back to incident neutron data. We'll use the same database that we've been using all the time, the ENDF most recent database. Look at the cross-sections and let's assume you had some iron in your food. You are irradiating red meat. Nickel, copper, iron. Pretty common isotope of iron could be iron 58. Let's now go down and find the z gamma, the neutron capture cross-section for iron 58. It's decidedly non-zero at all energies, which means you're going to make iron 59. What happens when you make iron 59? I don't expect you to know, but where do we go to get the answer? Wall, yep, Tyree, that wall, the table of nuclides, whatever you want, or the Brookhaven table, whatever you want. The point is the table of nuclides. Let's take a look at iron 59 and see what it does. It is not stable. It has a half life of greater than 10 days. It's a beta emitter and it beta decays to cobalt 59, which is a stable isotope. So you would be creating an ingestible source of beta particles. Nasty stuff. Let's see what those beta particles would do.
They have about a 1.5 MeV energy. Let's look at the decay diagram. There are a few transitions, the two most likely of which would give a beta and a gamma. So you'd be ingesting a dual beta gamma emitter. Now, to find out how far those betas would go, as in are they energetic enough to escape you or not, what do you use? AUDIENCE: [INAUDIBLE] MICHAEL SHORT: [INAUDIBLE] electrons. AUDIENCE: Calculations. MICHAEL SHORT: Back to calculations. Or, I want to show you guys, there is a database for everything and it's always on NIST. I wonder if any of you guys have found these for the homework, because I didn't use them in the solutions, but they're quite useful. NIST has a database called ESTAR, stopping powers for electrons, where you can simply say graph the total stopping power or range for electrons in elements or materials. Materials-- let's call food soft tissue. They don't have that, so let's go to water. And it gives you the range of electrons in grams per square centimeter in water. What do we have to do to make this into an actual range in centimeters? AUDIENCE: You divide by density. MICHAEL SHORT: Multiply by density. Yeah, I'm sorry, divide in this case. So you have the range in grams per square centimeter. If you divide by density you're left with centimeters on the top. The density of liquid water is about 1. So in the case of water, you can simply read off this table if you're talking water at 25 C. So a beta particle with an energy of about 1.5 MeV has a range of about a centimeter in material. Not something you'd want to ingest, because around 1 centimeter away from wherever the betas end up they would do a whole lot of damage at the end of their range, where they have the highest stopping power. So for many, many reasons, not least of which is activation, neutrons are out. We do not irradiate food with neutrons, because they will induce radiotoxicity.
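The ESTAR unit conversion just described, from a mass range in g/cm² to a physical range in cm, is a one-liner. The 0.71 g/cm² figure below is an approximate CSDA range for a 1.5 MeV electron in water, an assumed illustrative number rather than a quoted ESTAR lookup.

```python
# Convert a NIST ESTAR mass range (g/cm^2) to a physical range (cm)
# by dividing by the material density, as described above.

def range_cm(mass_range_g_per_cm2, density_g_per_cm3):
    """Physical range = mass range / density."""
    return mass_range_g_per_cm2 / density_g_per_cm3

rho_water = 1.0  # g/cm^3, liquid water at ~25 C
r = range_cm(0.71, rho_water)  # ~0.71 g/cm^2 for a 1.5 MeV beta (assumed)
print(round(r, 2))  # 0.71 -- "about a centimeter," as in the lecture
```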
So for anyone that tells you, oh, don't you stick food in a reactor to irradiate it? The answer is definitely not. You don't want neutrons nearby. So we're left with different types of betas and gammas. From our study right here we just found out that betas penetrate about 1 centimeter into materials at 1.5 MeV. So how energetic do we have to get them to get to the center of the chicken? If range is proportional to t squared and our range at 1 MeV or 1.5 MeV is about 1 centimeter-- if we want to go to 20 centimeters, about how much more energetic do we have to get the betas to irradiate the whole chicken? AUDIENCE: At least 10 times [INAUDIBLE]. Probably. MICHAEL SHORT: Let's go with 10 times. Let's go to 15 MeV, and we can just read off this chart. At 15 MeV we're getting on about-- now keep in mind, this appears to be a double log scale, so that right there is 10, that right there is 100. Yeah, we're getting towards 10 centimeters. If we want to see where it is at 20, the log marker line is kind of missing, but it comes to about here. If we go down, we're getting around 30 or 40 MeV. How reasonable does that sound for a linear accelerator? Fairly. It's not unreasonable. It would still be big. Yeah, it may not be efficient, so we may not use betas to irradiate entire chickens. What if you're irradiating strawberries that are roughly about a centimeter in diameter? How does a 1 MeV linac sound? We've got some of those. Yeah. We've got a 2.5 MeV linac just down the street in building N40. So that's totally OK. You could use betas in order to irradiate thin enough foods, food for which you can reach the center with the range of the betas. Do you induce any significant radiotoxicity? How do you find out? AUDIENCE: Experiments. MICHAEL SHORT: You could run an experiment. Or what database would you jump to to try to figure this out? AUDIENCE: Janis. MICHAEL SHORT: I would jump to Janis. So let's go back there. Let's see if they have the incident electron data.
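One way to sanity-check the 30-to-40 MeV chart reading above is a common empirical rule for electron range in low-Z materials at high energy, R [g/cm²] ≈ 0.530·E − 0.106 with E in MeV. This rule is an outside approximation brought in for illustration, not the ESTAR tabulation itself.

```python
# Invert the empirical high-energy electron range rule
#   R [g/cm^2] ~ 0.530*E[MeV] - 0.106   (an approximation, valid
# above a few MeV in low-Z materials like water) to estimate the
# beta energy needed to reach a given depth.

def beta_energy_for_range(r_g_per_cm2):
    return (r_g_per_cm2 + 0.106) / 0.530

# To reach 20 cm into water-like food (density ~1 g/cm^3 -> 20 g/cm^2):
E = beta_energy_for_range(20.0)
print(round(E, 1))  # 37.9 -- consistent with "around 30 or 40 MeV"
```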
Yes, incident electron data, the EXFOR database. It's good for charged particles. Let's look at iron 56. Let's say you're irradiating red meat. I think this is just the total cross-section, so that's not going to tell us much. It's also not loading, and had an error, and another error. Interesting. All right. Let's see if we have any other interesting isotopes that have any good reactions. Guess not. All right. Well, at any rate, it's awfully difficult to induce radiotoxicity with betas. There's another one I didn't get into, which would be protons, somewhere between alphas and betas in terms of range and such. Does anyone know, perhaps anyone that works in the vault, what happens when you get up to very high energy protons? AUDIENCE: A lot of gammas and neutrons. MICHAEL SHORT: Indeed. A lot of gammas and neutrons. The gammas, maybe no problem at the irradiation facility; the neutrons, big problem. So to find that again, we head to Janis and look at the incident proton data. I've already scoped this out and I know it's in the EXFOR database of cross-sections. Let's go right back up to our iron 56. So assume we're irradiating red meat. What sort of reactions do we have? Proton, n, something with a few points to it. A-ha, a non-zero cross-section. So typically in the range of five to 10 MeV, high energy protons will start to create neutrons, where a proton will come in and a neutron will come out. So you would absolutely not want to use protons above, let's say, a safe limit of around 3 MeV. Then the problem is, what's the range of 3 MeV protons? Anyone have an intuition for that? Pretty small. If you don't know generally the order of magnitude, we'll head back to TRIM. We'll use 3 MeV protons. It's looking like about a millimeter of material. We're getting about 200 microns in. Not deep enough to be useful for any sort of food irradiation. So protons again are out, for reasons of both neutron activation and range.
AUDIENCE: When people [INAUDIBLE], it's like I sit next to it all the time. MICHAEL SHORT: Sure. But what energy are the protons that you're making? AUDIENCE: Well, I mean we did a lot of low energy, but went down to like 10 MeV. MICHAEL SHORT: I highly doubt you were sitting there during 10 MeV. AUDIENCE: I think the highest we've gotten was probably 2 or 3. MICHAEL SHORT: Yeah. At 2 to 3 MeV, those proton cross-sections drop to 0. In fact, let me show you that for a better-- kill the TRIM thing because that'll fry the computer. Let's look at, I don't know, carbon, where there's a whole lot more data, natural carbon. There should be lots of nice cross-sections here. So in goes proton-- maybe not for natural. Carbon 12 has over 100 different reactions. Something, n. Well, there's a lot of reactions to sort through, and I don't want to waste your time with that now. But anyway, hopefully the cross-sections I showed you show that around 5 MeV or so you do start to make neutrons. So if you can get up to 10 MeV, that's why the vault's got that four-foot thick door. You don't want to be in the room when that happens. AUDIENCE: When we do a lot of protons in class it's not shielded. MICHAEL SHORT: Yes. But the class accelerator only goes up to about 2 MeV. Yeah. And that's perfectly safe to be standing around for. AUDIENCE: We have downstairs we can't next week because of interlock. MICHAEL SHORT: Yep. Because of interlock, but not because of physics, right? AUDIENCE: I'm in both. MICHAEL SHORT: OK, fair enough. Cool. So now we're left with gammas. Gammas work pretty well. They have very, very long ranges. Even the concept of the range of a gamma is kind of a funny thing to say, because they undergo exponential attenuation. So you want either a high enough flux of gammas that their low attenuation won't matter, or you use a lower energy gamma ray.
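The exponential attenuation just mentioned can be sketched directly. The attenuation coefficient used below, μ/ρ of about 0.06 cm²/g for roughly 1.25 MeV (Co-60 average) gammas in water, is an assumed round number for illustration, not a quoted lecture value.

```python
import math

# Uncollided fraction of gammas after depth x:  I/I0 = exp(-mu * x),
# with mu = (mu/rho) * rho. The mu/rho ~ 0.06 cm^2/g figure for
# ~1.25 MeV gammas in water is an assumed approximate value.

def uncollided_fraction(mu_over_rho, rho, x_cm):
    return math.exp(-mu_over_rho * rho * x_cm)

f = uncollided_fraction(0.06, 1.0, 20.0)  # 20 cm of water-like food
print(round(f, 2))  # ~0.30: attenuated, but plenty get all the way through
```

That is the sense in which gammas "get through just about everything, you just need a whole lot of them": a third of them reach the center of the chicken uncollided.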
So cobalt 60 irradiators that give off those 1 and 1/2 MeV gammas are quite commonly used for this sort of thing, because the 1 and 1/2 MeV gammas will get through just about everything. You just need a whole lot of them. Has anyone here seen or heard of the cobalt irradiator downstairs in the basement of building six? We've got one of these things, actually. It's just a sealed source of cobalt. And the way you start irradiation is you simply open the door, and the source stops being shielded. Now, what things could go wrong with super high energy gammas? AUDIENCE: Ionizations. MICHAEL SHORT: Ionizations are what you want, right? You want to ionize the water and the DNA of the bacteria or the germinating cells so that they get destroyed. What else could you induce with super high energy gammas? AUDIENCE: [INAUDIBLE] electrons. MICHAEL SHORT: That's another form of ionization, right? So that's a good one. Why don't I show you a quick thing? Let's go to incident gamma data. I think the EXFOR database will be pretty good. Let's go back up to iron again. We'll go with our red meat example. Gamma, neutron. Anyone heard of gamma induced neutron ejection? High enough energy gamma rays will actually cause neutrons to be emitted. So gammas are a definite yes from the point of view of physics. They're also going to have to be less than about 5 MeV, because once you get to around 5 MeV, or in the case of iron 56 10 MeV, you end up with a non-zero, and actually fairly significant, like 0.1 barn, cross-section for a gamma goes in, a neutron comes out. And that neutron comes out at whatever energy, activates what's nearby, and makes all sorts of nasty isotopes that decay the way they will. Do you guys remember, too, from the neutrons discussion, photofission-- the idea that a gamma ray can induce fission of a heavy nucleus?
So if there's any trace of uranium in the food, which there always is, what you don't want it to do is then make a whole bunch of fission products, because even a couple parts per million of uranium, which might be no big deal, could be a big deal if you turn it into fission products instead of plain old uranium, where it's just a heavy metal, like lead. We can deal with a tiny bit of lead in our food. Totally. Yeah, OK. So, while looking through these internet studies-- I don't like this kind of argument: number four, irradiation encourages filthy conditions. I don't think that's a fault of the irradiation. I think that's a fault of the people who are like, oh cool, physics means that we can relax our standards. These are separate arguments. And going through all these things that, frankly, either are false, or are true but the question is how much, or are true to such a small degree that it barely matters-- wading through a lot of these things, let's say PhD theses, articles in The Ecologist, a lot of other FDA papers, various filings from the '60s-- I came upon actually a really useful document, this World Health Organization study on the wholesomeness of food irradiated with doses above 10 kilogray. It's modern. It was done by a peer reviewed study group. It was commissioned for a major organization, so I pulled it up. Yeah. It's like, OK, there's a legit reference in here. I recommend you guys look through this. It's quite fascinating how many studies have been done on rather high irradiation doses. So you can usually get away with stopping germination at around a kilogray or so, a rather low dose of radiation. The highest dose anyone would use is about 50 kilogray. They specifically looked at the highest order of magnitude dose that we use for food irradiation, and looked not just to say are there ill effects, but what are the ill effects? What are the other compounds that are made, and do they matter in the end?
And I want to walk you guys through a few of the bookmarks that I found pretty fascinating. I asked you guys about G-values at low and high temperature. Here's a great plot showing that right there. This is food irradiation done at 20 and minus 40 Celsius to look at the amount of ammonia, nitrates, ferrocyanides-- things that you might not want-- and look at the difference between irradiating at 20 C and minus 40 C. Enormous. Why is that so, physically? AUDIENCE: Diffusion stops. MICHAEL SHORT: Exactly. Diffusion stops when you cease to have a liquid. It doesn't stop completely, but the diffusion constant of anything in a solid is going to be a whole lot slower than that same anything in a liquid. And so this way you can destroy the cells without all of the radiolytic byproducts going around and damaging other food molecules and other food cells. Then we got into the question of off smells and funny sorts of changes. So one of the comments said it can change the flavor, odor, and texture of food: pork can turn red, beef can smell like a wet dog, vegetables can become mushy, et cetera, et cetera, et cetera. This review of studies actually looked at what compounds are formed, in which foods, and by how much. The only thing they didn't tell you is, how do they smell? So I looked up a few of those. They looked at hexanes as a function of fat content for a rather high dose of 10 kilogray-- or at least here everything is normalized per 10 kilogray. So this is the yield of this compound in nanograms per gram. Does that sound like a big deal? It may not sound like a big deal, but a lot of these odiferous compounds are detectable by the human nose in parts per billion. So they actually do matter. And I looked up to see what sort of smell hexanes typically have. The word was petrolic: smelling like petrol, or gasoline. Might not necessarily be something you want.
And it is true that when you break up these fat molecules, which usually contain three fatty acids, those fatty acids themselves are very aromatic. You've heard that expression, fat's where the flavor is, right? A lot of it is not just due to-- well, if you just eat butter, it's not that flavorful. Anyone ever tried? I'm glad I'm not the only one. OK, good. But where fat really comes into play is when you heat it and it undergoes all sorts of different chemical reactions with the food nearby, liberating some of the fatty acids. It's part of why lamb smells like lamb and nothing else. Fat's where the flavor is, and 90% of flavor is smell. You've only got five or six tastes-- I think that's still under debate-- but you can smell thousands of different compounds. And so they actually matter. So I started looking at some of the other ones. Heptadecadiene, in micrograms per gram fat per 10 kilogray, for foods containing a lot of linoleic acid. The smell? Carrion beetle sex pheromones. The sex pheromones shouldn't be what turns you off, it should be the carrion beetle. What are carrion beetles? They're the beetles that feed on rotting meat. So this is the juice that rotting meat beetles secrete to attract other rotting meat beetles nearby, to use polite language. It's probably something that you don't want in your food-- or is it? Does anybody know what makes pork smell like pork? It's what? AUDIENCE: Do I want to know? MICHAEL SHORT: You're going to find out. Who likes pork here? Awesome. I'm going to ruin your day. There's a compound called skatole. Can anyone figure it out from the root of the word? Yeah. Anyone ever heard of that wonderful barnyardy smell from a cut of free range pork? Anyone heard this term before? Just raise your hands. Anyone heard of the nice pork barnyardy smell? OK, a couple of people. That barnyardy smell is parts per trillion or parts per billion of skatole. The tiniest, littlest bit of poop. Yeah.
The same sort of compounds that you find in scat, in incredibly small amounts, contribute to the wonderful flavor of really good pork. So just because these compounds are made in higher amounts with higher amounts of fat or dose doesn't mean that they're necessarily off flavors. But it is kind of hilarious to see in what other places you tend to find high concentrations of these. The next one down, hexadecatriene, from irradiation of muscle. The one paper I could find that talked about this in a cocktail of smell compounds says it comes from the odiferous defensive stink glands of red something beetles. Yeah, sounds horrible, right? So if you stopped there you might think, great, so radiation produces odiferous defensive stink gland compounds. But as we know, pork smells great. We don't necessarily know in what concentrations-- what is it? --hexadecatriene would smell good or smell bad to the human nose. There's just no telling. There's some compounds that, in any amount, if you can detect them, are terrible. There's some compounds that go from bad to good. There's some compounds that go from good to bad. Anyone ever smell someone that's slopped on perfume before? Would you describe the smell as good? No. Perfume relies on lots of different compounds in very, very small concentrations. It's supposed to be subtle and enhance your own body chemistry. You're not supposed to smell like a perfume factory explosion. So that's another sort of real life analogy where too much of a good smelling thing can smell really bad. Yeah. And so then, going on from the various odiferous compounds, there were a few other points I wanted to-- oh no, there's another one. OK. Seems like the production is pretty much linear with either dose or with the abundance of the precursor, normalized to dose. This one you tend to find in deodorants. What was the name of this compound? Propanediol type. I don't know the structure, because I'm not an organic chemist, but it's the sort of thing you might find in deodorant.
Again, there's no telling whether or not this would be good or bad in food, and in what amount. That's always the question I want you to remember. If somebody asks, isn't this compound bad, isn't that compound bad? The question is, it depends how much. Yeah? AUDIENCE: Do these graphs show how much more there was afterwards, or just like how much there is in general? MICHAEL SHORT: They do. It shows the dose normalized yield per 10 kilogray in micrograms per gram. So it shows that, depending on how much of whatever precursor-- in this case enzyme inactivated muscle-- there was, how much relative amount of this compound existed. What these graphs are telling us is they're pretty much all linear, and they're pretty much all the same for-- I saw that as human for a second, holy crap. My heart just skipped a beat. --ham, beef, chicken, pork. Oh man. Anyway, going into the conclusions, which again are strikingly different than the internet article would have you believe, I wanted to point some things out. Interesting: irradiating moist foods while frozen in the absence of oxygen significantly decreases overall chemical yields, by about 80%. So it's interesting-- you can irradiate something to 50 kilogray at minus 30 C and it does the same chemical change as 10 kilogray at room or chilled temperatures, but you do that much more damage to the organisms. So yeah, there you go, better to irradiate at cold temperatures. And there's a few other interesting conclusions. These radiolytic compounds, are they found in food otherwise? Virtually all the radiolytic products found in high dose irradiated foods are either naturally present in foods or produced in thermally processed foods. Before food irradiation you had heat sterilization. In fact, we still use it for quite a bit. And folks a lot of times will talk about the amount of nutritional decline, the lack of nutrition from irradiated foods, and they'll just say it's a bad thing. But that's not the right comparison to make here.
Food preservation in general tends to lower the nutritional content. And there's a few neat tables I want you guys to look at. In terms of macronutrients, do you lose the protein, the fat, the carbohydrates from, in this case, gamma-irradiated mackerel, as a function of dose? Does anybody see a trend here? Take a sec to look at the numbers. AUDIENCE: Seems like it bumps in the middle. MICHAEL SHORT: It does seem like the nutritional content goes a little bit up with small doses, doesn't it? Would you necessarily believe these data at first glance, or at face value? What's missing from this data set for you to draw a statistically significant conclusion? AUDIENCE: Error bars. MICHAEL SHORT: Error bars, right. If this is the graph of, let's say, protein content versus dose in gray, the data appear to do something like this. If the error bars are like that, then you can't draw any meaningful conclusions. So one would have to go back to the study to see-- hopefully they actually had error bars in their measurements. What I can conclude from this is that macronutrients basically don't change up to the highest dose of radiation that we give to any food at all. What about the micronutrients? What about things like vitamins? They do go down somewhat, and they're pretty linear. They're fairly linear with dose. There's not much to be disputed there: yes, irradiating food does destroy some of the vitamins, but not the minerals. Why wouldn't gamma irradiation destroy minerals in food? What's an example of a mineral that you need for survival? AUDIENCE: Iron. MICHAEL SHORT: Iron. AUDIENCE: Calcium. MICHAEL SHORT: Calcium. Yeah, a bunch of other elements or inorganic compounds. Why does food irradiation not affect mineral content? AUDIENCE: If you use low enough energy gammas it's just not going to change the atoms. MICHAEL SHORT: That's right. Yeah. You stay below 5 MeV gammas and there's literally no change in the elemental composition of those minerals.
The vitamins, however, tend to be more complex organic compounds that can be damaged. And one of the big ones is thiamine, better known as vitamin B1. So irradiated food is a little bit less nutritious, but they give a pretty good explanation. Let's see. I think that's later in the conclusions, so we'll get to that. Conclusions on nutrition. Yeah, there we go. So in this case, what they're saying is, well yeah, it takes away some thiamine, but irradiated food does not constitute the major source of thiamine in the diet. So even though it does reduce the amount of thiamine that you get, it doesn't make a dent in your overall nutritional uptake unless you eat nothing but that irradiated food. Does anyone remember when the last case of scurvy in the US happened? Talking about single food diets. It happened right here at MIT. It was a while ago. I think by now it would have been over 10 or 15 years ago. There was a student that decided, I'm going to have the cheapest food budget ever and live off nothing but instant packs of ramen. Now, this is already a nutritional nightmare, but it got worse and worse. So it was the ramen, the flavor packs, and water, and that's just what this person ate the whole time. And then they decided, you know what? I don't really need the flavor pack, because that's just a bunch of sodium-- taking out whatever little micronutrients were left. And then the next logical step was, why bother cooking it? And a few weeks later massive constipation and scurvy ensued-- a disease you don't see anymore, which comes from a deficiency of vitamin C. So if you're only eating one food to begin with, you've got other problems that have nothing to do with food irradiation. Yeah? AUDIENCE: I'm not even surprised that the last case of scurvy was an MIT student. MICHAEL SHORT: I'm not surprised at all either. I'm surprised Soylent wasn't invented here. It seems like the kind of thing that someone here would have done. Anyway, let's take a look.
Ah, irradiation versus heat sterilization. From a nutritional viewpoint, irradiated foods are equivalent or superior to thermally sterilized foods. Why do you guys think that might be? AUDIENCE: They're really sterilizing it by denaturing most of the proteins. MICHAEL SHORT: Indeed. So you'd actually get some macronutrients to disappear if you start to denature or break down those, and even some micronutrients. The idea here is that the amount of micronutrient content will go down roughly linearly with dose. What happens when you reach the temperature of destabilization of those foods? AUDIENCE: [INAUDIBLE] dropped them. MICHAEL SHORT: Yeah, all of it goes away. Once it's temperature unstable, if you keep it at that temperature for long enough you'll destroy pretty much all the nutritional content. So if the choice is between do you heat or do you irradiate, irradiating does less damage in the end. Anyone here ever had an MRE, a Meal Ready to Eat? How do they taste? AUDIENCE: They're not great, but they're not awful. MICHAEL SHORT: But they do last for decades, like many decades. There's an entire channel on YouTube of this guy that just eats MREs from further and further back. I think he's made it as far back as the Civil War and ate actual moldy hardtack from the Civil War. But my point here is that all the way back to World War II, and perhaps beforehand, MREs were sterilized with heat. You put something in a metal and plastic lined bag to keep out all other organisms. You heat it for long enough to kill every single other organism, and they last for decades. And the question is, do you want to eat what's in the bag? If you want some fun between studying for finals, listen to this guy's reaction as he eats some of these 60 or 70-year-old MREs from World War II or the Korean War. AUDIENCE: Why would they subject themselves to this? MICHAEL SHORT: I don't know, for attention.
AUDIENCE: [INAUDIBLE] gets stung by a really painful insect, and I will say, why? MICHAEL SHORT: Yeah. At any rate, canned food, you can tell, has an awfully different taste to it. A lot of these cans are heat sterilized. They're pasteurized, and the heat itself generates a lot of off flavors, so that you can tell the difference between fresh and canned green beans in taste, texture, color, mushiness, whatever other qualifiers you give to food. So to those opponents of food irradiation, I'd say consider the alternatives: either heat sterilization or spending most of your waking hours in the bathroom. AUDIENCE: How do you, like, thermally sterilize meat without cooking it? I've always been really confused about that. MICHAEL SHORT: Yeah, you can't necessarily. There are also spores of bacteria that are incredibly heat tolerant. So you can't necessarily sterilize meat without cooking it. So the best thing to do is to cook it sealed in a can, or sealed in something, and then don't unseal it until consumption time. Yeah. There are other things that you can sterilize without cooking them, like milk. Milk is pasteurized. You heat it to a temperature that's sufficient to kill most of the microorganisms-- not to sterilize it, but to increase its shelf life, so that it takes months instead of hours for that microorganism population to bloom and ruin the milk. I'm sure everyone here has smelled spoiled milk before, right? Doesn't matter how pasteurized it is. Has anyone opened a carton of expired milk after its shelf life? AUDIENCE: One time I poured it into my cereal. MICHAEL SHORT: Yeah. That was a study in two phase flow, right? Liquid, solid. Yeah. Yep. So there's some proof right there: a few bacteria survive. It's the same thing with food irradiation-- you need an enormous dose in order to actually sterilize the food.
So this is usually the only option for folks with extremely compromised immune systems in hospitals. If you want to give them actually sterile food that's actually palatable, you irradiate it to like 50 kilogray. And that kills just about every single organism, including the long lived spores. One other side benefit of food irradiation is that the cells that survive have been blasted by radiation. They tend to be a lot weaker and more sensitive to heat and pH and temperature-- heat, temperature, the same thing --so the cells that make it through the food irradiation are more susceptible to damage, and that helps make things a little bit safer. I'm trying to see if there's any other conclusions. Oh yeah, the old who cares about thiamine: it's unlikely, however, that the irradiated foods of this type would constitute a large enough proportion of the diet to compromise the dietary requirement for thiamine. And this is coming from the World Health Organization. If there's any organization you think you can trust about health everywhere, it's the WHO, it's these guys. And I don't think we have time for more of the conclusions; however, I will post this document up to the learning module site so you guys can peruse it at your leisure, along with the bookmarks, unless I've done it already. I want to open it up for the last two minutes to any questions you guys may have about anything, including final logistics, how it's going to all come down, the review session on Friday, whatever you guys want. AUDIENCE: So the review is 9:00 to 10:00 on Friday? MICHAEL SHORT: Yep. The review is 9:00 to 10:00, or 10:30, whenever we finish on Friday. I'll email out with a room once I secure one. Yeah? AUDIENCE: Where do they usually do food irradiation? MICHAEL SHORT: There are gamma irradiation facilities where they've got these cobalt 60 or cesium sources. These would be processing centers. Yeah, I mean, you can't normally own one of these giant cobalt sources.
These would be specialized processing centers. Yeah? AUDIENCE: So is this usually only done for foods coming into the US from outside? MICHAEL SHORT: Oh no. It can be done for foods grown within and for consumption in the US too. If you want to extend shelf life by small or large amounts you can do it for anything, but there are a number of different types of produce that can only be imported because of food irradiation. One of these that I was delighted about was mangosteens from Thailand. Mangosteens, probably almost no one here has ever even heard of or seen. I'm surprised anyone has. OK, wow, two, that's a record. They're usually only found in South or Southeast Asia. They don't tend to last very long and they tend to be riddled with parasites. One time out of two that I've opened up a mangosteen, a whole bunch of bugs started crawling out. You'd imagine that the US doesn't want that imported into here. But in 2005 Thailand started irradiating their mangosteens. They were approved for sale in this country, and you can find them at H Mart now, down the street. Even food from Hawaii has to be irradiated for consumption in the continental US. Why do you guys think that is? AUDIENCE: Invasive species. MICHAEL SHORT: According to the USDA, Hawaii is effectively a different country when it comes to the sorts of parasites you'd find in the food. It is a part of the US, but it is not agriculturally a part of the continental US. It's got its own unique parasites and pathogens and organisms and critters. So a lot of things coming from Hawaii have to be irradiated for consumption in the continental US. So yeah, food irradiation helps food commerce go around. Any other questions? Yeah? No, it's you. AUDIENCE: What percentage of food actually gets irradiated? MICHAEL SHORT: I don't know what percentage of food gets irradiated. It especially depends on the type. There's some things that don't need it. There's some things that do need it.
I'd wager a guess to say that more imports get irradiated than domestic consumption stuff, but I don't know that for sure. |
MIT 22.01 Introduction to Nuclear Engineering and Ionizing Radiation, Fall 2016. Lecture 4: Binding Energy, the Semi-Empirical Liquid Drop Nuclear Model, and Mass Parabolas. The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: I want to give a quick review so that we can launch into some more technical stuff today. We started talking about this reaction for boron neutron capture therapy, which is the focus of today's lecture. And I want to get this one up on the board, since I'm going to move to different slides later. We have boron 10 capturing a neutron and becoming lithium 7 plus helium 4. There's a gamma ray. And there's going to be some Q value, or energy either released or consumed-- in this case, released-- in the form of the kinetic energy of the recoil products as well as the gamma ray energy. So let's also give a quick review of the Q equation, since I think we covered that last week. If you remember, if you have a general system of, let's say, a small initial nucleus little i firing into a larger nucleus capital I, after the reaction, off comes some small final nucleus and some large final nucleus. If I were to draw these arrows to scale, the little one would probably be moving faster. Then we can figure out how much mass and kinetic energy each of these nuclei has by just conserving everything. So we can write: the mass of nucleus little i times c squared, plus the kinetic energy of little i, plus the mass of big I times c squared, plus the kinetic energy of big I, has to equal the mass of little f times c squared, plus the kinetic energy of little f, plus the mass of big F times c squared, plus its kinetic energy. And what this tells us is that if the total amount of mass changes, the total kinetic energy has to change too-- they've got to exchange energy equally.
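The conservation statement just spoken can be written out in the lecture's own notation, with T standing for kinetic energy:

```latex
% Mass-energy conservation for  i + I -> f + F :
(m_i + m_I)\,c^2 + T_i + T_I \;=\; (m_f + m_F)\,c^2 + T_f + T_F
% Collecting all the mass terms on one side defines the Q value:
Q \;=\; (m_i + m_I - m_f - m_F)\,c^2 \;=\; (T_f + T_F) - (T_i + T_I)
% Q > 0: exothermic;  Q < 0: endothermic.
```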
So we can write the difference in mass or energy by just, let's say, taking all of the mass terms and putting them on one side. So we can say that-- let's just say everything here is multiplied by c squared-- we have M little i plus M big I minus M little f minus M big F has got to equal the sum of the final kinetic energies minus the initial ones, which we also call this Q value. And so by getting the difference in the masses or the kinetic energies at the end, you can figure out whether this reaction is exothermic or endothermic. If you remember, we said if Q is greater than 0, it's exothermic. If Q is less than 0, it's endothermic. In a nuclear reaction, like in a chemical reaction, you've got to input extra energy into the system beyond just the rest masses of the particles to make an endothermic reaction happen. For example, since we found out that this BNCT reaction is exothermic, you can make it go the other way, but you have to impart kinetic energy to one or both of those nuclei to overcome the Q value that you'd get. And I want to put up a couple of other terms just to quickly review. We started looking at the table of nuclides. We learned how to read it, specifically to find things like excess mass and binding energy, the definitions for which I want to leave up here on the board. So we had the excess mass, which, again, doesn't have a physical significance. It's the difference between the actual mass of a nuclide and the integer approximation of its mass just from the number of protons and neutrons. And then there's the binding energy which, since some of you asked, is defined for a nucleus uniquely specified by its total mass number A and number of protons Z.
So the binding energy is a functional quantity-- a function of Z and A, the proton number and the total atomic mass number-- equal to the sum of the masses of its individual nucleons minus its actual mass. So let's leave these things up on the board while we move into some new stuff. So back to where we were on Tuesday. We wanted to figure out, well, how do we calculate this Q value? You can do it any of three ways. You can use the masses, like we have over here. So the masses in amu, or atomic mass units, times c squared gives you the Q value in MeV. Or the difference in kinetic energies gives you the Q value in MeV, although you don't usually just know these off the bat, especially for the final products. Or you can use the binding energies. Because the binding energy is directly related to the mass additively, you can substitute in binding energies here and you'll get the same thing. And so if we have the table of nuclides, you can either use the atomic mass of any nucleus or its binding energy and just look it up directly. The thing with the fewest steps is just to use the binding energies because those are already given in MeV or keV. And then it's just an addition and subtraction problem. If you use the masses, don't forget to multiply by c squared. And remember our conversion factor, which should not be rounded, which is this right here. Again, 931.49, not 931, not even 931.5, or else you're off by 10 kiloelectron volts. And so then the question is, all right, let's try to calculate the Q value of this reaction. I have the individual energies and kinetic energies up here. But just so we have a worked out example, let's actually do this.
So the Q value of this reaction should be the binding energy of lithium 7 plus the binding energy of-- what do we have-- helium 4, minus the binding energy of boron 10, minus the binding energy of a neutron. Now first of all, the easy one. What's the binding energy of a lone neutron? Let's call it out. 0, yeah. A lone nucleon is not bound to anything, so that's easy. That's 0. And we can just use the table of nuclides to look up the other three. Luckily I've got it live over here. So let's just punch these in. We'll have boron 10. And our binding energy right here is 64.75 MeV. Let's look up helium 4. Good. That's showing up on the screen. I think I'll make it bigger so it's easier for everyone to see. The binding energy is 28.296 MeV. And notice everything's already in MeV, so this is nice and easy to deal with. And lithium 7: 39.245 MeV, plus, minus, minus 0 MeV. Let's see. Actually, I think my signs for exo and endo are backwards. Let me fix that right now because the idea is if you release energy-- there we go. Sorry about that. No, wait a minute. Let's calculate this out, figure out what we get. And again, I can't do six digits in my head, but I will do it as fast as I can here. 39.245 plus 28.296 minus 64.75. Ah, indeed. 2.79 MeV. I had it right the first time. So good. Shouldn't second guess myself. Cool. So now we know the total Q value of this reaction. And I'll bring the reaction back up here. We also know, which you can find from measurements, that the gamma ray comes off with an energy of 0.48 MeV, leaving-- let's see how much-- leaving 2.31 MeV for the sum of the kinetic energies of the lithium and the helium nucleus. So let's say T of lithium 7 plus T of helium. Now the question is how did I get to those numbers for the split between the kinetic energies of those two? Anyone have any idea? Yeah. AUDIENCE: Related to their mass? MICHAEL SHORT: Yeah. So it's definitely related to their relative masses.
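The Q value arithmetic above is easy to check in a few lines. Here is a minimal sketch in Python, using the binding energies quoted in class, plus a cross-check against tabulated atomic masses. The seven-digit mass values below are standard table numbers, not values given in lecture, so treat them as an assumption:

```python
# Q value of n + B-10 -> Li-7 + He-4 + gamma, from binding energies (MeV).
# Q = (sum of product binding energies) - (sum of reactant binding energies).
be_li7 = 39.245   # binding energy of Li-7, MeV (read off the table of nuclides)
be_he4 = 28.296   # binding energy of He-4, MeV
be_b10 = 64.75    # binding energy of B-10, MeV
be_n   = 0.0      # a lone nucleon is not bound to anything

q = be_li7 + be_he4 - be_b10 - be_n
print(round(q, 3))   # 2.791 -> positive, so exothermic

# Cross-check using atomic masses (amu) and the un-rounded conversion factor.
# Masses here are standard tabulated values (an assumption, not from lecture).
amu_to_mev = 931.494
m_b10, m_n  = 10.0129369, 1.0086649
m_li7, m_he4 = 7.0160034, 4.0026032
q_mass = (m_b10 + m_n - m_li7 - m_he4) * amu_to_mev
print(round(q_mass, 3))   # ~2.790, agreeing with the binding-energy route to ~1 keV
```

Both routes land on the same answer, which is the point of the "three ways" recap above: masses and binding energies carry the same information.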
More specifically, we have this conservation of energy equation, but we've still got two variables and one equation. We need a second equation. That certainly does relate the mass. Yep. AUDIENCE: Conservation of momentum. MICHAEL SHORT: You can relate their momentum. So we can say that if the initial kinetic energy of the boron and of the neutron was approximately 0, then it's like these two nuclei, lithium and helium, were kind of standing still and all of a sudden moved off in opposite directions, which means they've got to have equal and opposite momenta. So let's say the absolute value of the momentum of lithium has got to equal the absolute value of the momentum of helium. This is our second equation, where we'll use the first one to figure out what are the relative kinetic energies. And so we're going to use a quick trick to say, if momentum equals mass times velocity, we can also relate it to the kinetic energy, which is 1/2 mv squared, by multiplying by things in order to make it mv. So we can multiply by 2. Let's make a little bit of space here. If we take the kinetic energy, multiply by 2, multiply by m, and take the square root, we have the momentum. Let's see: that would give us m squared v squared inside the square root, which gives us mv. So now we can take these expressions and we can say that root 2 times mass of lithium times T lithium equals root 2 times mass of helium times T helium. And they both have a square root of 2. We can square both sides of the equation. And we end up with mass of lithium times T lithium equals mass of helium times T helium. Now we take this equation, rearrange it a little bit. Let's just call that sum of kinetic energies Q to keep things in variable space. And we can say that T helium equals Q minus T lithium. We can take this T helium, stick it in here. We end up with m lithium T lithium equals the mass of helium times Q minus T lithium. There's a missing h right there. There we go. And then from here-- oh good. We've got some blank space right here. Let's see.
So we'd have-- I'll do out all the steps-- equals mass of helium times Q, minus mass of helium times T lithium. And we're solving for T lithium, so we can put this term over on the other side. So let's say T lithium times the mass of lithium plus the mass of helium equals Q times the mass of helium. Then we can just divide both sides by the sum of those masses. Those two terms cancel, and we're left with the expression for the kinetic energy of lithium, which is the mass of helium over the sum of the masses, times the total Q value. Looks like the two of those might actually be backwards, huh? Because the lithium one should be smaller. I'll correct that for the notes when I put them up online, because this ratio should be smaller than 0.5 Q. But at any rate, this is how you actually get them. And I want to point out something, a little flash forward to some of the decay that you're going to be looking at in terms of nuclear decays. Let's talk for a second about alpha decay. In alpha decay, you have a nucleus just sitting around. And then all of a sudden it emits a helium nucleus as well as a recoil nucleus. And one of the questions that came up a lot last year is why isn't the Q value of an alpha decay reaction the same as the kinetic energy of the alpha particle? Let's just write one up on the board, possibly one that you'll be dealing with very soon, hands on. Uranium 235 can spontaneously go to a helium nucleus. That's 92 going to 2 plus 90, and 235 going to 4 plus 231. What element has 90 protons? I think that is thorium, although I wouldn't-- yeah, I think that's thorium. And the idea here is that there is some Q value associated with the kinetic energy of both of these. But it's not all in the alpha kinetic energy, because the thorium nucleus has to take away some of that kinetic energy. And I want to show you this on the diagrams. So let's look at U 235. And we can see that it has an alpha decay to thorium 231. Awesome. Got the symbol right. And it has a decay energy of 4.679 MeV.
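Both recoil splits here, lithium/helium from the BNCT reaction and alpha/thorium from uranium-235, are the same two-body kinematics: equal and opposite momenta, so the lighter product carries the larger share of the energy. A quick numeric sketch, using integer mass numbers in place of the true masses (the usual approximation) and the 2.31 MeV kinetic energy budget from above:

```python
def two_body_split(q, m_a, m_b):
    """Share the energy q between two back-to-back recoil products.
    Equal and opposite momenta plus T = p^2 / (2m) give
    m_a * T_a = m_b * T_b, so the lighter product moves off faster."""
    t_a = q * m_b / (m_a + m_b)   # kinetic energy of particle a
    t_b = q * m_a / (m_a + m_b)   # kinetic energy of particle b
    return t_a, t_b

# BNCT recoils: 2.31 MeV shared between He-4 and Li-7
t_he, t_li = two_body_split(2.31, 4, 7)
print(round(t_he, 2), round(t_li, 2))   # 1.47 0.84

# U-235 alpha decay: Q = 4.679 MeV shared between He-4 and Th-231
t_alpha, t_th = two_body_split(4.679, 4, 231)
print(round(t_alpha, 3))   # 4.599, less than Q because thorium recoils
```

That last number is exactly why the highest-energy alpha line on the decay diagram sits below the 4.679 MeV spacing between the energy levels.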
Let's take a look at the diagram, which actually lists all of the possible alpha decay energies, of which there are many. Had to zoom out a little bit there. Yeah? Anyone have a question? OK. So notice that the difference in energy levels is 4.676 MeV. And if we look at the highest energy alpha ray, it's less than that. And that's because, just like we showed up here, the thorium nucleus has to take away some of that kinetic energy to conserve both energy and momentum. So this is a question that came up quite a few times last year, and I want to make sure you guys don't get tripped up like this. So again, I think I've said it every single day, and I'll say it again because it's the next day: make sure to conserve mass-energy and momentum. That's the whole theme of this class. Yes. AUDIENCE: How do you get that value for [INAUDIBLE] mega electron volts? MICHAEL SHORT: Should have been 2.79 minus 0.48-- oh, did I do a little mental math mistake? That should be-- oh. Yeah, no. A little dyslexia thing: 2.31, not 2.13. Yeah. There we go. Thank you. Cool. OK. Is everyone clear on how to calculate Q values for nuclear reactions using either kinetic energies, which you won't typically know, or masses or binding energies, which you can look up directly from the table of nuclides? Yeah. AUDIENCE: So what did you do down there in the bottom right corner of the chalkboard? MICHAEL SHORT: Of this one? AUDIENCE: Yeah. MICHAEL SHORT: Yep. So I took this expression right here, which is to say the Q value has got to be the sum of the kinetic energies of the lithium and helium nuclei, rearranged it thusly to isolate-- ah, there we go-- yep, to isolate the helium kinetic energy, and then substituted that expression in here to get this one right there. AUDIENCE: OK. MICHAEL SHORT: Yeah. So this way, we have-- in this case, we had, let's say, two unknowns and one equation.
But because we have this momentum equation relating them, we end up with two equations and two unknowns, and we can actually solve this thing. Yep. Yeah, good question. Yes. AUDIENCE: The energy of the gamma, is that just a known? Like the 0.48 MeV? MICHAEL SHORT: That's something either I tell you or you would measure, let's say. That's just for completeness to say, all right, this reaction actually gives off a gamma, and I want to give the right value for the kinetic energies. And we'll get into what gamma transitions are allowed and then how you measure them in the next couple of weeks, actually. Yes. AUDIENCE: So do we refer to Q as the Q you calculated up there, or that 2.31? MICHAEL SHORT: They're the same one, actually. AUDIENCE: The 2.79? MICHAEL SHORT: Oh, I see. That's a good point. So this wouldn't really be Q, would it? But it is the sum of the kinetic energies. This is like Q minus the gamma ray energy. Let me stick that in there. L, i. Yep. Yeah. Good point. OK. Any other questions before I move on? We're going to get into a universal formula to predict, in a so-so way, what the binding energy of any given nucleus will be, and start looking at stability trends so you can predict, just from the number of protons and the number of neutrons, how stable a nucleus will be, with a few exceptions, which we will go over. And this is what's referred to as the semi-empirical mass formula. So I'm going to erase some stuff. Has everyone got the notes on this bottom board right here? OK. Let me know when you're ready, and let's see. Yeah. I want to make sure I move at your guys' pace. We're going to have a graph of binding energy per nucleon versus number of nucleons. Well, anyway, I'll leave that up there and I'll do the work on this board right here. So let's say we wanted to figure out a way to graph, or to predict, the binding energy per nucleon.
So I have this binding energy term over A, where A is the total number of nucleons, as a function of the number of nucleons-- in a generalized way, not accounting for magic numbers or anything else that we'll get into pretty soon. And I don't like the term magic numbers, but that is the parlance that's used in this field, so I'm going to stick with it. Let's try and think about if you imagine the nucleus as a kind of drop of liquid-- and one of the other names for the semi-empirical mass formula is the liquid drop formula, or the liquid drop model. It assumes that the nucleus takes the shape roughly of a liquid drop, and you can kind of treat some of the energy terms accordingly. This is why I have all the different colors of chalk out for this. Makes it a little visually easier to see. So let's start writing a general expression for the binding energy as a function of A and Z, and start thinking about what sort of terms would add to or decrease the stability of a given liquid drop nucleus, where all the nucleons are just kind of there in some sort of floating, crazy, coulombic, strong nuclear force soup. First of all, as you add nucleons to a given nucleus, what tends to happen to the binding energy, in general? Without knowing anything else. Assemble more nucleons, you convert more mass to energy. And you end up increasing the binding energy. So let's call this the volume term. As you increase the volume of this liquid drop, its total binding energy starts to increase. So let's say there's some term that's going to be proportional to A, the number of nucleons that's in this liquid drop. And we'll draw this. Let's see if I can do the trick right. Yes. I love doing that. If we were to graph binding energy per nucleon as a function of number of nucleons for this term, it would just be a flat line because it's proportional to A.
And there's going to be some constant, which we're going to call the volume constant, that says, well, there's going to be some relation between the actual amount of stability gained and the number of nucleons. We don't know what it is yet, but what we're really concerned with is the functional form of this thing. It's proportional to A. Next up, what also happens to a liquid drop as you increase its volume? What other parameters do you increase? AUDIENCE: The surface area? MICHAEL SHORT: Exactly. The surface area. The idea here is that if this liquid drop is made of all sorts of different nucleons-- and let's pretend that they're like atoms in a crystal and they're all binding to each other-- the ones on the outside aren't bound to as many nucleons as the ones on the inside. And so the more nucleons there are near the surface, as opposed to inside the liquid droplet-- say, inside some little radius where all a nucleon sees around it are other nucleons-- the less tightly bound they are. And how does the surface area of a liquid drop scale with its volume? To what function or to what exponent? 2/3. I mean, let's take a quick look: the volume of a sphere is 4/3 pi r cubed, and the surface area of a sphere is 4 pi r squared. So if you want to get some expression for how area scales with volume-- I said cube, then I wrote squared-- it's going to end up looking like something times r to the-- let's see. Oh, yeah. I'm sorry, that's not the expression I want to write. But the idea here is that the area is going to scale with the volume to the 2/3 power. So let's pick a different color and say we're going to have some surface term times the number of nucleons to the 2/3.
And if we then adjust this formula to also take into account this surface area term-- which is to say, for very small nuclei, there's a lot of nucleons near the surface, and as the nucleus gets bigger and bigger, more and more of them are in the juicy center and don't know they're near the surface-- we'd have some modification that looks like that. Now I'm going to erase the stuff over here because I'm running out of room. Now these nucleons aren't just untagged, anonymous nucleons. They're either protons or neutrons. And what happens when you try and cram a lot of protons into one space? AUDIENCE: They want to repel each other. MICHAEL SHORT: Yep. They want to repel each other by coulombic forces. And so every proton-- let's pick a different color for the coulombic forces-- and that should be a minus if I want to stick with all the notation. There's going to be some other term to account for the fact that the nucleons, specifically the protons, are trying to repel each other. So in this case, it's going to be proportional to, let's say, the number of protons that we have. And every proton should feel a repulsive force from every other proton. So let's say it's times Z times Z minus 1, so that every proton feels the force of every other proton except for itself. And that's going to be mediated by the total number of nucleons. So if there are more neutrons in the way, it won't be quite as bad. And there's going to be some other constant-- we'll call it a C for the coulombic term-- and that will say that as you make a bigger and bigger nucleus, you start to get more and more coulombic repulsion trying to rip it apart. So if we were to then modify the purple curve-- oops. Trying to get it to go the same as the nucleus gets bigger. I want to make sure it's really to scale-ish. As the nuclei get bigger and bigger, it's going to be a little less stable. And already we're starting to get a curve that is getting close to looking like the binding energy curve from the reading.
But there are a couple more terms to reckon with. So let's pick a fourth color. What other sorts of trends did you notice in the reading about the stability of different nuclei? Let's say you were to take a common nucleus like carbon-12. It's got six protons and six neutrons, and it's exceptionally stable. What about carbon 6? A nucleus of just six protons? Doesn't exist. Exceptionally unstable. What about carbon 24? 18 neutrons, six protons. Sound stable or not? Not at all. So there's some sort of asymmetry term going on. When the number of protons and neutrons is roughly in balance, especially for light nuclei, the nucleus tends to be more stable. So we can write some sort of term-- let's call it an asymmetry term-- that relates to the number of neutrons minus the number of protons. And in this case, for reasons I'm not going to get into, but which are derived in a reference in your reading, there's a squared on it. But suffice to say, if the number of neutrons and number of protons are equal, then the nucleus is predicted to be pretty stable. And this works out quite well for light nuclei. It starts to break down a little bit for heavier nuclei. And then divide by the number of nucleons that there are. I also see a missing A to the 1/3 in the coulomb term, because let's say this nucleus has a volume that scales with roughly the number of nucleons. Then the distance over which that coulombic force acts is going to be like A to the 1/3, or the radius of this nuclear drop. So let's take the asymmetry term. That's going to give us a further modification slightly downward. And finally, there's what's called the pairing term. What's the last color I haven't used? This pairing term delta. And this is not a smooth function. It's a piecewise function that depends on whether you have an odd or an even number of each type of nucleon, protons or neutrons.
And so what this means is it's going to add a little bit of jaggedness to the beginning of the curve and even out in the end, because this delta term can be something like plus-- let's call it a pairing constant-- over the square root of A, or minus that, or 0, depending on whether the nuclei are odd-odd, like odd number of protons, odd number of neutrons, or even-even, or odd-even. Now I know that the derivation is a little hand wavy. That's why we call it semi-empirical. We're taking each of these additive terms and saying it kind of comes from a fairly OK, a little poor, approximation of the nucleus. But what we end up with is a formula whose constants are fit, and whose terms, the actual variables, are derived somewhat from physical intuition. These constants were then fit later by some other folks, and the references for this are in the reading. They're all in MeV, and this gives you a binding energy in MeV for a given nucleus. Now it works some places and it doesn't work in other places. Yeah. AUDIENCE: So the lower case a terms are the same no matter what the nucleus looks like? MICHAEL SHORT: That's right. So this is the universal, semi-empirical, usually-works formula for the binding energy of a nucleus. So the constants don't change, because the variables here are Z, A, and N. And don't forget-- because you'll need to remember this on the homework-- that A equals Z plus N. So for example, if you want to express what is the most stable nucleus, you could take the derivative of this formula with respect to A or Z. And don't forget that you can substitute this expression into there. That's giving you guys a hint for the homework. And let's look at what this actually looks like as far as theory compared to experiment. So the red points are theoretical predictions. The black points are experimental measurements. And all of the different nuclei are shown here. First of all, the curve looks quite a bit like the one that we just hand-wavily made on the board right here.
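As a sketch of how the five chalkboard terms assemble into one formula, here it is in code. The constants below are one commonly quoted textbook fit (in MeV); they are an assumption on my part, since the lecture leaves the fitted values to the reading, and different references use slightly different numbers and pairing exponents:

```python
def semf_binding_energy(a, z):
    """Semi-empirical (liquid drop) binding energy in MeV.
    Constants are one common textbook fit (assumed; references differ)."""
    a_v, a_s, a_c, a_sym, a_p = 15.5, 16.8, 0.72, 23.0, 12.0
    volume    = a_v * a                            # more nucleons, more binding
    surface   = a_s * a ** (2 / 3)                 # surface nucleons less bound
    coulomb   = a_c * z * (z - 1) / a ** (1 / 3)   # proton-proton repulsion
    asymmetry = a_sym * (a - 2 * z) ** 2 / a       # penalty for N far from Z
    n = a - z
    if z % 2 == 0 and n % 2 == 0:                  # even-even: extra binding
        pairing = +a_p / a ** 0.5
    elif z % 2 == 1 and n % 2 == 1:                # odd-odd: less binding
        pairing = -a_p / a ** 0.5
    else:                                          # odd A: no correction
        pairing = 0.0
    return volume - surface - coulomb - asymmetry + pairing

be = semf_binding_energy(56, 26)   # Fe-56
print(round(be, 1), round(be / 56, 2))   # ~494.8 and ~8.84 MeV/nucleon
```

The measured Fe-56 binding energy is 492.26 MeV, so the liquid drop picture lands within about half a percent here; as the error plot in lecture shows, it does much worse for the very lightest nuclei.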
And second of all, there aren't too many exceptions. It's hard to see what the exceptions are, so it's a little easier to draw them in terms of relative error, so you can see where this formula works and where it doesn't. So if you notice, for the small nuclei, approximating them as a liquid drop is not a very good approximation, because you can't treat them as a homogenized, smeared liquid drop. There's either two or three nucleons, and very few protons and neutrons in each. But then as you get to larger and larger nuclei, it starts to come very close, with a few exceptions that I want to point out right now. If you zoom in on that part, you can actually see that at certain neutron numbers, or certain proton numbers, there is an exceptionally high stability of a lot of those nuclei. And that's as you start to approach these what's called magic numbers, or numbers of nucleons which, say, fill all the nuclear energy levels up to a certain shell. And again, it's not for every nucleus as a function of neutron number. But even drawing an envelope around this curve, you can see that the nuclei around 82, around 50, around 28, are a whole lot more stable than the ones in between. And this pattern kind of repeats with larger and larger periodicity. And it kind of looks like right here, at the edge of our knowledge of nuclei, we haven't quite gotten to the next peak yet. This is something we're going to talk about Friday on the quest for super heavy elements, or SHEs, as you'll see in the reading. Yeah. AUDIENCE: So the most stable nuclei peaks were closest to 0? MICHAEL SHORT: Closest to 0 is the closest agreement between experiment and theory. So the ones that are exceptionally stable, which are not predicted by this very simple formula, are up here at the peaks of these magic numbers. And actually, I want you to take a look right here at some of these very small nuclei. Like helium 4 is probably way up here somewhere, all the way over on the right.
It's an exceptionally stable nucleus that is not very well approximated by the liquid drop model, because it's got four nucleons, and they're all on the surface. There's nothing on the inside of a helium nucleus, let's say. And then if you look at stability trends in terms of whether nuclei are more stable with odd numbers or even numbers, you can graph the two separately and look at the number of stable nuclei that have an odd total mass number or an even total mass number. And there's a few things to note here. One of them is the even numbers tend to have a lot more stable nuclei. This is something I mentioned on the second day of class. If you look at the [INAUDIBLE] table of nuclides-- let's go to their home page-- and just look at it sort of in a color way, the blue colors are stable nuclei, and you notice that every other row of pixels here has a whole lot more stable ones. And that's the same thing that we're seeing right here: there's a lot more even nuclei that are stable. If I jump back to our semi-empirical mass formula, notice that this binding energy goes up for even-even nuclei. So when there's an even number of protons and an even number of neutrons, the semi-empirical mass formula does predict an increase in stability, which you can actually see on the table of nuclides, and on this sort of stability trend. And so let's look a little closer and see how many nuclei for each proton number or each neutron number are actually stable. And we graphed the odd and the even ones separately. And what's important to note here is, one, the odd is way lower than the even. There's usually either 2, 1, or 0 stable nuclei at that number. And what other sort of features do you guys notice about this? It's not smooth, first of all. Where are those peaks? Where do you tend to find the most stable nuclei? And where do you tend to find the least? What about these two right here? No stable nuclei at these proton numbers.
And remember, proton number uniquely defines an element. Anyone know what these two might be? What sort of elements? Look to the back of the room if you want. There's a periodic table on the back wall. And you can see, except for the super heavy things down at the bottom, there's a couple of elements that have no stable isotopes. These are technetium and promethium, which are relatively light elements-- I say relatively light compared to things like uranium-- with no stable isotopes. They're also fairly far away from these so-called magic numbers, or other regions where you tend to have a spike in the number of stable nuclei, due to-- well, things that you'll learn in 22.02, in terms of nuclear shell occupancy and stability. But you see the same thing when you graph the neutron number. You can see a couple of sudden spikes right here at 20, 28, 50, 82, and 126. When everything gets really stable, all of a sudden you've got one last gasp of a stable isotope before you go off into nowhere land. So let's start looking at relative stabilities of nuclei, let's say, for a given mass number or a given proton number. Anyone mind if I cover this board? Because you can't roll it up. You all got the notes from here? Cool. That one has less to erase. And I want to keep these formulas up for our reference. Let's say I pose this problem. I want to find out-- make sure I solve the right one; actually, I'm going to check my notes real quick-- for a given A, or for a given mass number, what is the most stable number of protons? This is the question that I'd like to answer here. How would you approach this question using the semi-empirical mass formula, which-- well, you can't see it here, so I will bring it back up on the screen. Here. How would you find this out? Well, let's say for a given mass number A, for a given rough approximation of the total mass of a nucleus, the more binding energy it has, the more stable it is.
Therefore, if you want to find the minimum of the mass as a function of A and Z, given a fixed A, that will give you the most stable nucleus, because it will tell you which value of Z gives you the smallest m-- or rather, the most tightly bound nucleus, the one with the most binding energy. So let's start writing this out. First of all, we can use one of the two equations we already have up here, a relation between the mass and the binding energy. The second one, well, we have it right here. So let's substitute in our binding energy equation and express it in terms of mass. So let's say our mass of a nucleus, as a function of A and Z, is equal to Z times the mass of hydrogen, plus A minus Z times the mass of a neutron, minus the binding energy, because in this case, what I've done right here is I've added mass to each side of the equation, subtracted binding energy from each side of the equation, and we can just take negative that expression and write it all out together. So we have minus a V times A, plus a S times A to the 2/3, plus a C times Z times Z minus 1 over A to the 1/3, plus a asymmetry times the quantity N minus Z squared over A, and minus delta. So we've got one expression for the total mass. We've fixed the value of A because we're going to choose some fixed value of A. Let's say A equals 93. And that's the example that I've kind of worked out in my head. It so happens that niobium has a stable isotope with a mass of A equals 93. And we just found out in some of our research that niobium doesn't stick to chromium very well. That's why I've got it on the brain. So this is what I was thinking about this morning. So for a fixed A equals 93, we want to find what is the most stable Z. How do we do that? Anyone have an idea? Yeah. AUDIENCE: Differentiate? MICHAEL SHORT: Differentiate. Sure. Let's take-- we'll just say the derivative of-- oh, no, it is a partial derivative because we have two variables here. Take the derivative with respect to Z, set it equal to 0.
This will give us the Z number that gives us the minimum mass for a fixed A. So let's actually do this right now. So let's see. The first term gives us m H. And this term expanded out is A m n minus Z m n, so we get a minus m n. I'm going to make one quick correction right here. I want to make sure everything's in the same units. All of these semi-empirical mass terms are in MeV. These right here are in amu, atomic mass units. What do we have to add in order to get these all in MeV? AUDIENCE: [INAUDIBLE] the conversion factor. MICHAEL SHORT: Conversion factor, yeah. Or in this case, we'll just stick a c squared. I'll make a little bit of room so I can stick the c squared in there. C squared. Now everything's in MeV. We're all in the same units. So let's say we have m H minus m n, and these have a c squared on them. And minus a V, plus 2/3 a S times A to the negative 1/3, plus-- I'm going to expand this out to call it Z squared minus Z. I'm also going to stick in N equals A minus Z, so that this expands out to the quantity A minus 2Z, squared. Is everyone with me here? Yeah. AUDIENCE: Shouldn't the A terms, like the a V A and the a S A to the 2/3, stay the same? MICHAEL SHORT: Oh. You're right. I'm deriving with respect to the wrong variable. Thank you. Yep. So we want to do this as a function of Z. So that term disappears. That term disappears. Thank you. Let's work on these ones right now. So we have a C times, Z squared minus Z, over A to the 1/3. So that will give us a plus a C over A to the 1/3 times 2Z, and then we have-- let's see-- minus a C over A to the 1/3, times 1. That's it, actually. And then we have the A minus 2Z squared. So let's expand this out just so we can see it all on the board. So we have a asymmetry over A times, A squared minus 4AZ plus 4Z squared. So let's take the derivative with respect to Z of that. The A squared term goes away. The minus 4AZ term becomes minus 4A, so we have minus a asymmetry over A times 4A. And the 4Z squared term becomes 8Z, so we have plus a asymmetry over A times 8Z.
And the delta term goes away because there's no z dependence. And what we end up with here is the solution for what is the most stable z as a function of A? This is a linear equation. There's only one solution for it. If we actually want to graph this m as a function of A and Z, we end up with what's called a mass parabola, which is to say you can graph the binding energy per nucleon, or the mass, or pretty much similar things, of a nucleus, of all nuclei with A given as a function of z. Think I can do this on the remaining space right here. So let's say, for A equals 93, if this is Z and this is m as a function of A and Z. Let's actually look at a concrete example. So let's go live to the chart of nuclides and start looking at things with a mass number of 93. Looks like I clicked a little too high. There it is. Let's see. Moly 93 I was looking at, and that becomes niobium 93, which is the stable isotope I was thinking. So let's put niobium right here. I haven't given an actual scale to this because I just want to show you in sort of relative terms. So let's say niobium is the stable one. So it's going to have the lowest actual mass, even though it's got an A number of 93. If you look here on the chart of nuclides, you can see it could have come from a couple of different places, either from zirconium 93 or from molybdenum 93. And now is a good time to start introducing these different modes of decay so you can figure out, well, how would a nucleus decay to get to the most stable place? Let's say it came from zirconium 93 and-- let's see, niobium has a proton number of 41. So if we go to zirconium, it beta decays to niobium 93 with an energy of 0.091 MeV. Very, very close. So we'll draw it slightly higher. That's about 91 keV. Zirconium 93 could have come from the beta decay of yttrium 93, as you can see right here. So let's go up the mass parabola and keep exploring. And now we see that yttrium can decay by beta decay with 3 MeV.
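The minimization just done on the board can be checked numerically. A minimal sketch, assuming typical textbook values for the semi-empirical mass formula coefficients (the exact numbers used in class may differ slightly); the pairing term delta is zero here since A = 93 is odd:

```python
# Sketch of the lecture's exercise: for fixed A, find the Z that minimizes
# the semi-empirical mass formula. Coefficients (MeV) are common textbook
# fits, not necessarily the ones on the board.
a_v, a_s, a_c, a_a = 15.8, 18.3, 0.714, 23.2    # volume, surface, Coulomb, asymmetry
m_H, m_n = 938.783, 939.565                      # rest energies in MeV (1H atom, neutron)

def semf_mass(A, Z):
    """Atomic rest energy (MeV) from the SEMF; delta = 0 since A is odd here."""
    binding = (a_v * A
               - a_s * A**(2/3)
               - a_c * Z * (Z - 1) / A**(1/3)
               - a_a * (A - 2*Z)**2 / A)
    return Z * m_H + (A - Z) * m_n - binding

A = 93
most_stable_Z = min(range(30, 50), key=lambda Z: semf_mass(A, Z))
print(most_stable_Z)  # expect 41 (niobium)
```

With these coefficients the minimum lands at Z = 41, niobium, matching the stable isobar on the chart of nuclides.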
So if we put yttrium on this graph, it would be way higher. Yttrium itself could have come from-- well, let's see, strontium 93 with the decay energy of 4.1 MeV. So let's put strontium here. And I think 4 MeV would be, like, off the chart. But whatever. That's the way we drew it. Already we've got the makings of a parabola. And each one of these can decay by beta decay, or does decay by beta decay, in order to get to the most stable nucleus. So let's write the nuclear reaction for beta decay of one of these, let's say, from zirconium to niobium. So we'd have 93-40 zirconium spontaneously goes to 93-41 niobium plus a beta and plus an electron anti-neutrino. That's the part I don't expect you to know yet. But that's the whole energy conservation thing. A little bit of a flash forward. The beta decay energy is not necessarily the energy of the electron that you will measure because some of that energy is taken away by the anti-neutrino. But we'll get into how those relate probably next week. So let's now look on the other side of the parabola and confirm that the semi-empirical mass formula, which predicts something parabolic here and here with respect to z, actually checks out. So let's back up to niobium 93 and notice that it could also have come from electron capture from molybdenum 93. So let's put molybdenum right here. And it decays with an energy of 0.4 MeV into niobium, which let's say it's around here. Let's keep going through the chain. Anyone have any questions so far while we keep going? Cool. Let's trace it back up the chain. Technetium 93 can beget molybdenum 93 by electron capture with a much higher energy, 3.201 MeV. I'm going to extend our graph because we need the space. Technetium, another 3 MeV. And let's go one more back. Technetium can be made by electron capture from ruthenium 93 with an even higher energy, which means a higher difference in mass between these two. So so far-- let's see, what's 6 MeV for ruthenium here? It would be like there, I guess.
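The decay energies just read off the chart can be stacked up to reconstruct the two arms of the parabola numerically. A sketch using the values quoted in class (mass excesses in MeV relative to stable Nb-93; the ~3 and ~6 MeV entries are the rounded numbers from the lecture):

```python
# Reconstructing the lecture's A = 93 mass parabola from the quoted decay
# energies (MeV), measured relative to stable Nb-93 at zero.
beta_side = {"Nb-93": 0.0}                       # beta-minus arm
beta_side["Zr-93"] = beta_side["Nb-93"] + 0.091  # Zr-93 -> Nb-93
beta_side["Y-93"] = beta_side["Zr-93"] + 3.0     # Y-93  -> Zr-93
beta_side["Sr-93"] = beta_side["Y-93"] + 4.1     # Sr-93 -> Y-93

ec_side = {"Nb-93": 0.0}                         # electron-capture arm
ec_side["Mo-93"] = ec_side["Nb-93"] + 0.4        # Mo-93 -> Nb-93
ec_side["Tc-93"] = ec_side["Mo-93"] + 3.201      # Tc-93 -> Mo-93
ec_side["Ru-93"] = ec_side["Tc-93"] + 6.0        # EC parent of Tc-93, ~6 MeV

print(beta_side)
print(ec_side)  # both arms climb away from the stable isobar
```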
There's our mass parabola, right from the data. So I like doing this better than just showing you a diagram because you can actually try it for yourself. Pick a fixed A, change A, and construct the mass parabolas yourself. Now the question is how could these decay into niobium 93, which is the stable isotope? I have negative one minutes left, so I'm very quickly going to tell you for large energy changes, it can either be positron decay or electron capture. And we'll go over what these modes of decay are next week. This can be, again, positron or electron capture. And for small amounts of decay energy, it can only be electron capture because in order for positron decay to happen, you have to be able to create the positron. And the positron plus the extra electron ejected to balance charge has got to be 1.022 MeV, or same thing as what's known as two times the rest mass of the electron. I'm going to stop there, and we'll pick up with lots of examples and questions tomorrow. The last thing-- well, I'll go over the next problem set tomorrow. I want to make sure everyone's seen it seven days before it's due. And the best way to do that is to show it on the board. |
MIT_2201_Introduction_to_Nuclear_Engineering_and_Ionizing_Radiation_Fall_2016 | 23_Solving_the_Neutron_Diffusion_Equation_and_Criticality_Relations.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So Tuesday, we developed the largest equation that you'll probably ever use at MIT. Thursday, we destroyed it down to a rather manageable size. And today, we are going to solve it and actually show you guys how to work out different reactor problems all the way from simple one-group homogeneous reactors to expanding intuitively, not directly, from the mathematics to pose and solve two-group reactor problems like the one they did on the AP 1000 reactor, where they separate the neutrons into a fast and a thermal group. And I wanted to put up where we left off. We have some equation describing fission, (n, 2n) reactions, photofission, absorption, and leakage or diffusion as the big balance equation for how many neutrons are there in some reactor. And I think last time, I had some example reactors up on the board ranging from an infinite slab with a thickness a, to a cylinder with-- we'll call it a thickness a and a height z. And the question that we want to be able to answer is, if we draw a graph of flux versus x through this reactor, and in this case, it would be r and z, what do these functions look like? What is the form of this flux? And so today, from this equation, we're actually going to solve it. So bear with me. There'll be maybe 20 minutes of remaining derivation, and then I'm going to teach you how to use it. So today's class is going to be more like a recitation of how do you actually use this equation instead of getting to it.
First of all, I want to simplify things, which is going to make it-- de-escalate. If we simplify things and don't worry about these (n, 2n) reactions and photofission, we have the equation that you'll actually see in your reading. I'm going to drop them for now-- and I realize I'm missing one little gamma-- because they're just extra terms. They were instructive in writing the neutron transport equation because all the terms looked similar, but now they're just kind of extra things for us to write. And the last thing that isn't in units of flux is this Laplacian operator, for those who don't remember. And this Laplacian operator takes different forms in different dimensions and different coordinate systems. For 1d Cartesian, it's pretty easy. It's just the double derivative, let's say in x. For cylindrical, it's significantly uglier: 1 over r times d over dr of r d over dr, plus d squared over dz squared. So this is what the Laplacian operator looks like in the case of a finite cylindrical reactor. So first we're going to focus on the infinite slab case right here because it's a lot easier to solve, analytically. And then we'll show you how it would go for the cylindrical reactor which looks a lot more like the reactors you'll see everywhere else in the world. So let's start doing a little bit of rearrangement and isolate that Laplacian, so we can then subtract d del squared phi from each side. OK. And that cancels those out. And then we've got something on the left-hand side in units of del squared phi, and the right side is all in units of flux. So then we can divide everything by the flux. And if we then cancel out all the flux terms, all two of them, we're left with something awfully simple. And the last thing we can do is divide by d on both sides. And I'm sorry. I had to cancel that flux because that's the way that goes. The d's cancel, and I'll redraw what we have right here. So we have minus del squared flux over flux equals constants. And what are they actually?
It'd be 1 over k times nu sigma fission minus sigma absorption over d. Everyone remember why we're putting these bars on the cross sections and d and everything? Can someone just tell me? AUDIENCE: Averaged? PROFESSOR: That's right. We've averaged over all energy, so some average cross-section would be the average from our minimum to our maximum energy of that cross-section as a function of energy times flux, divided by the integral of the flux over that same range. So by this, we're saying, we're averaging the cross-section somehow over the whole energy range. Now, I ask you guys, what sort of functions in Cartesian space happen to have their double derivative over themselves equal to a constant? AUDIENCE: Exponentials. PROFESSOR: Is what? AUDIENCE: Exponentials. PROFESSOR: Exponentials, or sine and cosine. You've basically said the same thing two different ways, yeah, because the plus and minus exponentials can be rewritten in sines and cosines. So if we now assume that our flux solution has to take the form of, let's call it a cosine bx plus c sine-- what's the next constant? fx. I don't want to use d again for obvious reasons, and e stands for energy. So f is the next letter that we haven't really used. The flux has got to take this shape or else this condition is not satisfied. So this is the easy way of solving differential equations, which is guess the solution from previous knowledge or experience or something. So based on this, if we were to draw a flux profile, let's say, right here was x equals 0, one of those terms has got to go away by reasons of symmetry. What do you think it'll be? Which of those halves right there is symmetric about x equals 0 right here? Sine or the cosine? AUDIENCE: Cosine. PROFESSOR: Cosine. Yep. The sine, it can be inverted around your coordinate system, but it's not symmetric. It's not a mirror image.
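The guess can be sanity-checked numerically: for a cosine, minus the double derivative over the function itself really is a constant. A quick finite-difference sketch with an arbitrary value of b:

```python
import math

# Numerical check of the claim on the board: for phi = cos(b x),
# -phi'' / phi equals the constant b squared. The value of b is arbitrary.
b = 0.3
h = 1e-4  # finite-difference step

def phi(x):
    return math.cos(b * x)

x0 = 0.7
second_deriv = (phi(x0 + h) - 2 * phi(x0) + phi(x0 - h)) / h**2
print(-second_deriv / phi(x0), b**2)  # the two numbers agree
```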
So actually, the sine term goes away and we've got to have some solution which looks like a times cosine b of x because right here our d flux dx should equal 0 at x equals 0. Now, I've intentionally drawn the flux not to go to zero at the very edge of the reactor. If the flux automatically went to zero at the edge of the reactor, there'd be no need to shield it, right? So there are some neutrons streaming out of this reactor. And this distance right here is actually equal to two times the diffusion constant. Let me get rid of the stuff we don't need anymore. This is what we call the extrapolation length. And now I should also mention, I'm not going to derive it because I think we've done enough deriving for one week, but I'll just give you the form. This diffusion constant can actually be expressed in terms of other cross sections-- 1 over 3 times the quantity sigma total minus mu nought sigma scattering-- where this mu nought is what's known as the average scattering angle cosine. And it approximately equals 2 over 3A, where A is the average atomic mass of whatever it's scattering off of. So this d that I've just introduced from some physical analogy actually has an expression from cross sections and the material properties that you can look up. So we've now turned it into some sort of condition where everything can be looked up in the JANIS library, or whatever cross section library you have, plus whatever number densities you actually have in your reactor. So the picture's starting to get very real and very physical. So now, if we assume that phi takes the form of a cosine bx, let's plug it in. So if we rewrite this expression-- I think I'll need another color for a substitute-- so we'll take a cosine bx and stick it in there and stick it in there. And we'll end up with, minus del squared cosine is going to look like a times b squared cosine bx over a cosine bx equals those constants. It's starting to get very simple. Keep the bars on there because they're quite important.
And if we cancel things out, the cosine bx's go away, the a's go away, and we're left with whatever is inside that cosine. B squared equals a bunch of material properties. There's no information in here about the geometry of our reactor. There's only the material properties, whereas over here, this constant b, I'm going to add a little g to it which stands for geometry. And we've now set up a condition where if you know the geometry and the materials in the reactor, you can solve for the final unknown which is its k effective. How critical or not is this reactor? So what would b have to be to make this cosine valid? So what would bg equal if we have the form of this flux like so? I'll give you a hint. If the cosine goes to zero right here at some reactor length, a over 2, plus some extrapolation length, 2d, then how do we make the cosine equal to zero at this point? What does bg have to equal? AUDIENCE: Pi halves over 2d? PROFESSOR: Quite close. Bg would have to equal pi over a over 2 plus 2d, such that when you substitute-- or over 2, right, yeah. Oh, yeah, OK-- so that when you substitute x equals a over 2 plus 2d into there, this cosine evaluates out to zero. Does that make sense? Cool. So now, we have bg. There's nothing here but geometry plus a little bit of extrapolation length right here. Yeah? AUDIENCE: Shouldn't there be an over 2? PROFESSOR: Oh, yeah. There should be an over 2, so that when you plug in x equals a over 2 plus 2d, you get pi over 2, and cosine of pi over 2 is 0. Yep. Cool. So now we have our bg. So we substitute that back in there. Let's just continue to call it bg since we have it over there. And we have this much simpler expression, bg squared equals 1 over k nu sigma fission plus sigma absorption over d. Then, where's our rearranging color? We can multiply everything by d. And let's see. That should have been a minus. Copied myself over wrong. The d's go away. You can then add sigma absorption to each side. And then those go away.
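The boundary condition just worked out can be sketched in a few lines: pick Bg so the cosine vanishes at the extrapolated boundary a/2 + 2D. The slab width and diffusion constant below are made-up illustration values:

```python
import math

# One-group slab flux shape from the lecture: phi(x) = a cos(Bg x), with Bg
# chosen so the cosine hits zero at the extrapolated boundary x = a/2 + 2D.
width = 100.0   # slab thickness a (cm), hypothetical
D = 1.5         # diffusion constant (cm), hypothetical
Bg = (math.pi / 2) / (width / 2 + 2 * D)

def flux(x, amplitude=1.0):
    return amplitude * math.cos(Bg * x)

print(flux(0.0))                 # peak at the centerline
print(flux(width / 2 + 2 * D))   # ~0 at the extrapolated boundary
```

The symmetry argument from the board also falls out for free: a cosine gives the same flux at +x and -x.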
And then, you can multiply everything by k, and those go away. And finally, divide everything by sigma a plus d bg squared. And what we're left with is our criticality condition. Our k, our criticality, is pretty simple, nu sigma fission over absorption plus leakage. So finally, after all that derivation, we've arrived at some intuitive result. Remember we said that k, the criticality, or the k effective, is the ratio of gains to losses of neutrons. So that's exactly what we have. The only gain mechanism right here is by fission, and the loss mechanisms are either absorption by the stuff in the reactor or leakage outside of the reactor. And so this is actually how you tell when the reactor's in perfect balance, is if this condition is satisfied, and if it equals 1, then the reactor is critical. So now, we can start to play with this. And let's say we started off with a critical reactor and all of a sudden, we were to boost the absorption. What should happen to k effective? If you start absorbing more, right, it should go sub-critical. Now, mathematically speaking, that means this denominator gets larger. So this ratio gets smaller. And therefore, k effective has to go down, as well. Now that's an easy case. Let's start exploring some of the more interesting ones. Let's say you raise the temperature in the reactor. Do we necessarily know what's going to happen next? Let's work it out. So most cross sections, when you raise the temperature, will actually go down in value due to a process called Doppler broadening that you'll learn about in 22.05. But suffice to say for now that cross sections tend to go down with temperature. The most important reason why is because a macroscopic cross section is a number density times a microscopic cross section, and if the temperature goes up, then the density goes down, and the number density goes down. And if the number density goes down, the macroscopic cross-section goes down. The atoms just spread out from each other.
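The criticality condition and the absorption thought experiment can be written out directly. The one-group constants below are hypothetical, chosen only to exercise the formula:

```python
# One-group criticality condition from the board, as a function:
# gains from fission over losses to absorption plus leakage.
def k_eff(nu, sigma_f, sigma_a, D, Bg_squared):
    return nu * sigma_f / (sigma_a + D * Bg_squared)

# hypothetical group constants (cm^-1, except nu and D)
base = k_eff(nu=2.4, sigma_f=0.05, sigma_a=0.10, D=1.5, Bg_squared=0.002)
boosted = k_eff(nu=2.4, sigma_f=0.05, sigma_a=0.12, D=1.5, Bg_squared=0.002)
print(base, boosted)  # boosting absorption drives k effective down
```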
So regardless of what happens at the microscopic cross-section, which I'll leave to Ben and Cord to teach you next year, we know that the macroscopic cross-section goes down because it gets less dense. So let's try and work out now, what would happen to k effective? So what will happen to nu if we raise the temperature? Nothing, let's hope. What happens to sigma fission? This goes down a bit. But what about sigma absorption? Sigma absorption is going to go down. Does bg change? Has the geometry of the reactor changed? Probably not. Might have thermally expanded by a few nanometers, but let's just say, it doesn't change at all. What about the diffusion constant? Let's work that out. Sigma total is going to go down. Sigma scattering is going to go down. So probably what's going to happen is, this diffusion constant is going to go up, which means that if the atoms spread out more, neutrons will move farther, on average. Hopefully, that makes intuitive sense, because if the cross-sections go down, then a neutron can move farther before an average interaction. So the diffusion constant is probably going to go up. Which way does k effective go? You're correct. No one said anything because you can't really say anything. So it depends on the relative amounts that these increase or decrease. So depending on what you choose for your materials, you can have what's called positive or negative temperature feedback, which means in some conditions or scenarios, what you want to happen is that if the temperature goes up, k effective should go down, but not necessarily so. Depending on what you use, you can actually have situations where raising the temperature raises k effective, and that is some seriously bad news and is actually outlawed. You can't design a reactor with positive temperature coefficients. 
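The temperature thought experiment can also be sketched: scale every macroscopic cross section down by a density factor f and the diffusion constant up by roughly 1/f, and see which way k effective moves. All numbers are hypothetical; with a different balance of terms the sign of the feedback could flip, which is exactly the lecture's point:

```python
# Temperature feedback thought experiment: a density drop scales the
# macroscopic cross sections down by f and the diffusion constant up by ~1/f.
def k_eff(nu, sigma_f, sigma_a, D, Bg_squared):
    return nu * sigma_f / (sigma_a + D * Bg_squared)

f = 0.98  # a 2% density drop, made up for illustration
k_cold = k_eff(2.4, 0.05, 0.10, 1.5, 0.002)
k_hot = k_eff(2.4, 0.05 * f, 0.10 * f, 1.5 / f, 0.002)
print(k_cold, k_hot)  # with these numbers, extra leakage wins: negative feedback
```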
So this is the first little taste of reactor feedback is, now that we've written this criticality condition, we can start to explore what happens when you start probing the reactor. So let's say, what happens if you just add more reactor? In this case-- where's my green? All the way over there-- without changing the materials, what happens when you make the reactor bigger? What increases, decreases or stays the same? Let's just work it through. Does nu change? Sigma fission? No. Sigma absorption? D? How does bg change? Bg decreases, and as you'd expect, if you add more reactor to your reactor, the k effective should increase. And so this, hopefully, is starting to follow some intuitive pattern. With a given criticality condition, in some situations, you can work out, will the reactor gain or lose power? Speaking of, where's the power? Where'd it go? Yeah? AUDIENCE: The kinetic energy of neutrons? PROFESSOR: So yes, the power comes from the kinetic energy of neutrons, but where did the power go in our expression? Yeah? AUDIENCE: Power's not dependent on criticality. PROFESSOR: That's right. That's exactly right. And it follows directly from the math. This a got canceled away. It doesn't matter. You can actually have what's called a zero power reactor. So the power of the reactor and its criticality are not necessarily linked. You can have a reactor that is critical while producing tons of power. You can also have a reactor that is critical while producing-- I won't say zero, but an infinitesimally small amount of power. And they actually have built these. They're great test systems for testing our knowledge of neutron physics because you've got a reactor that's producing maybe 10 watts of power. It's easy to cool by blowing a fan on it, let's say. But you can still measure the neutron flux in different places and test how well your codes are working with a much safer configuration than sticking probes into a gigawatt commercial reactor. Yep? 
AUDIENCE: So [INAUDIBLE] steady state reactor, how are you [INAUDIBLE] if it's not really at the steady state? PROFESSOR: That would push the reactor out of steady state. Indeed, so on Tuesday, we're going to start covering transients, and if k effective becomes something other than one, the reactor is no longer in steady state. It's not in equilibrium because the gains and the losses are not equal to each other. And at that point, the power will start to change, what you guys all saw when you manipulated the reactor power. So since you brought it up, does anybody remember, if we draw power as a function of time, let's say the reactor power was cruising along, and right at the time we'll call now, you withdrew a control rod. What happened when you guys did that? Anyone, because you all did it. It went up. OK. And then what? When you stopped withdrawing the control rod, did it level out? So everyone, tell me what happened. AUDIENCE: It slowed down. PROFESSOR: It slowed down the increase, but it didn't stop going up. Kind of freaky. So this is why I had you guys do that power ramp because just controlling a reactor is not as simple as, remove the control rod, you remove a certain amount of reactivity, because there are time-dependent effects due to delayed neutrons, neutrons that aren't immediately released after fission that can have a large effect on how you control your reactor. And then if you wanted to decrease it again, let's say you put the control rod back into its original position, the power would not come back to its original position. But then, eventually, it would start to coast down and probably go beneath its original position, at which point you have to constantly be controlling those control rods to keep it in what I'll call dynamic equilibrium. You never really hit static equilibrium unless it's off. As a speaker at a seminar I went to a couple weeks ago said, I don't study biological organisms in static equilibrium because that's better known as a dead organism.
They're not very interesting. But dynamic equilibrium sure is, for them and for us. So with this process of getting the single-group balance equations, I'd like to generalize this to the two-group balance equations. And this is something you can actually use. In every case, we're going to say, let's put our gains on the left and put our losses on the right, if we want to have this reactor in equilibrium. And now we'll separate our equations into the fast and thermal regions of neutron energy. So we'll call those f, and we'll call the thermal ones th. So using this model of our neutron diffusion equation, what are the gains of neutrons into the fast spectrum? AUDIENCE: Straight from fission. PROFESSOR: Yeah, straight from fission. So how do we write that? This process that we're going through now, this is where recitation really begins because this is how I want to show you guys how to approach a problem, let's say, a one-sentence statement like, give me the flux anywhere in a two-group reactor. This is how we go about it. So how do you equationally put the neutron gains from fission? What terms do we have up there right now? AUDIENCE: Your neutron multiplication factor and your cross section. PROFESSOR: Yep. Yep. You'll have your nu, your neutron multiplication factor. And now, we're actually going to split every cross section into its fast and thermal energy ranges because now we're actually splitting that energy, like we did when I drew that crazy cross section. Let's see, we had log of e versus log of sigma, and they all follow roughly that formula. And we split it and said, if we want to draw an average cross section, it would look something like this. And that would be our sigma thermal and this would be our sigma fast. So that's what we're doing here. So now it gets a little more complicated because both fast and thermal neutrons can contribute to fission. So how do we write this in terms of equations? 
AUDIENCE: [INAUDIBLE] PROFESSOR: We only want the neutrons that are born into the fast region, the fast gains. That doesn't mean you don't have to consider where are the thermal neutrons, because it's mostly those thermal neutrons that, when they get absorbed and make fission, create fast neutrons. So what we'd really need is sigma fission fast times our fast flux because we're going to split every variable into its fast and thermal parts, plus-- let's put parentheses there-- sigma fission thermal phi thermal. So do you guys see what I've done here? We're assuming that every neutron is born in the fast group, where we're cutting this off at around 1 eV. And we are assuming that no neutrons are born below 1 eV, which is a very good assumption. So in this case, both the fast and the thermal fluxes contribute to creating fast neutrons. Is there any other source of fast neutrons? Good, because I don't know of one either. OK, what about losses? By what mechanisms can neutrons leave the fast group? Yeah? AUDIENCE: Aren't they absorbed? PROFESSOR: Yeah. They can be absorbed. So how do I write that? AUDIENCE: Sigma af-- PROFESSOR: af-- AUDIENCE: --times the flux fast. PROFESSOR: Times the fast flux. So only neutrons in the fast flux group will leave the fast flux group by absorption. And what's the other mechanism that we had in our neutron diffusion equation? AUDIENCE: Scattering. PROFESSOR: Yeah, actually, so that's not in the diffusion equation, but you are right. That's the missing piece that is going to be the hard part, so. Let's add that in now. So there's going to be some scattering cross section from the fast to the thermal group, times our fast flux. So not every scattering event will cause the neutron to leave the fast group, but some of them will. So we have to figure out, what is the proportion of those neutrons that will scatter from the fast group to the thermal group?
For the case of hydrogen, it's pretty easy because the probability of a neutron landing anywhere from zero to e, starting off at energy ei, if we had our scattering kernel, is a constant. So that's not too hard. And then last, what other way can we lose neutrons from the fast group? AUDIENCE: Leakage. PROFESSOR: Yep. Leakage. They can leave the reactor, and we can write that as a d fast bg squared flux. Make sure everything has bars that needs them. OK. Now, using the same sort of logic, let's-- Yeah, Luke? AUDIENCE: What's the bg? How is that different from [INAUDIBLE]?? PROFESSOR: It's not. It's the same. It is the same bg that describes the geometry of the reactor. AUDIENCE: I guess what's the subscript b of g? PROFESSOR: G means geometry. Yep. And you had a question? AUDIENCE: Yeah, just in the last flux, [INAUDIBLE] fast flux. PROFESSOR: Yes, thank you. That is a fast flux. Yep. But it's important to note that this flux right here is not a fast flux. We'll get back to that soon. Now, using the same sort of logic, let's write the gains and losses in the thermal group. So what is the only source of neutrons into the thermal energy group? I want to hear from someone who hasn't said anything yet. So Jared, what would you say? Either Jared because I haven't heard from either of you. AUDIENCE: [INAUDIBLE] PROFESSOR: You did. OK, then you. I'm sorry. All right. Yeah, you said the no power thing. Thank you. AUDIENCE: So could you, like if something is absorbed in the fast spectrum, jump down to the thermal? PROFESSOR: Close. I want to replace one word in what you said. If something is blank in the fast spectrum, it goes down to the thermal spectrum. AUDIENCE: Scattered. PROFESSOR: Yes, scattered. Every neutron that leaves the fast group by scattering enters the thermal group also by scattering. And in this case, we want to have the fast flux appear here because the number of neutrons entering the thermal group depends on how many scatter out of the fast group. 
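The flat hydrogen scattering kernel mentioned above can be sampled directly: a neutron at energy Ei lands uniformly anywhere on [0, Ei]. A small Monte Carlo sketch with an arbitrary starting energy just above the 1 eV cutoff:

```python
import random

# Hydrogen scattering kernel from the lecture: outgoing energy is uniform
# on [0, Ei]. Estimate the fraction of scatters that drop below 1 eV.
# Ei and the sample count are arbitrary illustration choices.
random.seed(0)
Ei = 2.0       # incoming energy in eV, hypothetical
cutoff = 1.0   # fast/thermal boundary used in class
outgoing = [random.uniform(0.0, Ei) for _ in range(100_000)]
frac_to_thermal = sum(e < cutoff for e in outgoing) / len(outgoing)
print(frac_to_thermal)  # analytically this is cutoff / Ei = 0.5
```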
Yeah, Luke? AUDIENCE: Would you ever scatter up into the fast group? PROFESSOR: You'll see. Yes. Yeah, great. I just gave something away. Yes. You can, but no, you usually don't. So we would consider that once neutrons enter the thermal group, they're at thermal equilibrium with the stuff around them, and up scattering is rarely a possibility. You'll see. Yeah, quite soon actually. Don't worry. Not like quiz you'll see, but you'll see. Yeah. I've already got some stuff planned out. It's going to be a part of the homework question. So now what loss mechanisms do we have? AUDIENCE: Leakage. PROFESSOR: Yeah, leakage. So we're going to have some separate thermal diffusion coefficient because that diffusion coefficient depends on the cross sections, which depend on the group you're in, times the same geometry, times phi thermal. And what's the only other mechanism of loss? AUDIENCE: Absorption. PROFESSOR: Absorption. We've got our sigma a thermal times phi thermal. Why is there no scattering from the thermal group? AUDIENCE: Didn't you say it was very rare to have it scatter up to the fast group? PROFESSOR: I'd say even simpler. Once you're at the bottom, there's no more lower you can go. So in neutronics, when you hit the bottom, you don't say, throw me a shovel. You say, you're at the final energy group. So now, what we'd like to be able to do is, last thing we want to stick in is our k effective, our criticality, because in reality, this is kind of what we want to know in terms of the geometry and the materials in the reactor. So if we know what we make it out of and how big to make it, we should be able to get those in balance such that k effective equals 1. So the only really unknown here besides the flux unknowns is k. And the reason I don't care about the flux unknowns is, they're going to go away soon. Yeah? AUDIENCE: Does the thermal also have the [INAUDIBLE] over k? PROFESSOR: Absolutely, because the k effective is on the bottom of the total original sources of neutrons.
Just like, let's see. That was one group. Yeah. So I'd say right now, this accounts for the production of all neutrons, and everything else down the chain is losses. Yeah, Monica? AUDIENCE: Do we assume that all neutrons [INAUDIBLE]? PROFESSOR: We know, experimentally, that they tend to be born between 1 and 10 MeV, but since you asked, let's escalate the problem. And then we will de-escalate very quickly, just to say, let's do a thought experiment, right? Let's say some of the neutrons were born thermal. What would we have to add to this expression? There's one variable missing that's not anywhere on these boards, but was there on Thursday and Tuesday. AUDIENCE: Spectrum? PROFESSOR: That's right. Chi, the birth spectrum. So if some neutrons are born thermal, then we would have to add a chi fast here, and we would have to add a chi thermal to say, this is the proportion of neutrons born fast or thermal, times nu times sigma fission fast, phi fast plus sigma fission thermal, phi thermal. And I'm not writing nice because I'm just going to erase it in a second, but to go with your thought experiment, this is what it would look like if some of the neutrons were born thermal. Perfectly fine thing to model. Doesn't happen much in real life, but great exam question for next year. Thank you. AUDIENCE: Next year. PROFESSOR: Next year. I'm not going to give you your own exam question. That's just too easy for you. So for now, let's forget about that stuff and stick with the most realistic situation. Ah, running out of room already. OK. That's for next week. So let's forget about that. What do we do next? We have two equations and three unknowns. Interesting. Or do we really? Well, for one thing, if we can get that top equation all in terms of one of the fluxes, either fast or thermal, then every term is in terms of a flux and they can all be divided out. So let's take one of these equations and substitute in so that we get everything in terms of only one flux.
So let's say, the top one, which has got the k in it, has one instance of phi thermal. So let's isolate phi thermal in terms of everything else. So we have that thermal equation right there. So we have sigma scattering from fast to thermal times fast flux equals two things times phi thermal, which is d thermal bg squared plus sigma absorption thermal. We're actually not that far away. So all we do is, we divide each side-- where's my simplifying color-- substitute. That's not it. Rearrange. Divide everything by this stuff, and those cancel out. And we're left with an expression for phi thermal which we can now plug into that top equation. So we're like one step away from the final answer. There, everything's still visible. And so now we end up with 1 over k times nu sigma fission fast, fast, plus sigma fission thermal times this expression, sigma scattering fast to thermal, phi flux over d thermal. I don't usually spend this much of the class with my back to you, but this is pretty mathematically intense, so I apologize for that. And equals sigma absorption fast, fast flux, plus d fast bg squared fast flux. That's not a bar. That has one. That has one. That does. That's good. OK. Now every single term here is in terms of fast flux. So we can just cancel them from every single term here. And now we're left with an expression for k effective that's just in terms of material properties and geometry for the two-group problem. We're only one step away. So if we multiply everything by k and divide everything by this stuff, we'll just have a sigma absorption plus d fast bg squared. That would equal k. And just like that is the criticality condition for a two-energy group homogeneous reactor of any geometry. All that matters to define the geometry is, what's this bg squared? So this case works for an infinite slab reactor. It works for an actual right cylindrical reactor. 
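The two-group criticality condition just derived is easy to evaluate numerically. Here's a minimal sketch; every cross-section and diffusion value below is made up purely for illustration (real group constants come from evaluated nuclear data), and the fast-group losses include the down-scattering removal term along with absorption and leakage:

```python
# Two-group, bare homogeneous reactor criticality condition, as derived above:
#   k_eff = gains / losses
#         = nu * (Sig_f1 + Sig_f2 * Sig_s12 / (D2*Bg2 + Sig_a2))
#           / (Sig_a1 + Sig_s12 + D1*Bg2)
# Group 1 = fast, group 2 = thermal. All numbers are illustrative only
# (units: cm^-1 for macroscopic cross sections, cm for D, cm^-2 for Bg2).

def k_effective(nu, sig_f1, sig_f2, sig_a1, sig_a2, sig_s12, d1, d2, bg2):
    """Ratio of neutron gains (fission in both groups) to losses
    (fast absorption, down-scattering, and leakage via D1*Bg2)."""
    phi2_over_phi1 = sig_s12 / (d2 * bg2 + sig_a2)  # from the thermal balance
    gains = nu * (sig_f1 + sig_f2 * phi2_over_phi1)
    losses = sig_a1 + sig_s12 + d1 * bg2
    return gains / losses

k = k_effective(nu=2.44, sig_f1=0.002, sig_f2=0.05, sig_a1=0.01,
                sig_a2=0.08, sig_s12=0.02, d1=1.2, d2=0.4, bg2=0.003)
print(f"k_eff = {k:.3f}")
```

Playing with the inputs reproduces the lecture's intuition: making the reactor bigger shrinks Bg squared and pushes k up, while adding absorber raises sigma a and pushes it down.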
You just have to solve for or look up the correct buckling term, this bg squared, which I'll tell you now, we refer to as buckling or geometric buckling, and you've got the solution to this. Let's just check to see what we actually have here. We have nu, material property, material property. All of those are material properties except for the bg's. So this tells you how to design a reactor, physically, and in terms of which materials, to make sure that it's critical. And if we look at what this looks like here, again, it's a ratio of gains to losses because eventually, the losses right here, these are the losses from the fast group. These are the losses from the thermal group. These are the gains in the fast group, noting that some of the neutrons born in the fast group scatter out of the fast group into the thermal group, but don't leave the reactor. So again, it turns out to be a gains over losses ratio. And there you have it. So I want to stop at 10 of-- Yeah. AUDIENCE: Did we drop the scattering term from the fast equation? PROFESSOR: It should be-- did we? Yeah. Let's stick it in right here. So we'll just also stick it here. There. And that flux goes away because it was in terms of everything. Yeah. There we go. Thank you. OK. But again, this represents losses on the bottom, gains on the top, just like any other k effective. So I wanted to stop here at 10 of, 5 of, and answer any and all questions you guys have about going from the neutron transport equation all the way to something that you could solve, and then start to play around with to say, what happens if I switch isotopes? What happens if I raise the temperature? What happens if a chunk falls off of the reactor and it gets smaller? Yeah. AUDIENCE: We got the equation for [INAUDIBLE]. PROFESSOR: Yes. AUDIENCE: [INAUDIBLE]? PROFESSOR: Somewhat.
We can assume that for considerably long enough times, and to a neutron, a long time could be like seconds, that the time and the spatial form of the flux are separable which is something that we'll talk about on Tuesday. But, if you remember, one of the major assumptions we made in the neutron transport equation was steady state. We got rid of any transient effects. We'll bring them back, now that we have a way simpler case, on Tuesday. Yeah, Luke? AUDIENCE: [INAUDIBLE] step, the plus scattering-- PROFESSOR: From fast to thermal. AUDIENCE: Is that also supplied by the sigma [INAUDIBLE]?? PROFESSOR: Where is that going? AUDIENCE: It must be in the denominator, right, because it was over on the right side? PROFESSOR: Let's see. Oh, yeah, we divided by all the stuff on the right side, didn't we? OK. So that shouldn't be there. But it should be there because we divided by everything on the right side. Let's just check that really carefully. So it should have been-- no, that's the thermal one. So we're not worrying about that. Yep. So it would just end up here. Yeah. Good point. Cool. Let's talk a little bit about what I'd want you guys to be able to do with this. So what would I want you to be able to do on the homework and on an exam? With the neutron transport equation, recite it from memory. Well, not really. But if I were to give you the neutron transport equation, I'd maybe want you to explain what some of the terms mean, or tell me how you would get the data, or explain one of the simplification steps and justify why you think it's OK because we actually wrote out the justification for every step on the board. Or explain, for example, what's the physical reason that we can solve the neutron transport equation with this diffusion approximation? And in which regions does that approximation break down? So can anyone tell me, from yesterday, where is the diffusion equation a bad approximation of the flux? Yep. AUDIENCE: Near the control rods or the fuel. 
PROFESSOR: Near the control rods or the fuel, or anywhere else where cross sections change all of a sudden because diffusion describes long distance steady state solutions across places, and where things change drastically, diffusion breaks down. Because we assumed here that the neutrons behaved like an ideal gas or some chemical species with no neutron to neutron interactions, because the mean free path length for those interactions is like, what did we say, 10 to the 8th centimeters, so a megameter? Yeah. I love using those sorts of terminologies. 1,000 kilometers before a neutron would hit another neutron. Or I might ask you to, let's say, reduce the neutron diffusion equation and come up with a simple criticality condition. Or let's say, if you were to make a physical change to the reactor, tell me if you think it would go more or less critical, and what would happen next? Or I could give you a different physical situation, like the up scattering scenario, which I will, and ask you to pose and maybe solve these equations, or at least get forms of the criticality condition. I'm not going to ask you to get tons of flux equations because that's all 22.05 is about, is doing this sort of neutron physics. But I want to make sure that you walk into that class prepared. Plus we've been kind of heavy on the-- you know, this class, the name of it is Intro to Nuclear Engineering and Ionizing Radiation. And so far, we've been pretty heavy on the ionizing radiation and physics. So this is where the engineering comes in. Assuming you have some material properties, you can now pick them to create a reactor in perfect equilibrium. Yeah, Kristin? No? So did anyone else have any questions about the material or about what I might ask you to do with it? Yeah. AUDIENCE: You said this equation would hold for any geometry just based on [INAUDIBLE] neutrons. PROFESSOR: Yep. 
So all that you would do differently is, right here, when we had that Laplacian operator, we took the one-dimensional case of an infinite reactor, finite in one dimension, which meant the Laplacian operator is just the double derivative in x. But you could pose the equation in cylindrical coordinates and say, well, let's say now you had an infinite cylinder reactor, you wouldn't necessarily have sines and cosines that would satisfy this relation. Anyone happen to know what you'd have? The sines and cosines of the cylindrical world, called Bessel functions. So these are the sorts of functions, in cylindrical geometry, that behave similarly to sines and cosines, with kind of regular roots, and that you can describe in a similar way. But I'm not going to get you guys into that. I'll just say, OK, there exist solutions that you can look up in the cylindrical case. And I would not make you derive them by hand because, what's the point? Again, I'm not here to drill you on-- can you do the same math over and over again? I want to make sure that you can intuitively understand, what's a k effective? In a sentence, it's gains over losses. What happens when you push that out of equilibrium? Or what physical situations could push that out of equilibrium? So any other questions for you guys? Yeah. AUDIENCE: Just curious, on the cylindrical graph we have there, what would the graph in flux look like? PROFESSOR: It would look pretty similar. In r, it would kind of come down like that. It would always be symmetric about the center for symmetry arguments, and in z, it would look kind of like that. And so actually in the end, the form of the flux in r and z comes out as the first Bessel function in r. Let's say that's a times the Bessel function-- what would you call it-- times cosine of, this distance is z, so pi z over-- I'm going to have to add a subscript. And there'd be some constant a in there. So what you can assume for multi-dimensional reactors is that the dimensions are separable.
So the r part is solved separately from the z part. And that comes right from the Laplacian operator right here. If you assume that some flux in r and z can be written like the r part as a function of r times the z part as a function of z, then the solution gets a lot easier to deal with. But this is not something I'd ask you to do in any coordinates but Cartesian because those are more intuitive, and you'll get plenty of the other stuff later. AUDIENCE: What's the name of those functions again? PROFESSOR: They're called Bessel functions. So if you want to look up, there's a little bit about them in the reading, but it's one of the more advanced topics that I'm not going to have you guys responsible for. I'd much rather you be able to tell me what happens if you compress the reactor or raise its temperature, or pull out a control rod, or raise the pumping speed and cool down the water, or something like that. So there'll be plenty of those kinds of questions on the homework to help reinforce your intuition, as well as some of the noodle scratchers will be developing a criticality relation or equation for a more complex system than the one I've just shown you here, but using the same methodology and the same ideas. |
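For reference, the geometric buckling Bg squared that plugs into the criticality condition has standard textbook forms for the common bare geometries. A quick sketch, where 2.405 is the first root of the Bessel function J0 mentioned above, and the dimensions are illustrative values in centimeters:

```python
import math

J0_FIRST_ROOT = 2.405  # first zero of the Bessel function J0, for cylinders

def buckling_slab(a):
    """Bg^2 for a reactor infinite in two dimensions, thickness a in the third."""
    return (math.pi / a) ** 2

def buckling_cylinder(radius, height):
    """Bg^2 for a bare finite cylinder, from the separable flux
    phi(r, z) = A * J0(2.405 * r / radius) * cos(pi * z / height)."""
    return (J0_FIRST_ROOT / radius) ** 2 + (math.pi / height) ** 2

def buckling_sphere(radius):
    """Bg^2 for a bare sphere."""
    return (math.pi / radius) ** 2

# Bigger core -> smaller buckling -> less leakage -> higher k_eff:
print(buckling_cylinder(radius=150.0, height=400.0))
```

The sketch also shows why leakage matters less for big reactors: doubling every dimension cuts Bg squared by a factor of four.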
MIT_2201_Introduction_to_Nuclear_Engineering_and_Ionizing_Radiation_Fall_2016 | 2_Radiation_Utilizing_Technology.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL SHORT: I wanted to give you guys a survey of radiation utilizing technology and tell you a little bit about the way this department has changed the way it teaches. It used to be in our department and probably everywhere else around the country that first we teach you the theory of how things go down and understand them, and then we can teach you the context in which they're placed. This resulted in a rather boring curriculum, in my opinion, having been one of the ones that went through this actual curriculum. So for those who don't know, I was an undergrad in this department. And while I learned a lot of great things from folks, I also kept some mental notes on what I would do differently, and now's the chance to do this. So instead, we've adopted a context first theory second approach, which means we tell you where we're going to show you why should you pay attention to the rest of the semester. Then we fill out the rest of the theory and fill in the gaps and then revisit the context to show you what you've learned and how you can understand it. That's why today is going to be a rather light class. So don't take any notes, just listen, enjoy, ask questions. And I'm going to show you some of the applications where we use radiation and the principles of NSE and technology today. 
And for recitation today, since we haven't gone over any technical material, we're going to be heading to my lab to demonstrate one of these things, a sputter coater, which is a controlled system for radiation damage that applies one material to another via the process of sputtering, or actual nuclear or ionic collisions that blast material, in this case, gold, onto whatever you want to coat, which in this case is a pile of pocket change. So we're going to make some gold change today. So the motivation today is really to get at two questions: how can radiation be used for benefit, and what is the physics behind how it can be used? I'll be using a zircaloy fuel rod, the same kind that you'd see in a nuclear reactor, as the pointer because I'm not a fan of lasers. And this is actually incredibly light. I can hold it out at the end, not much visible shaking, and I wanted you guys to see and feel and try and bend what zircaloy actually is like. That's a piece the same diameter and dimensions and stuff that you would see in a nuclear reactor. So give it a slight bend. If you try really hard, you can bend it. But notice how light it is. Notice how strong it is. It's about midway in density between stainless steel and aluminum, but it's a hell of a lot stronger than aluminum. It also has the added benefit that it basically doesn't absorb neutrons. The real reason we use zircaloys, zirconium alloys, in reactors is that they have very low interaction probabilities or cross-sections. And for those who don't know what those are, in a couple of weeks we'll be defining what a cross-section is. This stuff's pretty cool. Also, what makes this nuclear grade zircaloy and not just regular zirconium is that there's no hafnium in it. Hafnium is chemically very similar to zirconium; it also happens to be one of the highest cross-section elements that there is. So you have the stuff that you need to be neutronically transparent in reactors.
It happens to be found in the ground with the last thing you'd want in your fuel cladding. In fact, you can use hafnium as a control rod or control blade. So the difference in chemistry and in cost between nuclear zirconium, what you've got there, and regular zirconium is that the hafnium has been taken out by some very painful chemical separation. Usually you'd find about 3% or 4% hafnium. So there's all sorts of technologies that use the principles of nuclear science and engineering. I'm just going to say NSE from now on. You're all probably pretty familiar with power and maybe familiar with some of these other ones. Like medical isotopes are the backbone of a lot of imaging techniques to find and treat different maladies. Space is basically a giant maelstrom of radiation, so you can both use it and have to shield from it. We'll go over some of the crazier ways of shielding from space radiation, actually today. Semiconductors. The way that the MIT reactor made about 60% of its operating budget until recently was by irradiating crystals of silicon-- single-crystal boules or ingots of silicon-- to make what's called an n type semiconductor, via the following reaction, if I can find an eraser. I'm going to use the shorthand that we talked about last time. Normally you would take silicon and you add a neutron and you end up forming phosphorus. I've not memorized the masses of the isotope, but in the end this is actually a neutron capture reaction, and then a resulting decay, which produces phosphorus, which is what's also known as an n type dopant. It's a sort of extra negative-- what is it?-- a negative-charge-type dopant that changes the conductivity of the semiconductor. There's lots of different ways of doping semiconductors. One of the best and most uniform is to stick things in the reactor.
So back when silicon ingots were smaller, like 4 or 6 inches in diameter, there was a constant train of these things traveling through the reactor getting irradiated and being sold and then cut up into wafers to make devices. And it used to be students' or UROP jobs to load and unload those things from the reactor. Now, it's not that dangerous. As long as you use the right handling procedures, like put them on a cart, pull the cart at arm's length and push like that, your dose is very low. And that's because for a given point source emitter, the dose you receive drops off as 1 over r squared, where r is the distance, which means that at arm's length, let's say your arm is about a meter, you can drop the dose by quite a bit compared to what's called the contact dose. Well, this poor fool held the silicon ingots up to their chest like that. They were OK. They got about 10 months of their yearly allowable radiation dose that won't induce any additional risk of cancer. But to ensure that they would not exceed that allowable dose, they became the administrative assistant for the reactor for the next 10 months. So their job at the reactor was to answer the phone, which is not a radioactive activity, including at the reactor. First, nuclear power. The reason why I know a number of you guys are here is pretty simple. It's just a hot bucket of water. The way we make that bucket of water hot is by putting uranium or other fuel into these rods, assembling a lot of them in a small space where they then heat up by producing nuclear fission, capturing the resultant kinetic energy of the fission products and neutrons and everything else that comes out, and using it to either heat up or boil water that's then just driven through a heat exchanger and a turbine. So aside from everything on this side, it looks basically like any other water cooled power plant. The difference is things get toasty in a radioactive sense, but also pretty well under control.
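The 1-over-r-squared fall-off from a point source, mentioned above for handling the activated silicon ingots, fits in a one-liner. The distances are just for illustration:

```python
def relative_dose_rate(r, r_ref=1.0):
    """Dose rate from a point source at distance r, relative to the rate at
    r_ref, using the geometric 1/r^2 fall-off (ignores shielding and buildup)."""
    return (r_ref / r) ** 2

# Holding an ingot ~10 cm from your chest vs. at arm's length (~1 m):
print(relative_dose_rate(0.1))  # 100x the dose rate at one meter
```

This is why "put it on a cart and push at arm's length" is such effective advice: distance is often the cheapest shielding there is.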
And what's inside a reactor-- if you say this is the diagram of a typical Pressurized Water Reactor or PWR, where the water is pressurized enough so as to stop boiling from occurring, keeping the water liquid, which has a number of safety implications, you've got the core of the reactor right here-- and these things right here are called steam generators. It's nothing more than a heat exchanger that generates steam-- and the steam generated in here goes off somewhere else and drives the turbine. If you look inside this reactor, you'll notice a lot of different fuel assemblies or fuel rods, including things like control rods or shutdown rods, rods made of neutron absorbers like hafnium that we talked about, or gadolinium or boron carbide, B4C, or any other material with a high capture cross-section, meaning a high probability for capturing a neutron rather than letting it go from one fuel element to another to produce more fission, to produce more heat. That's effectively how a reactor works. So we went over a little bit about the fuel. The fission and the energetics is kind of cool. So let's say you start off with uranium, probably the fissile isotope 235, and you send in one neutron-- I think it's 92. Don't quote me on that, though-- and instead of undergoing some sort of a capture reaction or something else, it can split into what we call different fission products, plus anywhere between two and three neutrons-- the usually accepted number is an average of about 2.44 neutrons-- plus gamma rays, plus antineutrinos, plus other energy and some occasional other stuff. The main point here is these fission products-- let's say you had a uranium nucleus and it were to split in half, fission products go in other directions-- they carry with them quite a lot of kinetic energy. And what we'll be doing a lot in the second half of this course is watching to see how do these highly energetic nuclei or atoms-- when they slam into other atoms, how quickly do they lose energy?
How far do they tend to go? These fission products tend to stop in the fuel. Their range is going to be on the order of nanometers. I don't even think it reaches microns. But the neutrons however, as we saw from looking at Chadwick's paper, they can go pretty far, usually on the order of around 10 centimeters in a reactor before they go do something else. So it might make it a few fuel rods over, and chances are get captured by another uranium nucleus making more fission, more neutrons, and some other fun stuff like gamma rays and neutrinos. Anyone not know what a neutrino is? So a neutrino is a very, very low mass but not massless as was found I think like last year; almost speed of light particle that's released as part of radioactive decay. They basically don't interact with matter, but once in a while they do. What that means is that they travel straight through everything. It's been estimated that trillions of neutrinos from space are passing through us per second and on average you won't get a single interaction during a day. In fact, to detect neutrinos they've had to fill old hollowed out salt mines with water and fill them with photo tubes in the hopes of catching two or three a day. What that means is if someone turns on a reactor somewhere anywhere in the world, it's releasing tons and tons of neutrinos and if all of a sudden you start to see two or three coming from the same place, that's a rare event. That's something with some statistical significance. And there's been projects in our department using neutrino detectors to try and detect where reactors are turning on anywhere in the world. So we've been able to sense that the MIT reactor is next door from the building next door. I don't know how well this is going to work when you get to farther distances. But the physics is pretty much there. It's an engineering problem to figure out, well, how do you detect enough neutrinos to get a statistically significant signal? 
That, as far as I know, has not been solved. Hopefully, by this time next year it will. There's also control rods-- rods filled with absorbers, like I mentioned before. If you want to stop this nuclear reaction, you send in something like hafnium or gadolinium or boron. So let's say boron-11, like Chadwick knew, would be able to capture a neutron. And then it would turn into-- what's the next one over? Well, sorry. That's a one and a zero. That's a five. So it becomes boron-12, which should then decay into carbon-12. And then you've captured the neutron instead of letting it get into more uranium and cause additional fissions. So when something's going wrong, or you want to control the power level in the reactor, you insert control rods. They soak up the neutrons, and make the reactor go subcritical. And we'll go over what all these words mean in due time-- various points in the course. There's also coolant and what's called moderation. Does anyone know what I mean by moderation of neutrons? Yeah? What do you mean? AUDIENCE: It thermalizes them. MICHAEL SHORT: It thermalizes them. And in other words, it slows them down. Because the probability that each uranium nucleus can capture a neutron depends on the energy of the neutron. The cross sections for interaction, which we give the symbol sigma for the microscopic or sort of mass independent cross section, they're functions of energy. They're extremely strong functions of energy over the energy ranges we're interested in. Because we're interested in an extremely large energy range. These neutrons tend to be born at around 1 to 10 MeV, or Megaelectron Volts. And by the time they thermalize, like you said, or reach roughly room temperature kinetic energies of about 2,200 meters per second, they can be at-- what is it-- a fortieth of an electron volt, 0.025 eV. So we're interested in nine orders of magnitude of energy. And the cross sections vary wildly over these nine orders of magnitude.
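Two of the numbers just quoted are easy to check. First, that a thermal neutron at about 0.025 eV really does move at roughly 2,200 meters per second; second, the classical two-body result that a neutron can hand off at most 4A/(A+1) squared of its energy in a single elastic collision with a nucleus of mass number A, which is the kinematic reason hydrogen makes such a good moderator. A sketch using textbook constants:

```python
import math

EV_TO_JOULES = 1.602e-19     # joules per electron volt
NEUTRON_MASS_KG = 1.675e-27  # neutron rest mass

def neutron_speed(energy_ev):
    """Non-relativistic neutron speed from kinetic energy, v = sqrt(2E/m)."""
    return math.sqrt(2.0 * energy_ev * EV_TO_JOULES / NEUTRON_MASS_KG)

def max_transfer_fraction(a):
    """Maximum fraction of a neutron's energy transferred in one elastic,
    head-on collision with a nucleus of mass number A: 4A / (A + 1)^2."""
    return 4.0 * a / (a + 1.0) ** 2

print(round(neutron_speed(0.025)), "m/s")  # roughly 2,200 m/s, as quoted
for name, a in [("hydrogen", 1), ("carbon", 12), ("iron", 56)]:
    print(f"{name}: up to {max_transfer_fraction(a):.1%} per collision")
```

Hydrogen can take the whole kinetic energy in one hit, while iron takes at most a few percent, matching the lecture's point that the transferable fraction drops off quickly as the target gets heavier than hydrogen.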
And I'll show you what some of these look like pretty soon. And in this case, in the case of light water reactors, like the PWR we saw, the coolant and the moderator are basically the same thing. You guys remember how when Chadwick put the paraffin in front of the neutron source, he started to see more ionizations. That's because the paraffin is a great source of hydrogen. So is water. Water is an ideal coolant because it takes a lot of energy to heat it up, and a lot to boil it. So you can store a lot of energy with less of a temperature difference in water. And it's full of hydrogen. And kinematically, it's easier for something the size of a neutron and the mass of a neutron to slow a neutron down. Because a neutron hitting a proton can transfer up to all of its energy ballistically. Then that proton won't move very far because it's also got charge on it. If a neutron hits something heavier, like stainless steel or other stuff in the reactor, it cannot, by conservation of energy and momentum, transfer all of its energy. That fraction is actually pretty small. So you'll see. We'll actually calculate what that fraction is. But it drops off pretty precipitously as you start to get heavier than hydrogen. And finally, there's reflection and shielding. We'll get into shielding in terms of how much stuff and how much matter does it take to stop radiation from getting through. In some cases, you can stop it all. In some cases, like gammas, you technically never can. You'll just get what's called attenuation or continuous removal of gamma rays. But chances are, you can't remove every single photon from getting out. It's only a matter of how much do you need it to get down to. And there's a neat aside. Who here has looked down into a nuclear reactor before? Three of you. Wow. Four. OK. What did you see? AUDIENCE: Not that much. MICHAEL SHORT: Not that much. This is a particularly powerful reactor known as the Advanced Test Reactor, or ATR, at the Idaho National Laboratory. 
You won't see any others that look like this. One, because these crazy-shaped fuel elements are not that easy to make. This is a test or a research reactor where things get irradiated. It's about 125 megawatts. And the blue light being produced is called Cherenkov radiation. It's from electrons and things moving, or beta particles, electrons, moving faster than the speed of light in water. Now, as you know, you can't exceed the speed of light in a vacuum. But things can move through other media faster than the speed of light in that medium, effectively producing optical shock waves given off as little blue cones of light for each particle. So when folks say, oh, am I going to go green when I get near radiation? You can say, no, you'll glow blue. They've just got the wrong color on all the TV shows. And then onto fusion energy. Since most of us tend to talk about fission a lot of the time, but how many of you here are interested in going into fusion? Usually, it's at least half the class. And so I figured this used to be a fairly fission-centric teaching style in the department. And I think fusion deserves equal time. Because about an equal number of you want to go into fusion to make it a reality. These reactors are laid out fairly differently. What they'll be is a big, hollow vacuum chamber that's shaped like a donut or a torus, and lots of magnets to confine a plasma or sort of a charged mess of separated ions and electrons that whirls around in millions of meters per second. Once in a while, these ions and electrons, or especially these nuclei, will collide with each other and undergo a fusion reaction, or one of a few fusion reactions that I've written up here for you. So in this case, there's no elements with a symbol d or t. We're just using those to refer to deuterium or tritium as a visual aid. But you should know that they're deuterium and tritium from their atomic numbers. One, which means it's an isotope of hydrogen. 
And their mass numbers, two and three, which is not the mass number for normal hydrogen. And in this case, when you fuse deuterium and tritium, you can produce helium and another neutron. And so then those neutrons can be used to hit lithium, which they'll usually have in what's called a breeder blanket around the outside, which releases more tritium. So fusion reactors actually can produce their own fuel. The trick is they're radioactive gases, so containing them can be kind of tricky. You also need a way to get the helium out of the reactor. But we have one of these on campus. We have one of the only three in the country. It's called the Alcator C-Mod. Have any of you guys seen a tour of this place yet? Almost all of you. So for ever who hasn't, do it this year. Because this may be the last year of Alcator C-Mod's operation. That's not to say there won't be the next fusion device on campus, but there's one here right now. And it might be a while before the next one's built. So if you haven't seen it yet, go and see it this semester, definitely. The reason why fission and fusion work from an energetic standpoint, is if we look at the binding energy per nucleon-- remember, last time we mentioned the binding energy is the difference in energy. If you were to take, let's say, a proton and a neutron from infinitely far away, and bring them together to create a nucleus of deuterium-- we'll call this D-- these two, the energy of the proton plus the energy of the neutron, the rest mass energies, rather, would be greater than the energy of just deuterium. And that little bit of mass that's changed is converted into energy. And this is what's known as the binding energy. If we look at the binding energy per nucleon or per proton or neutron, we can get a relative ranking of how tightly bound each nucleus is. 
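That mass-to-energy bookkeeping is concrete for the D-T reaction above: add up the rest masses on each side, and the deficit, times 931.494 MeV per unified atomic mass unit, is the energy released. A short sketch with standard mass values:

```python
# Q value of D + T -> He-4 + n from the mass defect.
# Atomic masses in unified atomic mass units (u); 1 u = 931.494 MeV/c^2.
M_DEUTERIUM = 2.014102
M_TRITIUM   = 3.016049
M_HELIUM4   = 4.002602
M_NEUTRON   = 1.008665
U_TO_MEV    = 931.494

def dt_q_value_mev():
    """Energy released by one D-T fusion: the mass lost, converted via E = mc^2."""
    mass_defect = (M_DEUTERIUM + M_TRITIUM) - (M_HELIUM4 + M_NEUTRON)
    return mass_defect * U_TO_MEV

print(f"D-T fusion releases ~{dt_q_value_mev():.1f} MeV per reaction")
```

The answer comes out near 17.6 MeV, most of it carried off by the neutron, which is exactly what the lithium breeder blanket is there to catch.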
So for the light isotopes, smashing them together should liberate excess binding energy-- or sorry, excess energy-- because you'll gain back some of that energy by the conversion of mass to energy. Same thing over on this side, just not as extreme of a gradient. If you were to split apart heavy nuclei, like uranium-235, you can release a little bit of energy in fission. And once you get up here to iron, you can't go either way, which is why, if you think about the biggest fusion reactors that exist in the universe-- anyone know what they are? AUDIENCE: Stars. MICHAEL SHORT: Stars, right. They tend to hit cores of iron before they either die out or go all gravity crazy and become black holes or supernovas, or whatever you will. This is kind of the energetic limit for normal nuclear processes. Or if they become a neutron star, then things get beyond the scope of this course. I won't be explaining neutron stars. There's a lot of medical uses of radiation. I don't know if any of you guys have seen these things. It's the only time I'll show a tricky looking biology diagram, because it's kind of interesting to note. These are what's called brachytherapy seeds, little seeds of isotopes that emit a certain type and energy of radiation, selected for their applicability, that can be implanted in the body at the site of a tumor to deliver localized radiation treatment. You can either go in through existing ports on the human, not having to drill or cut a hole in someone, or they can be implanted laparoscopically or surgically. So this way, if you don't want to subject someone to a whole body radiation dose or chemotherapy, or if you want to use it in conjunction with chemotherapy, you can implant a tiny little seed of a radioactive material in there to deliver a certain dose to a tumor, and then take it out.
And that way you know very, very well what the dose is going to be, because you can measure the activity or the number of decays per second of that brachytherapy seed. And you know how it's going to change over time. Because you know the half-life of the particular isotope that you've looked at. There's also things like imaging. You can have someone ingest an isotope like technetium-99 metastable, to highlight certain organs or things in the body that you can then image later by their decay gamma rays or other phenomena. It's also one of those reasons why, when you go in an airport, you have to tell them if you've had a medical imaging procedure. Because a lot of these places have radiation detectors. And if you are radioactive and don't identify yourself, you will quickly be identified and taken into the back room to the probulator, or whatever they're going to do at the airports. I don't know. I've never been searched, and I don't plan on that happening. There's also X-ray and proton therapy, sending in well-known, well energy-characterized radiation to fry tumors or other things. In the case of X-rays, you're relying on what's called exponential attenuation. If you look at the distance into a material, and you look at the intensity of the X-rays-- say, at x equals zero, this is your X-ray source. This is your incoming intensity. It falls off exponentially with distance. You might then ask yourself, all right, if my tumor is this deep, and I apply that radiation dose to the tumor, what about the rest? What about the part of the body that the X-rays have to travel through in order to get to that site? Anyone know how you would deliver more X-rays to a tumor than the surrounding tissue? Anyone have any ideas? Yeah? AUDIENCE: Go from different angles so the rays intersect on the tumor. MICHAEL SHORT: Exactly. Go from different angles so the rays would intersect on the tumor. I'll have a better diagram, but I'll draw one for now.
Let's say, that's the eyes and that's the tumor. You can wear this helmet where X-rays can come in from all different angles. And the X-ray emitter would have to come in from different angles, so that as all the rays intersect, this part gets fried the most, while keeping you from getting too much radiation to the rest of your brain and ceasing to function. There's also radio tracers. I think I already covered those. So imaging, we already showed an image of what this looks like. The first X-ray back in 1895 didn't have that good resolution, but it was kind of striking in that you could see the difference in contrast between bones and tissue. I should replace this with the X-ray of my foot that was my signing bonus at MIT. My first day on the job I went down to clean one of the old rooms in Northwest 13, which is now where my labs are. And I moved a bunch of boxes aside, and a 200-pound steel plate, jagged cut with plasma torch, went down and smashed down on the bones in my foot. And I had one of those temporary feats of superhuman strength and was able to lift it up. I went back to try to lift it up and couldn't move it an inch. I don't know how I got out of the plate. The next thing I remember, I was crawling up the stairs to go to the hospital. But I did get an X-ray, and they were able to sense that the pain in my foot was due to a hairline fracture. It was like a fracture in the bones that basically came back together. But the improvement in contrast resolution in X-rays is what differentiates the ability to see a hairline fracture from just the ability to see that you contain bones. And the reason for this, and we'll be looking at a lot of these curves in this course, is the differential absorption or attenuation of X-rays, or any photons of any energy through different types of matter. And so, for example, here we have the ICRU standardized average soft tissue attenuation, as well as bone. And you notice that there's a few differences in these curves. 
So also, there are some similarities. I'll note that these actually have the same axes with the same units. What do you guys notice that's the same about these curves? How about the value? They're basically the same-- mass averaged with very little differences. If you look at where it hits the y-axis, about 3 times 10 to the 3rd, 3 times 10 to the 3rd. The curves follow basically the same shape. What are the differences? So Sean, what do you think? AUDIENCE: Oh. That little jagged [INAUDIBLE] out there. MICHAEL SHORT: These jagged edges right here. Anyone have any idea why? And these reasons go back to what you learned in high school or in 8.02 in terms of atomic transitions, not nuclear. Anyone here remember the K lines or the L lines? Or what was it-- the-- which emission series were they called-- the different emission lines that you can get from emission or absorption spectra? It all has to do with allowable electron transitions. And notice the units here are in centimeters squared per gram. What's the main difference between soft tissue and bone? AUDIENCE: Density. MICHAEL SHORT: Say it loud enough so I can hear. AUDIENCE: Density. MICHAEL SHORT: Density. Bone tends to be a fair bit denser than soft tissue. So these are mass-- what is it-- mass normalized curves. But the fact is, if you have a bone that has a higher density, then you're going to end up with more absorption. In addition, you can use some of these features and differences to your advantage. Like, if you choose a photon with energy here, it might not be nearly as absorbing in soft tissue as it would in bone. So by selecting the mass of the thing you're trying to image, which you don't control, and the energy of the photon, which you can control, you can produce as much contrast as possible between two different things. Is everyone clear on how that could work? Cool. We'll be going over why the curves have these shapes, especially these jagged edges, pretty soon.
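That contrast argument can be sketched numerically. Below is a toy calculation of the exponential attenuation I/I0 = exp(-(mu/rho) * rho * x); the mass attenuation coefficients and densities are illustrative round numbers I've assumed for this sketch, not the actual ICRU curve values from the slide:

```python
import math

def transmitted_fraction(mu_over_rho, density, depth_cm):
    """Exponential attenuation: I/I0 = exp(-(mu/rho) * rho * x)."""
    return math.exp(-mu_over_rho * density * depth_cm)

# Illustrative (assumed) values for a low-energy photon:
# mass attenuation coefficients in cm^2/g, densities in g/cm^3.
tissue = transmitted_fraction(mu_over_rho=0.22, density=1.0, depth_cm=5.0)
bone = transmitted_fraction(mu_over_rho=0.40, density=1.85, depth_cm=5.0)

print(f"through 5 cm of soft tissue: {tissue:.3f}")
print(f"through 5 cm of bone:        {bone:.3f}")
print(f"contrast ratio:              {tissue / bone:.1f}")
```

The same mass-normalized coefficient multiplied by bone's higher density gives a much smaller transmitted fraction, which is exactly where the image contrast comes from.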
And like you said, this is how you irradiate a tumor with X-rays. Because you can't quite control the amount of dose to any one part unless you split it up into a whole bunch of different rays. Proton therapy is quite different. It's a newer technology. And it relies on very well-known and distinct ranges of charged particles to enter the body with very little damage, stop and do their damage in the tumor, and not come out the other side. They just require significantly more expensive hardware. There's one of these at Mass General Hospital, or MGH, down the road. It consists of a cyclotron or a particle accelerator, which injects and speeds up protons so that they're moving very fast, then sends them in a beam through a bunch of bending magnets and up to deliver the protons to the patient. The way this works is you start injecting the beam. And as it goes through these two magnets, or what are called dees, every time it moves through the magnet, it's a charged particle in a magnetic field. It has a fixed curvature. But every time it's accelerated through this electric field it speeds up, so the curvature gets greater and greater and greater. And it spins outward in a spiral until the protons exit as a beam. And by deciding how long they get to spin, you get to choose the energy of the protons. Why does proton therapy work? This has to do with a difference in interaction between charged particles and photons, which have no charge. Charged particles will lose their energy in a very well-known way, what's called the stopping power formula, until they actually stop in the matter that they are going through. Photons either scatter or attenuate, or they don't. And you can't stop them all. So I want to run a quick Monte Carlo simulation for you guys, and show you what protons stopping in matter looks like. So this is a program called SRIM, or the Stopping and Range of Ions in Matter.
It uses the formulas that we'll be deriving and developing in this class to calculate the trajectory of protons in anything. So let's say, you are made basically of water. So let's say, you consist of hydrogen and oxygen in a stoichiometric ratio of two to one. I think water approximates humans pretty well. So we'll find out what the range of these actual protons is in humans. So what we do know is that it's a proton accelerator. And I know that the MGH accelerator has an energy of 250 MeV, or 250,000 kiloelectron volts. And finally, we decide how thick is the person? So how thick is a person, typically? How many-- what units do we get? How many centimeters thick is the average person? AUDIENCE: Forward or from the side? MICHAEL SHORT: Let's go the shortest distance in, so front to back. Maybe 10? Right? 10 centimeters? Not that much? Can it get halfway through you? Only has to go halfway, because you can always lie on your stomach. All right. Let's go 10 centimeters. Most of the protons go screaming right through you. You notice they don't actually stop in the person. So you don't tend to irradiate people with 250 MeV protons directly. You'll actually slow down their energy to something a little more reasonable, maybe 50 MeV. And then you can actually watch each of these charged particle tracks being computed. As it hits, let's say, imaginary nuclei or electrons, the paths will be slightly deflected. But what's really striking is they all tend to stop at about the same place. That's the really cool thing about charged particle interactions, is if you know the charge, you know the nucleus, you know the energy, you can calculate the range to within a very narrow margin. And what this is doing is just flying. Looks like it's done 70 ions so far. And it will keep on flying them until either you hit the end-- let's say, we set it to do 100,000 ions-- or you just tell it to stop. Also, when you don't have to draw the lines, it goes way faster.
So let's let it get maybe 300 or 400 ions, and we'll show you what the average range of the protons looks like. How far do they go before they stop? If we look at the ion distribution, it's pretty striking. All of the protons, except it looks like one of them, stopped at a very fixed depth of 41.9 millimeters with very little straggle, maybe 0.6 millimeters on either side. So depending on the depth of the tumor, you can even hit a very small tumor, deep or shallow, if you get the distance just right and the proton energy just right. This is why proton therapy centers are popping up all over the world. This is a much more effective, though expensive, treatment for certain types of tumors. At the same time, since we're nuclear engineers, we may be concerned with the amount of radiation damage being done to different materials. And so this is kind of a measure of how much energy the protons are losing as they travel through. Notice, it's not zero. As soon as the protons enter the person, they start to scatter around, undergoing some different interactions. But they mostly don't lose much energy until they reach almost their target depth. And what's called the stopping power is very low at high energies, very high at low energies, which means once they get slow enough, they almost all stop right there at what's called the Bragg peak. And that's the basis behind proton therapy. And you'll be able to understand why every feature of this curve looks the way it does by the end of this course-- probably by the end of this month. So let me stop that simulation because we really could go on forever, but we won't. Then the question is, what do you do if the tumor is too big? If the tumor is larger than that straggling, you actually have to sweep the energy of the proton beam. So you can vary the energy continuously in what's called intensity-modulated radiation therapy, where you change the energy of the proton, sweep it back and forth across the tumor to cover the whole thing.
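A toy stand-in for what SRIM just computed: instead of integrating the stopping power along each trajectory, this sketch simply models each proton's stopping depth as a Gaussian draw around the quoted mean range (41.9 mm) with the quoted straggle (0.6 mm), which is a simplifying assumption, and then looks at the resulting distribution:

```python
import random
import statistics

# SRIM result quoted above for 50 MeV protons in water, used as
# the parameters of a toy Gaussian stopping-depth model.
MEAN_RANGE_MM = 41.9
STRAGGLE_MM = 0.6

random.seed(0)  # reproducible toy run
depths = [random.gauss(MEAN_RANGE_MM, STRAGGLE_MM) for _ in range(100_000)]

print(f"mean stopping depth: {statistics.mean(depths):.2f} mm")
print(f"spread (stdev):      {statistics.stdev(depths):.2f} mm")

# Nearly every proton stops within a few straggle-widths of the mean:
inside = sum(abs(d - MEAN_RANGE_MM) < 3 * STRAGGLE_MM for d in depths)
print(f"within 3 straggle-widths: {inside / len(depths):.1%}")
```

The narrowness of that distribution relative to the depth is the whole point: almost all the dose lands in a band a couple of millimeters wide.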
So you can sweep out in 3D space the size of whatever you want to die, without affecting the stuff that you don't want to die. So in this case, let's say, you'll apply protons of a certain energy for some point, and then another energy, and then another energy. And you can maximize the dose to a pretty flat level, while minimizing the rest of the dose to the patient. So even while changing energies, the most dose is done to the tumor, and as little as possible is done to the rest of the person. We already talked about brachytherapy, but we didn't say why it works. This is the first major topic we'll be talking about in this course. It relies on natural radioactive decay. And for natural radioactive decay, you need to understand decay diagrams, which are energy level diagrams of which isotopes turn into which others, by which methods, and how much energy they release in each type of decay. So for example, the common one is iridium-192, a pretty biocompatible isotope because it's, well, it's like a noble metal. And iridium-192 can decay by one of three pathways and become platinum-192, gaining a proton. Gaining a proton-- what has to happen for that to be conserved? So let's think about this. Let's say we have platinum-192, which decays naturally into iridium-192. I can tell you, because we've drawn the diagram to the right, it's going up one atomic number. So let's just say that it had n protons, and it now has n plus 1. How do we balance this nuclear reaction? What are we missing? Yep? AUDIENCE: [? Can the ?] [? neutron ?] [? turn into ?] a [? proton? ?] MICHAEL SHORT: Can the neu-- OK. So there's a neutron somewhere in this nucleus that turns into a proton. What are the three-- what are the things that we have to conserve in any nuclear reaction? Yep? AUDIENCE: Just a question. Doesn't it go from iridium to platinum, not platinum to iridium? MICHAEL SHORT: Yes. Thank you. I got that backwards. But the numbers are right. The symbols are wrong. 
Something else I'll mention about this class. Please do stay on your toes to correct silly things like that. I don't do scripts because you didn't come here to see me read off a piece of paper. Everything's live. All the derivations are going to be live because it's more interesting. It's certainly more interesting for me to teach, so thank you for catching that. And please do stay on your toes if you see something silly, especially if it's just not the same as on the screen. So like Luke said, we made a proton, or a neutron turned into a proton. What's not conserved in this reaction? Yep? AUDIENCE: Charge. MICHAEL SHORT: Charge. How do we balance that? Well, let's add some other particles. There's got to be some sort of radioactive decay. So what are our choices of particles? Yep? AUDIENCE: An electron. MICHAEL SHORT: Sure. An electron. Or more specifically, we'll call it a beta particle. Just like a gamma ray is a photon that originates in the nucleus, a beta particle is an electron that originates in a nucleus. You can't tell it's a beta particle just by looking at it. An electron is an electron. The only way you'd know is either by its energy, or because there's another source of electrons nearby. So in this case, we get beta decay. This looks fairly balanced. One thing that I'll put in is beta decay is accompanied by an antineutrino, but I did not expect anyone to know that. I just wanted to make sure it's up there for completeness. So what we're relying on is the movement of these electrons, which have high charge and low mass, which means they have very low range. Which means when you implant a brachytherapy seed into a person, the irradiation volume is only as large as the energy of that beta particle will allow. The maximum energies for these beta particles are given by the differences between the starting and the ending energy.
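That bookkeeping can be written out directly. A minimal sketch using the numbers from the Ir-192 decay diagram being discussed (a total decay energy of 1.4597 MeV down to the Pt-192 ground state, and one excited level at 0.612 MeV): each branch's beta endpoint is the total decay energy minus the energy of the level the decay feeds, and the de-excitation gamma carries the level energy back down.

```python
# Energies in MeV, relative to the Pt-192 ground state at 0.
Q_TOTAL = 1.4597                             # Ir-192 -> Pt-192 ground state
levels = {"ground": 0.0, "excited": 0.612}   # Pt-192 levels on the diagram

branches = {}
for name, level in levels.items():
    branches[name] = {
        "beta_max_MeV": Q_TOTAL - level,  # beta endpoint for this branch
        "gamma_MeV": level,               # gamma emitted de-exciting to ground
    }
    print(name, branches[name])
```

So the branch feeding the excited state gives a lower-energy beta plus a 612 keV gamma, and the two routes release the same total energy.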
The way these diagrams are constructed is your ending energy is usually at an energy of zero, which we refer to as the ground state of that isotope. And all of these are relative energies in MeV, or megaelectron volts. So for example, this iridium-192 has a 40% chance of decaying by beta to platinum-192, which means the electron can have up to 1.4597 MeV. And if we know its energy, we know its maximum range. So selecting the right isotope and the right activity for the right tumor is quite important. Notice that there are also other ways in which this thing can decay. It might release a beta particle of a lower energy and reach what's called an excited state of platinum-192, which can then decay by just giving off this extra 612 keV of energy to the ground state. So let's write that nuclear reaction. Let's say we have platinum-192, and I'll put a star to mention that it's excited, becomes platinum-192. Where did the energy go? AUDIENCE: Gamma ray. MICHAEL SHORT: Gamma. So why do you say a gamma ray? AUDIENCE: Uh, because that just seems to me like the biggest source of energy that's released in a reaction like this? MICHAEL SHORT: So you said it's because it's the biggest source of energy that could be released? AUDIENCE: Well, it seems to me, yeah, like, intuitively that would make sense. MICHAEL SHORT: OK. What do you think? AUDIENCE: Isn't it a thing when an electron loses energy or drops an energy level to release a photon? MICHAEL SHORT: Indeed. If an electron drops down in energy levels, you'll have released an X-ray or a photon. But that's not a gamma ray. It's not coming from the nucleus. Yep? AUDIENCE: [INAUDIBLE]. Doesn't it have to be a gamma ray because of-- like, that's the only way it can conserve mass [? momentum? ?] MICHAEL SHORT: Exactly. So the question with this is, what do we have to conserve? Mass, momentum, energy, charge. If we have platinum-192 go into platinum-192, the mass is pretty much the same. Yep. Question, Luke?
AUDIENCE: What does it mean to put platinum in the excited [INAUDIBLE]? MICHAEL SHORT: It means it's at a higher energy nuclear state. It means that there is excess energy in this nucleus. So the difference between ground state or whatever of iridium-192 and the ground state of platinum-192 is 1.4597 MeV. Notice I'm not rounding. Don't round. And we can end up with a beta particle that doesn't quite release all that energy, leaving some in the nucleus in what's called an excited state. It's analogous to if you have, let's say, an atom of a, whatever that happens to be. And since you started talking about different electron energy levels, maybe this atom is helium. And it only has two electrons. And one of them gained some energy becoming excited up to the next energy level. Same thing, but on the nuclear level, these excitations are not in the eV range, they're in the MeV range. But you can think of it as a precisely analogous process for the time being. There are excited nuclear energy levels, and they can also decay by photon emission-- in this case, gamma emission because the masses are basically the same. Remember that the rest mass energies might be slightly different, but the charge is the same. There's no real change in momentum because this is a nucleus that started at rest. And this way the energy can be conserved. Yeah, Sean? AUDIENCE: [INAUDIBLE] in different cases, if they're excited, can they just go through another decay process? MICHAEL SHORT: Absolutely. So there are multiple isomeric transitions or gamma rays. So let's chart one of the paths through here. There's a 14% chance that iridium decays to this excited state, and it can then decay by gamma ray to another excited state, and then decay to ground. So there are lots of different possible pathways. I've chosen a particularly simple isotope because it fits on the slide. In your homework, you're going to get to look at the decay diagram for plutonium-239. 
There are not enough pixels in this projector to show the full complexity of that. So you'll have to zoom in a little bit. But I'm not going to ask you to do anything with it except look at the three most likely transitions out of dozens, maybe scores, who knows? You guys will see. So that's a good question. Yeah. It can decay from an excited state, to an excited state, to an excited state, to an excited state, to an excited state and so on, until it reaches the ground state. AUDIENCE: But does it lose its energy, like, not by going to ground state, but by decaying into some more fission products? MICHAEL SHORT: It wouldn't be fission products, but everything else you said is, yes, it can continue to lose energy by continuously undergoing radioactive decay. And we're going to see some of this when we explore the early origins of the universe to say, if you just started off with a soup of protons and other things, you'd start to form all the isotopes possible, and the shortest half-life ones would then decay-- successive decays, maybe multiple gammas or multiple betas or multiple alphas at the same time-- I'm sorry-- in sequence, until it reached something that was stable, or stable enough that it's still around now. For example, there's no stable isotope of uranium. There's no isotope of uranium that will not undergo alpha decay or spontaneous fission. It's just that the half-life is so long, that there's still some left since the universe began. There's still a fair bit left. But you guys are going to actually calculate as part of your homework later in the course, how much uranium-235 was there when the earth was born? And how much has just disappeared because of the passage of time? So right now, it's typically about 0.7% U-235 by isotopic composition. It was not the case when the earth was born. But you guys will be able to figure that out. Yeah, so good question. And have a rant from me, I guess, in response. I'll try and keep my answers a little shorter.
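The back-of-the-envelope version of that homework question can be sketched with N(t) = N0 * exp(-lambda * t), run backwards in time. The half-lives (about 7.04e8 years for U-235, 4.468e9 years for U-238) and the Earth's age (~4.5 billion years) are standard values I'm assuming here, not numbers given in the lecture:

```python
import math

# Assumed standard values (years):
HALF_U235 = 7.04e8
HALF_U238 = 4.468e9
EARTH_AGE = 4.5e9
f235_now = 0.0072  # present-day U-235 atom fraction, ~0.7% as stated

lam235 = math.log(2) / HALF_U235
lam238 = math.log(2) / HALF_U238

# Run both isotopes backwards: the atom ratio grows by the difference
# in decay constants over the Earth's age.
ratio_now = f235_now / (1 - f235_now)
ratio_then = ratio_now * math.exp((lam235 - lam238) * EARTH_AGE)
f235_then = ratio_then / (1 + ratio_then)

print(f"U-235 fraction at Earth's formation: {f235_then:.1%}")
```

Because U-235 decays about six times faster than U-238, natural uranium was far more enriched when the Earth formed than it is today.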
Oh, here's a crazy one-- not particularly crazy, though. So this is molybdenum 99 decaying to technetium-99 metastable. There's lots of possible decays, but the most likely one is right here. The state above the ground state at about 140 keV, a fairly low-energy and therefore more easy to detect photon. So if you notice, almost all the other excited states, with a couple exceptions, decay down to this 0.14 MeV excited state, at which point you get decays to the ground state. Those also have a rather long half-life. It's a few days. So you can make moly-99 in a reactor, transport it to a hospital, feed it to someone, and use these 140 keV gamma rays, because they come from the nucleus, to image whatever the technetium will bond to. Yep, Carson? AUDIENCE: [INAUDIBLE]? MICHAEL SHORT: The M stands for metastable. Now, where do you see it? This one. Yep. Because the direct decay-- you don't-- you never go from molybdenum-99 to technetium-99 at the ground state. The M stands for metastable, so it's an excited state. And metastable tells you that it's got a pretty long half-life. All of these other states are excited states. Metastable means it's kind of, sort of, stable on, like, a human time scale of things. It's not technically stable, because stable would mean infinite half-life or close enough. But metastable means long enough to be detected or used, or significantly longer than the others. Any other questions before I move on? Cool. So you can use these to image where something is in the body. For example, you can use this to highlight certain organs, highlight anything that will absorb technetium. Or if you attach, let's say, the technetium to a type of sugar or something else that will be uptaken by the body, you can see where it goes. And you can use gamma ray imagers to make kind of heat maps or radiation maps of where the technetium's going to find what could be causing the problem. The problem is-- well, our main problem is there are huge moly-99 shortages. 
Right now the only economically viable way to make molybdenum-99 is in reactors. And there's only a few of these places in the world that actually make them. And I don't see any in the US. We get ours from Canada. And these are slowly getting closed down as we go. So the question is, with millions of these diagnostic procedures per year, where is the moly-99 going to come from? That might be where some of you guys come in. If you can use the knowledge from this course to figure out an energetically and economically feasible way to make more moly-99, you're rich. That's, you know, life goal achieved. Space applications-- if we ever want to get off this earth for a significant amount of time, we have to deal with the fact that there's no atmosphere in space to shield us from the high-energy protons and other cosmic rays that would otherwise, well, destroy life. So there's a lot of interesting ideas, and a lot of problems with astronaut shielding. One of them is that the protons are so heavy-- I'm sorry-- the protons are so energetic that they're difficult to shield just by mass attenuation. And the trick here is, well, different radiation has different penetrating power. It depends on its energy, but it also depends a lot on its charge. For example, alpha particles can be stopped by a sheet of paper. These are the MeV-level helium nuclei. Like, if you hold an alpha particle source in your hand, the dead skin on your hand stops the alphas from getting in. Remember that, because I'm going to be asking you a question later on to see who your friends are and who they aren't. I don't know if anyone knows what I'm talking about. But if you do, don't give it away. Beta particles or electrons have low mass and half the charge of an alpha particle. They're able to get through paper, even through a little bit of plastic. But a small bit of metal can stop them. Gamma rays go right through.
And notice that they've been drawn not quite being stopped by the concrete, which is a great shielding material. Because like I said before, you can exponentially attenuate gamma rays. You can't with all certainty stop every single one. So then how do you stop these high-energy charged particles if the more energetic they are, the more range they have? Boost your electromagnetic field. So it's actually been proposed to have spaceships with enormous magnetic fields or electromagnetic fields around them to deflect the protons away or around the ship. Because if you can't stop it by putting matter in the way, rely on the fact that they're charged particles, and will curve around whatever has got a high electric field around it. So this is one way of, let's say, shielding deep space missions. If you can't put more stuff in there because stuff is heavy, and launching stuff into space is expensive, rely on electromagnetism. And there are also RTGs, or Radioisotope Thermoelectric Generators, which are little balls of things like plutonium or strontium that give off lots of alpha particles. And the alpha particles have very low range. They deposit their kinetic energy as heat in the material, and cause them to glow on their own. If you produce enough heat, if something's glowing red, you can use thermoelectric generators to capture that heat and turn it directly into electricity. This is how things like Voyager and, let's see, all the other space probes with interesting names are powered. Once you're too far from the sun for solar power to work, you need something that doesn't turn off. So you can use RTGs, which have long enough half-lives to produce significant amounts of power for a long time, but short enough half-lives so that their activity is pretty high. And they release a lot of energy as radiation. And that radiation is heavy charged particles which you can capture as heat.
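That half-life tradeoff can be put in numbers. A minimal sketch assuming a Pu-238 heat source with its roughly 87.7-year half-life (a standard value, not quoted in the lecture): the thermal power falls off as P(t) = P0 * 2^(-t / t_half).

```python
# Assumed standard value for the Pu-238 half-life (years):
PU238_HALF_LIFE_YR = 87.7

def rtg_power_fraction(years):
    """Fraction of initial thermal power remaining after `years`."""
    return 2 ** (-years / PU238_HALF_LIFE_YR)

# A Voyager-style mission, roughly 45 years after launch:
print(f"after 45 years: {rtg_power_fraction(45):.0%} of initial power")
```

A decades-long half-life is the sweet spot: short enough that the activity (and so the heat) is substantial, long enough that most of the power is still there after a 40-plus-year mission.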
So yeah, an actual little sphere of plutonium that produces 100 watts just sitting there. There is no way to turn it off. That's the end of the sentence. It's plutonium. And finally, there are nuclear rockets. If you think about using a reactor for thrust instead of electrical energy, the design of the reactor gets very different. For example, you can start to let things get a whole lot hotter when there's no oxygen in space to oxidize things. And your propellant maybe would be liquid hydrogen that doesn't burn, but goes through the reactor and gets accelerated, turned into a gas with a high kinetic energy, to fire out the back of the rocket nozzle and provide the thrust that you need. And so it's nuclear rockets that would really be the only feasible way without bending space time, which I don't think we've really done yet, in order to get to very distant stars. Like that planet they just found orbiting Proxima Centauri-- four light years away. Pretty close, right? No. Not really. And if you think about how a nuclear rocket mission would work, well, it doesn't have to have nearly as much thrust, especially if you start from orbit. Maybe use a chemical rocket to launch yourself into orbit, and then spend half your journey accelerating very, very slowly. And then turn the rocket around, spend the other half of the journey decelerating very, very slowly. So you need a long, constant but low-level thrust for these long-lived nuclear missions. I'm going to stop here because it's five minutes before the hour. We only have a few more of these things to go through. But what I will ask is you guys hang tight for the next few minutes while these guys take the cameras apart. We're going to go to my lab and see an application of nuclear which, like I said, is plasma sputter coating. All right, everyone. So welcome to my laboratory. This is the Mesoscale Nuclear Materials group, where we make and break materials for nuclear technology, usually not in that order. But whatever.
We get it done somehow. And this is Reid Tanaka, one of my graduate students, who has actually repaired it and is going to show you the physical principles and operation of a sputter coater, which is nothing more than a controlled radiation damage machine. And he'll be making some interesting door prizes for you guys. REID TANAKA: Well, as Professor Mike Short said, this is Professor Mike Short's. He calls it something else. I call it the home-- the rehabilitative home for old, orphaned equipment and old graduates. All right. And so this is-- actually, this piece of gear here, I did a little research on it, and I think it was built about the same time as I was entering college 35 years ago. So about 1978, maybe 1980. That's how old this thing is. And now, so Professor Short goes around, as all of us, and we scrounge and we scab and we put stuff together. And you'll see that really indeed, we do that a lot. So this part, we put together out of a bunch of pieces of parts. If you look at it, and I'll talk [? to it ?] a little bit. But there's a procedure that we've got [? rigged ?] [? output. ?] But we don't know [INAUDIBLE], so we just sort of throw them together. So what you're going to see is a little demonstration of what a sputter coater is. You're going to see a little [INAUDIBLE]. AUDIENCE: Could you scoot in here, so we can [? hear your mic? ?] REID TANAKA: It's under vacuum right now, vacuum pressure. This is our vacuum [? off. ?] AUDIENCE: Let me just-- here-- come right here so you can see. REID TANAKA: [INAUDIBLE] Again, there is another pressure indication, but we don't actually trust that one [? too much. ?] All right, without further ado, I'm going to power it on. We're going to [INAUDIBLE] [? center ?] [? a ?] high voltage. It's argon in here. We've got argon supplies in that bottle over there. And [INAUDIBLE] I just turn up the high voltage power supply. Turning it up-- MICHAEL SHORT: Should we get the lights, Reid? REID TANAKA: Voltage. Yeah.
We can kill the lights. MICHAEL SHORT: I'll go get them. I'll just get them right over where you are. REID TANAKA: And if you see through this glass jar, it's going to be a little bit of a glow. Some of you might be able to see it already. MICHAEL SHORT: Oh, yeah. Come a little closer. From where I am, you start to see the glowing purple plasma. So that's the ionization of the argon causing it to electrostatically accelerate towards the gold target. And that's blasting off gold atoms that are then coating the stuff that Reid's coating that you'll see in a sec. But this is a controlled application of ionization and radiation damage using a couple of kilovolt argon ion-- don't know if you call it a beam, but at least an argon ion plasma. So there's a few other things to note. Remember how we talked about charged particles having a certain range in matter? Well, let's say, low-energy charged particles in the kV range do not have a very high range even in gases, which is why Reid has this vacuum pump connected. Otherwise, the argon wouldn't make it to where it has to do the damage. So when there's too much gas in there-- shuts off. When there's not enough argon in there, there's no argon to do the damage. So we're actually exciting ions to about two kV. And their range is higher than the distance they have to travel, so they actually make it where they're supposed to go. And this is a kind of direct application of NSE, along with a fair bit of high voltage electronics. And that's pretty much all there is to it.
REID TANAKA: You guys all took chemistry at some point, right? Well, they tell you that one of the great mysteries of the [? world ?] [? as ?] [? we ?] [? know it's ?] [? been solved-- ?] [INAUDIBLE] take something and turn it into gold. Well, you should know that only the nukes can do that. Right? Really, you got to get away from all the electrons and all that other chemistry stuff, and only the nukes. So if you really want to turn something to gold, you got to join the nuclear department. Just so you know that. Keep that in mind. MICHAEL SHORT: Has everyone had a chance to get a close-up look? REID TANAKA: And I think we got about another minute or so to let it run. MICHAEL SHORT: OK. Anyone have any questions about what you're seeing here? AUDIENCE: Is it getting, like, super hot in there, or-- MICHAEL SHORT: That's a good question. The temperature does not go up that much. There's certainly kinetic energy turned into thermal energy as the argon hits the gold, and the gold hits whatever you're trying to coat. But the total amount of energy, the density of that gas is extremely low. That's another reason why in fusion reactors, the plasma is up to like millions or tens of millions of Kelvin. There's just not a lot of it. So if you look at the total amount of stored thermal energy in a fusion reactor, it's quite low, even though the temperature or the relation to the average kinetic energy of the molecules is extremely high. So yeah. Good question. If you want, put your hand up inside of the chamber. Is it warm? AUDIENCE: [INAUDIBLE] AUDIENCE: It's not warm at all. MICHAEL SHORT: Not at all. Yep. AUDIENCE: The plasma is the argon? And where's the gold coming from? From in that [? ring ?] MICHAEL SHORT: There we go. REID TANAKA: [INAUDIBLE] we'll show you. MICHAEL SHORT: Yeah. We'll open it up and show you. So I'll get the lights on now. REID TANAKA: OK. [INAUDIBLE] AUDIENCE: Why isn't the pressure [INAUDIBLE]?? REID TANAKA: What's that? 
AUDIENCE: The pressure changed. [INAUDIBLE] REID TANAKA: Yeah. From the point that we-- when you walked in and saw that? AUDIENCE: Yeah. REID TANAKA: OK. The first thing-- so this had a sort of a static amount of argon in it. And when I turned on the voltage, the high voltage, that's to create the plasma. But then it has to get fed. And so what I ended up doing with this little knob here, I probably should explain that, is I was feeding it argon. That argon bottle from over here it's going into this chamber. When you're feeding the argon in, then the pressure came up. And if the pressure comes up too high on this particular instrument, then it has an automatic cut-off [? when ?] the high voltage [? cuts out. ?] Because otherwise, you know, one of the reasons why it works is because we have so few atoms in there. [INAUDIBLE] AUDIENCE: [? Good. ?] REID TANAKA: [INAUDIBLE] what atmospheric pressure is in [? total? ?] AUDIENCE: 760. REID TANAKA: 760. [INAUDIBLE] Maybe we [? actually ?] have a little bit of a [INAUDIBLE].. So this is that-- that's the gold ring. AUDIENCE: [? Neat. ?] REID TANAKA: And in the chamber-- MICHAEL SHORT: You can put the-- you can put the ring facing down for stability if you want. REID TANAKA: What's that? MICHAEL SHORT: I said, you can just put the ring lying flat down if you want for stability. Yeah. REID TANAKA: And in the chamber-- the purpose of having this, actually, this machine, the main reason that we use this for is if you have something that you want to put into a scanning electron microscope, and-- MICHAEL SHORT: We're actually going to use one of those in class, so-- yeah. REID TANAKA: You need to have some kind of conductive coating on it. So if you're looking at-- [? especially ?] like [? biologic ?] [? stuff, ?] you actually coat it with something that's conductive. So there you see, it has a gold coat of about, I would guess, I think for as long as we did it for, something on the level of 200 [INAUDIBLE].. 
It's a pretty thin coat. MICHAEL SHORT: That's all you need, though. REID TANAKA: Right. MICHAEL SHORT: Remember, after quiz number one, we will be piloting-- well, two of you guys we'll be piloting a scanning electron microscope down in the basement. And before we look at whatever samples you want to see, whether it's one of your eyelashes, dust on the floor, or a bug you found or something, we'll want to coat in gold, so that the electrons that we use for imaging will have a place to go. We'll have a conductive path, and they won't charge up, ruining the image. REID TANAKA: All right. I have a question. MICHAEL SHORT: Yeah? REID TANAKA: Is anybody-- was anybody here born this Millennium? Anyone? 2000 or later? Nobody. Anyone in 1999? Nobody in 1999? [INAUDIBLE] How about '98? You were born in '98? MICHAEL SHORT: There we go. REID TANAKA: All right. That would be-- [INAUDIBLE] brought my glasses. I've got that 1998 dime here. It's now gold-coated. And you can have it. [INAUDIBLE] [? too ?] [? bad. If ?] [? somebody ?] [? wanted ?] [INAUDIBLE] you got a quarter. You [? could lie ?] [? and ?] [? say-- ?] But I guess you are the youngest [INAUDIBLE]. Yeah. All right. So you win this. Now, it'll rub off. AUDIENCE: OK. REID TANAKA: So [INAUDIBLE] you-- AUDIENCE: [? I'll ?] [? probably ?] keep it in a plastic bag. REID TANAKA: And-- MICHAEL SHORT: [? Gold ?] [? change. ?] REID TANAKA: I guess the other ones go to people that [INAUDIBLE]. Oh, no '97. How about a '96? You're a '96? [INAUDIBLE] AUDIENCE: [? There's ?] [? three. ?] REID TANAKA: What's that? AUDIENCE: There's three '96s, REID TANAKA: Oh, there's three '96s? OK. You get the-- you get the [INAUDIBLE].. AUDIENCE: [INAUDIBLE] over a year old. REID TANAKA: All right. Who wants the nickel that's a '96? AUDIENCE: [INAUDIBLE]. REID TANAKA: You going to arm wrestle for it? MICHAEL SHORT: There you go. Right in front of you, Reid. REID TANAKA: Well, there's three of them. OK. MICHAEL SHORT: OK. 
REID TANAKA: [INAUDIBLE]. AUDIENCE: Thank you. REID TANAKA: And who are the other two '96s? You're going to have to arm wrestle. One gets a quarter, but one gets the dime. MICHAEL SHORT: [INAUDIBLE]. Yep. REID TANAKA: That's the only fair way of doing [? it, I think. ?] MICHAEL SHORT: That's not a nuclear thing, but if it's fun. REID TANAKA: [INAUDIBLE]. Here you go [INAUDIBLE]. AUDIENCE: [INAUDIBLE]. REID TANAKA: And there is the quarter [INAUDIBLE].. AUDIENCE: Thanks. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: This one? AUDIENCE: [INAUDIBLE]. AUDIENCE: Yes. [? Because ?] [? it ?] [? was ?] [INAUDIBLE].. MICHAEL SHORT: [INAUDIBLE] actually [INAUDIBLE].. It's just-- AUDIENCE: It's [? memorable ?] [INAUDIBLE].. REID TANAKA: OK. Yeah? AUDIENCE: So is that gold deposited anywhere else in that chamber? REID TANAKA: Yeah. If you look-- actually, if you look in the chamber, I mean, this is all- this is all [? from it ?] [? being ?] [? sputtered. ?] And if you look around-- I can turn this a little. Look at the glass. It gets on the glass, too. So it actually gets-- it gets everywhere. But it's mostly directed to that area that you saw [INAUDIBLE].. I have another offer. Are you guys all nukes? You guys are all going to be in the department? MICHAEL SHORT: All but one. REID TANAKA: Ah. AUDIENCE: [? Not me. ?] MICHAEL SHORT: But we have a nuke enthusiast. So-- otherwise, wouldn't be in this class. And anyone scared of nuclear is probably not in this class. REID TANAKA: So obviously, it was pretty easy for us to do. We have this machine here. If you're going to be part of our department, if you want to just come in, we can make you a-- we can make you a quarter, OK? I mean, I could even supply them. I feel rich [? enough ?] I can [INAUDIBLE],, because all this grad student-- graduate school money I'm getting. MICHAEL SHORT: Nice. REID TANAKA: But-- anything else? MICHAEL SHORT: Any questions for Reid on what you just saw? 
The goal is to sort of give you a real life, you know, learn 22.01, you'll understand how these things work, and how you can modify them, create new stuff. That's the general idea. Same thing behind looking at the electron microscope for the focused ion beam, EDX elemental analysis. I want to bring what we're teaching you to life as often as we can. Since we only got one recitation a week, we'll be doing it about that much. Once in a while I may schedule some extra stuff as long as folks are available. But we're going to try all we can to have days like this, where you get to see what you're learning in real life. AUDIENCE: It was called a sputter? REID TANAKA: Sputter coater. AUDIENCE: Sputter coater. REID TANAKA: Sputter coater. MICHAEL SHORT: It's because the process of the argon hitting the gold is actually known as sputtering, which is the blasting off of surface atoms by energetic particles. It's a controlled form of radiation damage. AUDIENCE: What's that Swagelok? MICHAEL SHORT: Swagelok. AUDIENCE: Swagelok. REID TANAKA: Yeah. AUDIENCE: So what's a Swagelok? REID TANAKA: Well-- MICHAEL SHORT: Do we have any pieces here? Let's see. REID TANAKA: [? We ?] [? have ?] [? lots. ?] [INAUDIBLE] if you go back [? around ?] [INAUDIBLE] tubing that you see back in there. They're connected so they don't [INAUDIBLE].. [INAUDIBLE] [? piping ?] [INAUDIBLE].. It's proprietary-- MICHAEL SHORT: They're all in use. REID TANAKA: --made by the Swagelok companies to [INAUDIBLE]. MICHAEL SHORT: Oh, here we go. REID TANAKA: So I don't know if Mike's going to take you around. AUDIENCE: [? Absolutely ?] [INAUDIBLE].. MICHAEL SHORT: This is Swagelok tubing. It's got a two piece ferrule, which [INAUDIBLE] a metal-to-metal seal for moving liquid or gas. And it takes an insanely high pressure. So actually, over in the next room, we can look on our way out, we've built a reactor simulator, like an experimental reactor that replicates all the conditions except for the radiation. 
We had to make it entirely out of Swagelok tubing, because this stuff can hold the pressure and the temperature without deforming too much. So when you want to make it absolutely airtight seal, use things-- Swagelok or things like it. AUDIENCE: Is it stainless steel? MICHAEL SHORT: This is stainless steel. Yep. But they make it out of titanium or other things, too. But stainless steel works for us. AUDIENCE: That's cool. AUDIENCE: In a PWR, how much pressure is there? MICHAEL SHORT: In a PWR, there's 150 atmospheres of pressure. It's also 150 atmospheres of pressure over in the room next door. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Like I said, it's all the same conditions as a reactor except the radiation. The pressure is what makes it really dangerous. AUDIENCE: [INAUDIBLE] MICHAEL SHORT: Yeah. We'll look through one of the few bulletproof glass shields. Because if anything blows on that loop, it's like a proj-- it is a projectile. We've only had one explosion. There was no temperature at the time, but it sounded like a shotgun blast over the side of your shoulder. It was loud. AUDIENCE: You in the room? MICHAEL SHORT: The loop was right here, and we were right in front of it. Then the loop jumped up maybe an inch. And we jumped up about three feet. We got scared. That was what happens when you improperly torque a high-pressure fitting. Because you've actually got to tighten these nuts down to not too low and not too high of a torque, otherwise, they don't seal. And usually, you only find out that they don't seal when they're approaching close to their rated thing. And you're like, great. It's at half pressure. It's OK. You reach 99% pressure and kaboom. That's what happened. Cool. Thanks a lot, Reid, for showing this to us and taking time out of your day. REID TANAKA: No problem. Anytime. MICHAEL SHORT: I hope you guys enjoyed it. So no problems to work through this week. That's going to change starting Tuesday, next class. So have a good weekend. 
And I'll see you guys all on Tuesday in Room 24-115, next to the room where we were just in.
MIT 8.334 Statistical Mechanics II (Spring 2014), Lecture 7: The Scaling Hypothesis, Part 2

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, let's start. So we've been trying to understand critical points. And this refers to the experimental observation that in a number of systems we can be changing some parameters, such as temperature, and you encounter a transition to some other type of behavior at some point. So the temperature, let's say, in this behavior is the control parameter. This would be, for example, the normal to superfluid transition. You have one knob [INAUDIBLE] change temperature and go through this point. For other systems, such as magnets, you actually have two knobs. There is also the magnetic field. And there, you have to turn two knobs in order to end at this critical point; also in the case of the liquid gas system, in the pressure temperature plane, you have to tune two things to get to this point. And the interesting thing was that in the vicinity of this point, the singular parts of various thermodynamic quantities are, interestingly, independent of the type of material. So if we, for example, establish coordinates t and h describing deviations from this critical point, we have, let's say, the singular part of the free energy as a function of t and h has a form like t to the 2 minus alpha times some scaling function of h over t to the delta, and these exponents, alpha and delta, are things that are universal. For example, we could get from that, by taking two derivatives with respect to h, the singularity and the divergence of the susceptibility.
And we said that the diverging susceptibility also immediately tells you that there is a correlation length that diverges, and in particular, we indicated its divergence with an exponent nu. We could for that also establish a scaling form for how the correlation length diverges on approaching this point generally in the ht plane. So this was the general picture. And building on that, we made one observation last time, which is that at any point where you are away from t and h equals to 0, you have a correlation length. And then we concluded that if you are at t and h equals to 0, you have a form of scale invariance. And basically what that means is that when you are at that point, you look at your system, it's a fluctuating system, and the fluctuations are such that you can't associate a scale with them. The scale has already gone into the correlation length that is infinite. And we said that therefore, if I were to look at some kind of a correlation function, such as of the magnetization in the case of a magnet, the only way that it can depend on separation is as a power of the distance. And this clearly has the property that if we were to rescale x and y by a certain amount, this correlation function merely gets multiplied by a factor that depends on this rescaling. And this is after we do the averaging, so it's a kind of statistical self-similarity, as opposed to something such as the Sierpinski gasket, which is identically and deterministically self-similar in that each piece, if you blow it up, looks like the entire thing. So what we have in our system is that if we have, let's say, a box which could be containing our liquid gas system at its critical point, or maybe a magnet at its critical point, we will have a statistical field, this m of x. And it will fluctuate across the system. So maybe this would be a picture of the density fluctuations.
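The statement that a power law is the natural scale-covariant form can be checked directly: if G(x) = x to the minus p, then G(bx)/G(x) = b to the minus p for every x, so rescaling merely multiplies the correlation function by an x-independent factor. A minimal numerical sketch (the exponent p = 0.5 here is an arbitrary illustrative choice, not a value from the lecture):

```python
# Scale covariance of a power-law correlation function:
# G(b*x) = b**(-p) * G(x), with a ratio independent of x.

def G(x, p=0.5):
    """Power-law correlation function G(x) = x**(-p)."""
    return x ** (-p)

b = 2.0  # rescaling factor

# The ratio G(b*x)/G(x) is the same for every separation x ...
ratios = [G(b * x) / G(x) for x in (1.0, 3.0, 10.0, 250.0)]
print(ratios)  # all equal to b**(-p), about 0.7071

# ... which is not true for, say, an exponential correlation
# G_exp(x) = exp(-x/xi), whose ratio exp(-(b-1)*x/xi) depends on x.
```

This is exactly the contrast with a correlation function that carries a scale, like exp(-x/xi): there the ratio under rescaling depends on x, so the function is not self-similar.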
What I can do is to take a scan along some particular axis-- let's call it x-- and plot what the fluctuations are of this magnetization. Let's say m of x. Now the average will be 0, but it will have fluctuations around the average. And so maybe it will look something like this-- kind of like a picture of a mountain, for example. Now one thing that we should remember is that this object would be a piece of iron or nickel, and clearly I don't really mean that this is what is going on at the scale of a single atom or molecule of my substance. I had to do some kind of averaging in order to get the statistical field that I'm presenting here. So let's keep in mind that there is, in fact, some implicit analog of lattice size or some implicit shortest distance, shortest wavelength that I allow for my fluctuations. Now I can sort of make this idea of scale invariance of a set of pictures, such as this one, more precise, as follows, by going through a procedure that I will call renormalization that has the following three steps. So the first step, what I will do is to coarse-grain further. And by this, I mean averaging m of x over a scale ba. So previously, I had done my averaging over whatever spins, et cetera, were giving a contribution to the overall magnetization over some number-- let's say, 100 by 100 by 100 spins-- and a was my averaging distance. Why should I choose 100? Why not choose 200, some multiple of what I had originally? So coarse-graining means increasing this minimum length scale from a to ba. And then I define a coarse-grained version of my field. So previously, I had m of x. Now I have m tilde of x, which is obtained by averaging, let's say, over a volume around the point x that I had before. And this volume is a box of volume ba to the d. And then I basically average over that. I guess let's call the original distance a equals 1, so I don't really have to bother about the dimensionality of y, et cetera. OK?
So if I were to apply that to the picture that I have up there, what do I get? I will get an m tilde as a function of x. And essentially, let's say if I were to choose a factor of b that was like 2, I would take the average of the fluctuations that they have over two of those intervals. And so the picture that I would get would be kind of a smoothened out version of what I had before over there. I will still have some fluctuations, but kind of ironed out. And basically, essentially, it means that if you were to imagine having taken a photograph, previously you had a pixel size that was 1. Now your pixel size is larger. It is a factor of b. So it's this kind of smoothing and averaging of the fluctuations that has gone on. And so you have here now b. Now if I were to give you a photograph like that and a photograph like this, you would say that they are not identical. One of them is clearly much grainier than the other. So I say, OK, I can restore some amount of similarity between them by doing a rescaling. So I call a new variable x prime to be my old variable x divided by a factor of b. So when I do that to this picture, I will get m tilde as a function of x prime. x prime covers a range that is smaller by a factor of b, because all I do is I take this and squeeze it by a factor of b. So I will get a picture that maybe looks something like this. Now if I were to look at this picture and this picture, you would also see a difference. That is, there is the contrast. So here, there would be, let's say, black and white. And as you scan the picture, you sort of see some variation of black and white. If you look at this, you say the contrast is just too big. You have big fluctuations as you go across compared to what I had over there. So there's another step, which is called renormalize, which is that you define m prime to be m tilde divided by a change-of-contrast factor zeta.
So you take a knob that corresponds to contrast and you reduce it until you see pictures that kind of statistically look like what you started with. So in order to sort of generate pictures that are self-similar, you have this one knob. Basically, scale invariance means the change of size. But there is associated with change of size a change of contrast for whatever variable you are looking at. It turns out that that change of contrast would eventually map to one of these exponents that we have over there. Yes. STUDENT: Are you using m or n tilde? PROFESSOR: m tilde, thank you. So I guess the green is m tilde of x prime, and the pink is m prime of x prime. So what I have done mathematically is as follows. I have defined an m prime of x prime, which is 1 over zeta-- this contrast factor-- times b to the minus d-- because of the averaging over a volume that involves b to the d pixels-- times the sum of the original field over locations y in a block centered at bx prime: m prime of x prime equals 1 over zeta times b to the minus d times the sum over y of m of bx prime plus y. So in principle, I can go and generate lots and lots of configurations of my magnetization, or lots and lots of pictures of a system at the liquid gas critical point, or magnetic systems at their critical point. I can generate lots and lots of these pictures and construct this transformation. And associated with this transformation is a change of probability, because there was some probability-- let's call it P old-- that was describing my original configurations m of x. Let's forget the vector notation for the time being. Then there will be, after this transformation, a probability that describes these configurations m prime of x prime. Now you know that averaging is not something that you can reverse. So in this transformation, going from here to here, I cannot go back. There are many configurations over here that would correspond to the same average, like up, down or down, up would give you the same average, right? So a number of possibilities here have to be summed up to generate for you this object.
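The three steps just described-- coarse-grain, rescale, renormalize-- can be written out concretely for a one-dimensional field sampled on a lattice. This is only an illustrative sketch; the block size b = 2, the Gaussian toy field, and the contrast factor zeta are free choices here, not values fixed by the lecture.

```python
import random

def renormalize(m, b=2, zeta=1.0):
    """One Kadanoff RG step on a 1D field m sampled at unit spacing.

    1. Coarse-grain: average m over non-overlapping blocks of size b.
    2. Rescale: the block index becomes the new coordinate x' = x/b,
       so the output again lives on a unit-spacing lattice.
    3. Renormalize: divide by the change-of-contrast factor zeta.
    """
    n = (len(m) // b) * b                    # drop any leftover sites
    return [sum(m[i:i + b]) / (b * zeta)     # block average, then /zeta
            for i in range(0, n, b)]

random.seed(0)
m = [random.gauss(0.0, 1.0) for _ in range(1024)]  # toy fluctuating field
m_prime = renormalize(m, b=2)
print(len(m), "->", len(m_prime))                  # 1024 -> 512
```

For uncorrelated noise the fluctuations shrink under this map, which is the sense in which a generic configuration looks different after the transformation; only at a critical point can one hope to choose zeta so that the statistics of m_prime match those of m.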
Now the statement of self-similarity presumably is that this weight is the same as this weight. You can't tell apart whether you generated configurations before or after that scaling. So this is the same at the critical point. I've not constructed either weight, so it really doesn't amount to much. But Kadanoff introduced this concept of doing this and thinking of it as a kind of group operation called the renormalization group, which I will describe a little bit better and evolve the description of as we go along. So if I look at my original system, I said that self-similarity occurs, let's say, exactly at this point that corresponds to t and h equals to 0. Now presumably, I can, in some sense, formalize these things, if I were to take its log, for example. I can construct some kind of a weight that is associated with m, and this would be a new weight that is associated with m prime. Presumably, right at the critical point, these two would be the same weight, and it would be the same Hamiltonian. What happens, if I do this procedure, to a system that is initially away from the critical point? So my initial system is characterized by deviations t and h from this scale-invariant weight, which means that over here I have a correlation length. Now I go through all of these transformations. I can do those transformations also for a point that is not at the critical point. But at the end of the day, I certainly will not get back my original weight, because I look at the picture after this transformation. Before the transformation, I had a long correlation length, let's say a mile. When I do this transformation, that correlation length is reduced by a factor of b. So the new system has deviated more from the critical point. Because the closer you are to the critical point, the larger the correlation length.
And if you do the renormalization procedure on a Hamiltonian that deviates, you will get a Hamiltonian that deviates more, still describable by parameters t and h that have changed. So again, this says that xi of t and h was, in fact, b times xi of t prime and h prime, and t prime and h prime are further away. Now the next thing that Kadanoff said was, OK, therefore there is a transformation that tells me, after I do a rescaling by a factor of b, how the new t and the new h depend on the old t and the old h. So there is a mapping in this space. So a point that was here will go over there. Maybe a point that is here will map over there. A point that is here will map over here. So there is a mapping that tells you how t and h get transformed under this procedure. Actually, about the reason this is called a renormalization group: for groups, we are really thinking usually in terms of operations that are invertible. This transformation is not invertible. But this is a mapping. So potentially this mapping is invertible. You can say that if this point came from this point, then under inversion it will go back to the original point, and so forth. The next part of the argument is, what did we do over here? We got rid of some short wavelength fluctuations. Now one of the things that I said right at the beginning was that as long as you are getting rid of short scale fluctuations, you are summing over a cube that is 100 cubed or 200 cubed. It doesn't matter-- 100 cubed, 200 cubed-- you are doing some analytical function. So the transformation that relates these to these, the old to the new, should be analytical, and hence you should be able to write a Taylor series for it. So let's try to make a Taylor series for this. Taylor series start with a constant. But we know that the constant has to be 0 in both cases because the starting point was the point that was scale invariant and was mapping onto itself. So the first things that I can write down are linear terms. So there could be a term that is proportional to t.
There could be a term that is proportional to h. There could be a term here that is proportional to h. There could be a term that is proportional-- well, let's call this t. Let's call this h. And then there will be terms that will be of order t squared and higher. So I just did an analytical expansion, justified by this summing over just finite degrees of freedom at short scale. Now if I have a structure, such as the one that I have over there, I also know some things on the basis of symmetry. Like if I'm on the line that corresponds to h equals to 0, there is no difference between up and down. Under rescaling, there is still no difference between up and down. So I should not generate an h if h was originally 0 just because t deviated from 0. So by symmetry, that has to be absent. And similarly, by symmetry, there is no difference between h positive and h negative. As far as t is concerned, h and minus h should behave the same. So this series should start at order of h squared and not h, so that term should be absent. So at this level, we have a nice separation into t prime is a times t, and h prime is d times h. Now we know something more, which is that the procedure that we are doing has some kind of a group character, in that if I, let's say, originally transform by some factor b1-- change by a factor of 2, then change by a factor of 3-- the answer is equivalent to changing by a factor of 2 times 3, or 3 times 2. Doesn't matter in which order I do them. So also, I would get, if I were to do b1 first and b2 later, the same thing. So what does that imply? That if I do two of these transformations, I find that my new t is obtained in one case from a of b1 times b2, and in the other case from the product of the two a's. So that's, again, some kind of a group character. And furthermore, if I don't change the length scale, everything should stay where it is.
So you glance at those, and you find that there is only one possibility: that a as a function of b should be b to some power. So you know therefore that at the lowest order, under rescaling by a factor of b, t prime should be b to some power y-- I called it yt-- times t plus higher orders, while h prime is b to some other power yh times h plus higher orders. And you say, OK, fine. What's this good for? Well, let's take a look at what we did over there. We said that I take some bunch of initial configurations and sum their weights to get the weight of the new configuration. What happens if I sum over all initial configurations? Well, if I sum over all initial configurations, I will get the partition function. Now essentially, all the original configurations I regrouped and put into these coarse-grained configurations that are weighted this way. So there could be an overall constant that emerges from this. But this really implies that the singular part of log Z-- and presumably this depends on how far away I am from the critical point-- is the same as the singular part of log Z after I go to this t prime and h prime. Now there is one other issue, which is extensivity. Up to signs, factors of beta, et cetera, this is V times an intensive free energy, which is a function of t and h. So this is the same as V prime-- because the volume shrank; I took all of my scales and shrunk them by a factor of b-- times f of t prime and h prime. So now let's go this way. Note that V prime is the original V divided by b to the d scaling factor. So you do the divisions here, and you find that f as a function of t and h is the ratio of V prime to V, which is b to the minus d, times f as a function of t prime and h prime. But t prime, we said, to lowest order is b to the yt times t. h prime is b to the yh times h. This is actually the more correct form of writing a homogeneous function. So previously, in the last lecture, we assumed that the free energy had a homogeneous form.
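Stepping back to the composition rule for a moment: the requirements a(b1) times a(b2) = a(b1 times b2) and a(1) = 1 are what single out a pure power law a(b) = b to the y. A quick numerical sanity check (the exponent y = 1.5 is an arbitrary illustrative value):

```python
# The group property a(b1)*a(b2) == a(b1*b2), with a(1) == 1,
# is satisfied by a pure power law a(b) = b**y.

y = 1.5  # an arbitrary illustrative exponent

def a(b):
    return b ** y

assert abs(a(1.0) - 1.0) < 1e-12
for b1, b2 in [(2.0, 3.0), (3.0, 2.0), (1.7, 4.2)]:
    assert abs(a(b1) * a(b2) - a(b1 * b2)) < 1e-9

# By contrast, g(b) = 1 + y*(b - 1) also has g(1) = 1,
# but it fails the composition rule:
g = lambda b: 1 + y * (b - 1)
print(g(2.0) * g(3.0), g(6.0))  # 10.0 vs 8.5: not equal
```

The counterexample shows that fixing the value at b = 1 alone is not enough; it is the composition requirement, coming from doing the rescaling in two stages, that forces the power-law form.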
Now subject to these conditions and assumptions of the renormalization group, we have concluded that it should have that homogeneous form. Now you say this homogeneous form does not look like the homogeneous forms that I had written for you before. I say, OK. Presumably this is true for any factor of b that I want to choose. Let me choose a b, a rescaling factor, such that b to the yt times t is of the order of 1. Could be 1, could be pi. I don't care. Which means that I chose a factor of b that will scale with t as t to the minus 1 over yt. I put in this b-- this expression is true for all choices of b. If I choose that particular value, what I get is t to the d over yt times some function. The first argument has now become 1 or some constant. Really it only depends on the second argument in the combination h over t to the power of yh over yt. So you can see that this is, in fact, the same as the first line that I have above. And I have identified that 2 minus alpha would be d over yt, where yt is how the parameter t scales under renormalization. And the gap exponent is the ratio of yh over yt. Similarly, we had that the correlation length-- I have a line there. Xi of t and h is b times xi of t prime and h prime. So I have that xi as a function of t and h is b times xi as a function of b to the yt t, b to the yh h. So that's also correct. I can again choose this value of b, substitute it over there. What do I get? I get that xi as a function of t and h would be t to the minus 1 over yt times some scaling function-- let's call it g sub xi-- of, again, h over t to the power of yh over yt. So I have got the answer that nu should be 1 over yt. I can get the scaling form for the correlation length. I identify the divergence of the correlation length with the inverse of this. And by the way, if I substitute nu as 1 over yt here, I get the Josephson hyperscaling relation, 2 minus alpha equals d nu. I can go further if I want.
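The trick of choosing b so that b to the yt times t is of order 1 works because f is homogeneous, and homogeneity can be verified directly. A quick numerical check, with arbitrary illustrative values yt = 1, yh = 2, d = 3 and an arbitrary smooth scaling function g (none of these specific numbers come from the lecture):

```python
# Illustrative values, not from the lecture: d = 3, yt = 1, yh = 2.
d, yt, yh = 3, 1.0, 2.0

def g(u):
    """An arbitrary smooth scaling function of the single
    combination u = h / t**(yh/yt)."""
    return 1.0 / (1.0 + u * u)

def f(t, h):
    """Singular free energy in the homogeneous form:
    f(t, h) = t**(d/yt) * g(h / t**(yh/yt))."""
    return t ** (d / yt) * g(h / t ** (yh / yt))

# Homogeneity: f(t, h) == b**(-d) * f(b**yt * t, b**yh * h) for any b.
t, h = 0.3, 0.05
for b in (1.5, 2.0, 7.0):
    lhs = f(t, h)
    rhs = b ** (-d) * f(b ** yt * t, b ** yh * h)
    assert abs(lhs - rhs) < 1e-12 * abs(lhs)

# In particular, b = t**(-1/yt) reduces the first argument to 1:
b = t ** (-1.0 / yt)
assert abs(f(t, h) - b ** (-d) * f(1.0, b ** yh * h)) < 1e-12
print("homogeneity checks pass")
```

Any function of the single combination h over t to the yh/yt, multiplied by t to the d/yt, passes this test; that is exactly why the two-argument free energy collapses onto a one-argument scaling function.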
I can calculate the magnetization as a function of t and h, which would correspond to basically the behaviors that we identify with exponents beta or delta, as d log Z by dh-- or, let's say, df by dh. If I take a derivative over there, you can immediately see that what that gives me is b to the power of yh minus d, times some scaling function which is the derivative of this scaling function, evaluated at b to the yt t, b to the yh h. And again, if I make this choice, then this goes over to t to the power of d minus yh over yt, times some scaling function of h over t to the delta. So I can continue with my table. And for example, I will have beta to be d minus yh divided by yt. I can go and calculate delta, et cetera. Actually, I was a little bit careless with this factor zeta, which presumably is implicit in all of these transformations that I have. And I have to do special things to figure out what zeta is so that I will get self-similarity right at the critical point. But we can see that already we have the analog of a rescaling for m. And so it is easy to sort of look at those two equations and identify that my zeta should be precisely this one. So the zeta is not independent of the relevance of the magnetic field. And if you think about it, the field and the magnetization are conjugate variables, in the sense that in the weight here, I will have a term that is like hm-- integrated, of course. And so with hm integrated, you can see that, up to a factor of b to the d from the integration, the dimensionality that I assign to h and the dimensionality that I assign to m should be related. And not only for the magnetization, but for any pair of variables that are so conjugate-- there's some f, and there's some x-- there will be a corresponding relation between what would happen to this x at the critical point and this factor f when I deviate from the critical point.
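The identifications collected so far-- 2 minus alpha = d/yt, the gap exponent Delta = yh/yt, nu = 1/yt, and beta = (d minus yh)/yt-- can be bundled into a small calculation, together with the standard combinations gamma = Delta minus beta and delta = Delta/beta (those last two are textbook relations, not derived in this lecture). As a check, plugging in the exactly known 2D Ising eigenvalues yt = 1 and yh = 15/8 reproduces the standard 2D Ising exponents:

```python
from fractions import Fraction

def exponents(yt, yh, d):
    """Critical exponents from the two RG eigenvalues yt, yh in
    d dimensions, using 2 - alpha = d/yt, Delta = yh/yt,
    nu = 1/yt, beta = (d - yh)/yt, plus the standard
    combinations gamma = Delta - beta and delta = Delta/beta."""
    alpha = 2 - d / yt
    Delta = yh / yt                  # gap exponent
    nu = 1 / yt
    beta = (d - yh) / yt
    gamma = Delta - beta             # = (2*yh - d)/yt
    delta = Delta / beta             # = yh/(d - yh)
    return dict(alpha=alpha, beta=beta, gamma=gamma, delta=delta,
                nu=nu, Delta=Delta)

# Exactly known 2D Ising eigenvalues: yt = 1, yh = 15/8.
e = exponents(Fraction(1), Fraction(15, 8), 2)
print(e)
# alpha = 0, beta = 1/8, gamma = 7/4, delta = 15, nu = 1
```

The point of the exercise is the lecture's punchline: once the two numbers yt and yh are known, every exponent in the table follows.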
I essentially traded one set of assumptions, about homogeneity and scaling of the free energy and correlation length, for some other set of assumptions, about two parameters moving away from a scale-invariant critical point. I didn't calculate anything about what the scale-invariant probability is. I didn't show that, indeed, two parameters are sufficient, that this kind of scaling takes place, et cetera. So we need to be much more precise if we want to do, ultimately, calculations that give us what these numbers yt and yh are. So let's try to put this hand waving on a somewhat firmer footing. So let's see how we should proceed. We start with some experimental system at a critical point. So I tell you that somebody did an experiment-- on the liquid gas system, say-- and they saw a diverging correlation length, critical opalescence, et cetera. So then I associate with that some kind of a statistical field. And let's kind of stick with the notation that we have for the magnet. Let's call it m of x. And in general, this would be the part where one needs to put in a lot of thinking. That is, the experimentalist comes and tells you, I see a system that undergoes a phase transition. There are some response functions that are divergent, et cetera. You have to put in some thinking to figure out what the appropriate order parameter is. And based on that order parameter or statistical field, you construct the most general weight consistent not only with symmetries but with the kinds of assumptions that we have been putting into play. So we put in assumptions about locality, symmetry. Stability is, of course, paramount. But there is a list of things that you have to think about. So once you do that, you say, OK, I associate with my configurations m of x some set of probabilities. Probabilities are certainly positive. So I can take the log, and call minus the log some kind of a weight, beta H, that governs these m of x's.
If I say that I'm obeying locality, then I would write the answer, for example, like this. But it doesn't have to be. I have to write some particular example. But you may construct your example depending on the system of interest. And let's say we are looking at something like a superfluid, maybe, where we don't even have the analog of a magnetic field, and we go and construct terms that are symmetric and built from a two-component m. And I will write a few of these terms to emphasize that this is, in principle, a long list. There's a coefficient of m to the sixth. We saw that the gradient terms could start with this K. But maybe there's a higher order gradient, and there's essentially an infinity of terms that you can write down that are consistent with these assumptions that you have made so far. So you say, OK. Now I take this, and implicit in all of these calculations is, indeed, some kind of a short scale cutoff. To this statistical field, I apply the three steps of RG-- renormalization group, as I described before. And this will give me a new configuration for each of the old configurations through the formula that I gave you over there. So in principle, this is just a transformation from one set of variables to a new set of variables. So if I do this transformation, I can calculate the weight of the new configurations, m prime of x prime. I can take minus the log of that. And again, up to some constant, it will serve as the weight for the new probability. So there could be, in this procedure, some set of constants that are generated that don't depend on m. And then there will be a function that depends on m prime of x prime. Now the statement is that since I wrote the most general function over here, whatever I put here will have to have exactly the same form, because I said put anything that you can think of that is consistent with symmetries over here. So you put everything there.
What I put here should have exactly the same functional form, but with coefficients that have changed. So you basically prime everything, but you have this whole thing. Now this may seem like a truly difficult thing. But we will actually do this. We will carry out this transformation explicitly in particular cases. And we will show that this transformation amounts to constructing a rescaling of each one of these parameters-- t prime, u prime, v prime, k prime, l prime, and so forth-- as functions of the old parameters. So this is, if you like, a mapping. You take some set of parameters-- t, u, v, k, l, blah, blah, blah-- and you construct a mapping, S prime, which is some function of the original set of parameters. So this is a huge dimensional space. Any point that you start on, the transformation will take to another point. But the key is that we wrote the most general form that we could, so we have to stay within this space. So why are we doing this? Well, I started by saying that the key to this whole thing is to have a handle on what this self-similar, scale invariant probability is. I can't construct that just by guessing. But I can do what we usually do, let's say, in constructing wave functions in quantum mechanics that have some particular symmetry. Maybe you start with some wave function that doesn't have the full symmetry, and then you rotate it and rotate it again, and you average over all of them, and you end up with some function that has the right symmetry. So we start with a weight that I don't know whether it has the property that I want. And I apply the action of the group, which is this scale transformation, to see what happens to it under that transformation. But the point that I am interested in, or the behavior that I am interested in, is where I basically get the same probability back. So I'm very interested in the point where, under the transformation, I go back to myself. And that's called a fixed point.
So S is a shorthand for this infinite vector of parameters. I want to find the point S star in this parameter space. Actually, let me call this transformation R and indicate that I'm renormalizing by a scale b, such that, when I renormalize by a scale b, my original set of parameters, if I am at this fixed point, will end up at the same point. So clearly, this is a system that has exactly these properties that I was harping on at the beginning. This is the point that is truly scale invariant. That's the point that I want to get at. So again, once we have done this transformation in a specific case, we'll figure out what this fixed point is. But for the time being, let's think a little bit away from this and deviate from the fixed point. So I start with an initial point S that is, let's write it, S star plus a little bit away. Just like in the picture that I have here, I started with a fixed point, and I said I go away by an amount that I had parameterized by t and h. Now I have essentially a whole line of deviations forming a vector. I act with Rb on this, and I note that if delta S goes to 0, then I should go back to S star. But if delta S is small, maybe I can look at the delta S prime, which is a linearized version of these transformations. So basically these transformations are highly nonlinear, just as the transformation over here, in principle, would have been highly nonlinear. But then I expanded it around the point t and h equals to 0. Similarly, I'm assuming that this delta S is small, and therefore delta S prime can be related to delta S through the action of a matrix that is a linearized version. Let's call it here RL of b. So this is a linearized transformation, which means that it's really a matrix. In this particular case, in principle, I started with a 2 by 2 matrix. The off diagonal terms were 0, so it was only the diagonal terms that mattered.
But in general, it would be a matrix, whose size would be the square of whatever the size of the parameter space is that I am looking at. Now when you have a matrix, it's always good to think in terms of its eigenvalues and eigendirections. In this problem that I had over here, symmetries had already diagonalized the matrix. I didn't have off diagonal terms. But I don't know here. There could be all kinds of off diagonal terms. So the properties are captured by diagonalizing RL, which means that I find a set of vectors in this space-- let's call them Oi-- such that under the action of this, I will get lambda i times Oi. Of course, the transformation depends on the rescaling parameter, so there should be a b here. Now of course, you will get a totally different matrix for each b. So is it really hopeless, that for each b I have to look at a new matrix, new diagonalization, et cetera? Well, exactly this thing that we had over here now comes into play, because I know that if I make a transformation of size b1 followed by a transformation of size b2, the answer is a transformation of size b1 b2. And it doesn't matter in which order I do it. AUDIENCE: Can't you just mix notation, because L used to be [INAUDIBLE]? PROFESSOR: Sorry. So in particular, I see that these linearized matrices commute with each other for different values of b. And again, from your quantum mechanics, you probably know that if matrices commute, then they have the same eigenvectors. So essentially, I was correct here in putting no index b on these eigenvectors, because they are independent of b, whereas the eigenvalues, in principle, depend on b. And how they depend on b is also determined by this transformation: that is, lambda i of b1 times lambda i of b2 should be the same thing as lambda i of b1 b2. And of course, lambda i of 1 should be 1. If you don't change scale, nothing should change.
And this is exactly the same set of conditions as we have over here, which means that we know that the eigenvalues lambda i can be written as b to the power of some set of yi. So we just generalized what we had done before, now to this space that includes many parameters. So the story is now something like this. There is this multi-dimensional space with lots and lots of parameters-- t, u, v, blah, blah, blah, many of them. And somewhere in this space of parameters, presumably there is a fixed point, S star. Now in the vicinity of that S star, I have established that there are some particular directions that I can obtain by diagonalizing this. So let's imagine that this is one direction, this is another direction, this is a third direction. And that if I start with a beta h-- well, actually, let's do this. That is, if I start with an S that is S star plus whatever the projections of my components are along these different directions-- so let's call them, let's say, ai along these Oi hat, just to make sure we kind of think of them as vectors-- then under rescaling, I will go to S prime, which is S star plus sum over i, ai b to the yi Oi. That is, along some of these directions, the component will get stretched if yi is positive. It will get diminished if yi is negative. And so now some notation comes into play. If yi is positive, the corresponding direction is called relevant. The eigendirection is relevant. If yi is negative, the corresponding eigendirection is irrelevant. And very occasionally, we may run into the case where yi is 0. And there is a terminology. The corresponding eigendirection is marginal. And what that means is that I need to resort to higher order terms to see whether it is attracted or repelled by the fixed point. So we need higher orders. After all, so far I have only linearized the transformation. Now the set of irrelevant directions of this particular fixed point, S star, defines the basin of attraction of S star.
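This linear-algebra story is easy to check numerically. A toy sketch (the matrix of eigendirections V and the exponents y = (0.9, -0.4) below are invented for illustration, not taken from any model): build RL(b) with off-diagonal entries, confirm the group property RL(b1) RL(b2) = RL(b1 b2), and recover the yi from the eigenvalues as log lambda over log b.

```python
import numpy as np

# Toy linearized RG transformation with off-diagonal terms.
# V (the eigendirections) and the exponents y = (0.9, -0.4) are invented
# for illustration; they are not taken from any particular model.
V = np.array([[1.0, 1.0],
              [0.5, -1.0]])
y = np.array([0.9, -0.4])  # one relevant (y > 0), one irrelevant (y < 0)

def RL(b):
    """Linearized RG matrix for rescaling factor b: V diag(b**y) V^{-1}."""
    return V @ np.diag(b ** y) @ np.linalg.inv(V)

# Group property: RL(b1) RL(b2) = RL(b1*b2), and the matrices commute.
b1, b2 = 1.7, 2.3
assert np.allclose(RL(b1) @ RL(b2), RL(b1 * b2))
assert np.allclose(RL(b1) @ RL(b2), RL(b2) @ RL(b1))

# Recover y_i from eigenvalues: lambda_i(b) = b**y_i  =>  y_i = ln(lambda)/ln(b)
lam = np.linalg.eigvals(RL(2.0))
recovered = np.sort(np.log(lam.real) / np.log(2.0))
print(recovered)  # approximately [-0.4, 0.9]
```

The eigenvectors of RL(b) are the same for every b, which is exactly the commutation argument made in the lecture.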
So let me go back to the picture that I have over here and be precise, and use the arrow going away as an indication that the corresponding y is positive, and I'm forced out along this direction. Let me choose going in as an indicator that the corresponding y is negative. And as I make b larger and larger, I shrink along this axis. So in this three dimensional representation that I have over there, I have one relevant direction and two irrelevant directions. The two irrelevant directions will define a plane in this three dimensional space, which is the basin of attraction. So basically these two define a surface, and presumably any point that is on this surface in the three dimensional picture, under looking at larger and larger scales, will get attracted to the fixed point. If you are away from the surface, maybe you will approach here, and then you will be pushed out. All right, fine. Now let's go and look at the following. We have a formula, xi of t and h. Or quite generally, xi under rescaling is b times the new xi. That is, the new correlation length under any one of these transformations, xi prime, is the old xi divided by a factor of b. So if I look at the fixed point-- so if I ask what is xi at the fixed point-- then under the transformation, I have the same parameters. So xi at the fixed point should be the xi of the fixed point divided by b. There are only two solutions to this. Either xi of S star is 0 or xi of S star is infinite. Now we introduce physics. Xi being 0 means that I have units that are completely uncorrelated to each other. Each one of them does whatever it wants. So this describes essentially, let's say, a system at infinite temperature. Every degree of freedom does whatever it wants. Well, I should say this corresponds to disordered or ordered phases.
Because after all, we said that when we go to the ordered states also, there is an overall magnetization, but fluctuations around the overall magnetization have only a finite correlation length. And as you go further and further into the ordered phase, that correlation length shrinks to 0. So there is a similarity between what goes on at very high temperature and what goes on at very low temperature as far as the correlation of fluctuations is concerned. There is, of course, a long range order in one case that is absent in the other. But the correlation of fluctuations in both of those cases basically becomes finite, and under rescaling, goes all the way to 0. And clearly the other solution, xi infinite, is the interesting case, where it corresponds to the critical point. So we've established that, once we have found this fixed point, those sets of parameters are what can give us the scale invariant behavior that we want. Now this list is hundreds of parameters. So this fixed point corresponds to a very special point in this hundreds-of-parameters space. So let's say there is one point somewhere there which is the fixed point. And then you take your magnet and you change your temperature; are you going to hit that point? The answer is, no. Generically, you are not going to hit that point. But that's no problem. Why? Because of this basin of attraction. For any point on the basin of attraction, I do rescaling, and I find that xi prime is xi over b. It becomes smaller. So the correlation length generically tends to become smaller. But ultimately, you end up at this point. And at this point, the correlation length is infinite. So any point on this basin of attraction, in fact, has infinite correlation length. At every point on the basin, xi prime equals xi over b, while the flow ends up at the fixed point where xi is infinite, and hence xi has to be infinite. Yes. AUDIENCE: Question. Why should there be only one fixed point? PROFESSOR: There is no reason. AUDIENCE: OK. So this is just an example? PROFESSOR: Yeah. So locally, let's say that we found such a fixed point.
Maybe globally, there are hundreds of them. I don't know. So that will always be a question in our minds. So if I just write down for you the most general set of transformations, who knows what's happening? Ultimately, we have to be guided by physics. We have to say that, if in the space of all parametrizations there are some that have no physical correspondence, we throw them out; we seek things that can be matched to our physical system. Yes? AUDIENCE: If there are multiple fixed points, do the planes of the basins of attraction have to be parallel to each other? PROFESSOR: They may have to have some conditions on non-intersecting or whatever. These are only linear in the vicinity of the fixed point. So in principle, they could be highly curved surfaces with all kinds of structures and things that I don't know. Yes? AUDIENCE: Is there any reason why you might or might not have an attracting point that is actually a more complicated structure, like, say, a limit cycle or even a [INAUDIBLE]? PROFESSOR: Yeah. So again, we are governed ultimately by physics. When I write these equations, they are as general as the equations that the people in dynamical systems use, which also include cycles, chaotic attractors, all kinds of strange things. And we have to hope that when we apply this procedure to an appropriate physical system, the kind of equations that we get are such that their behavior is indicative of the physics. So there is one case I know where people sort of found chaotic renormalization group trajectories for some kind of a [INAUDIBLE] system. But always, again, this is a very general procedure. We have to limit the mathematics, ultimately, by what the physical process is. So it's good that you know that these equations can do all kinds of strange things. But when we take a particular physical system, we have to beat on them until they behave properly. So let's imagine that we have a situation, such as this, where we have three parameters. Two of them are irrelevant.
One of them is relevant. Then presumably, I take my physical system at some temperature, and it would correspond to being at some point in this phase diagram. Some color that we don't have. Let's say over here. And I change the temperature. And I will trace some trajectory-- in this case, in three dimensional space. And this is a line in this three dimensional space. And experimentally, I've been told that if I take, let's say, my piece of iron and I change temperature, at some point I go through a point that has infinite correlations. So I have to conclude that my trajectory for iron will intersect with the surface at some point. And I'll say, OK, I take nickel. Nickel would be something else. And I change the temperature of nickel, and I will be doing something completely different. But that experiment also has a point where you have a ferromagnetic transition, so it must hit this surface. Then you do cobalt, where some other trajectory comes and hits the surface. Now what we now know is that when we rescale the system sufficiently, all of them ultimately are described, at the point where they have infinite correlation length, by what is going on over here. So if I take iron, nickel, cobalt, clearly at the level of atoms and molecules, they are very different from each other. And the difference between ironness, nickelness, cobaltness is really in all of these irrelevant parameters. And as I go and look at larger and larger scales, they all diminish and go away. And at large scale, I see the same thing, where all of the individual details have been washed out. So this is able to capture the idea of universality. But there is a very important caveat to this, which is that for the experimental system, whether you take iron or cobalt or some mixture of these different elements, you change one parameter, temperature, and you always see a transition from, let's say, paramagnetic to ferromagnetic behavior.
Now if I have, say, a line here in three dimensional space and I draw another line that corresponds to change in temperature, I will generically not intersect it. I have to do something very special to intersect that line. So in order that generically I have a phase transition-- which is what my experimentalist friends tell me-- I know that I can only have one relevant direction, because the dimensionality of the basin of attraction is the dimensionality of the space minus however many relevant directions I have. And I've been told by experimentalists that they change one parameter, and generically they hit the surface. So that's part of the story. I better find a theory that, at the end of the day, when I do all of this, gives me a fixed point that not only is well-behaved and is not a limit cycle, but also a fixed point that has one and only one relevant direction, if that's the physical system that I'm describing. Now of course, maybe that was for the superfluid, where they could only change temperature. And you could have a situation where the magnet comes into play and they say, oh, actually we also have the magnetic field, and we really have to go to the space of zero field. And then if I expand my space of parameters here to include terms that break the symmetry, in that generalized space, I should have only two relevant directions. So it is a kind of strange story, that all we are doing here is mathematics, but at the end of the day, we have to get the mathematics to have very specific properties that are dictated by very rough things about experiments. So this was kind of conceptually rich. So I'll let you digest that for a while. And next lecture, we will start actually doing this procedure and finding these kinds of [INAUDIBLE] relations. |
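As a preview of what "actually doing this procedure" can look like in the simplest setting, here is a sketch using the Migdal-Kadanoff bond-moving approximation for the 2D Ising model with b = 2 (a crude approximate recursion chosen only for illustration, not the method developed in the lectures): the single coupling K flows as tanh K' = tanh squared of 2K, there is one nontrivial fixed point, and linearizing around it gives one relevant eigenvalue b to the yt.

```python
import math

# Approximate Migdal-Kadanoff recursion for the 2D Ising coupling K with b = 2:
# bond moving K -> 2K, then decimation tanh K' = tanh(2K)**2.
def recurse(K):
    return math.atanh(math.tanh(2 * K) ** 2)

# Find the nontrivial fixed point K* = R(K*) by bisection:
# R(K) - K changes sign on [0.1, 1.0].
lo, hi = 0.1, 1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if recurse(mid) - mid < 0:
        lo = mid
    else:
        hi = mid
K_star = 0.5 * (lo + hi)

# Linearize around the fixed point: lambda = dK'/dK at K*, y_t = ln(lambda)/ln(2).
eps = 1e-6
lam = (recurse(K_star + eps) - recurse(K_star - eps)) / (2 * eps)
y_t = math.log(lam) / math.log(2.0)

print(K_star, y_t)  # roughly 0.305 and 0.75 (the exact 2D Ising values are 0.4407 and 1)
```

Couplings below K_star flow to 0 (disordered) and those above flow to infinity (ordered), so this fixed point has exactly one relevant direction, which is the property the experimental argument above demands.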
MIT_8334_Statistical_Mechanics_II_Spring_2014 | 15_Series_Expansions_Part_1.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, let's start. So it's good to remind ourselves why we are doing what we are doing today. So we've seen that in a number of cases, we look at something like the coexistence line of gas and liquid that terminates at the critical point. And that in the vicinity of this critical point, we see various thermodynamic quantities and correlation functions that have properties that are independent of the materials that are considered. So this led to this concept of universality, and we were able to justify that by looking at properties of this statistical field. And we ended up with [INAUDIBLE] renormalization group procedure, which classified the different universality classes according to the number of components of the order parameter, the thing that categorizes the coexisting phases, and the dimensionality of space. And that, in particular, something like a liquid-gas system would correspond to n equals to 1. Another example that would correspond to that would be, for example, a mixture of two metals in a binary alloy. You can have the different components mixed or phase separated from each other. So the renormalization group method gave us the reason for why there is this universality, but we found that calculating the exponents was a hard task, coming down from four dimensions. So the question is, given that these models, or these numbers, the singularities here, are universal, can we obtain them from a different perspective? And so let's say we are focused on this kind of liquid-gas system, which belongs to this n equals to 1 universality class.
So we can try to imagine the simplest model that we can try to solve that belongs to that universality class. And again, maybe thinking in terms of a binary alloy, something that has two possible values. In the liquid-gas, it could be cells that are either empty or filled with a particle. And so this binary model is the Ising model, where, at each site of a lattice, we put a variable that is minus or plus 1. And so the idea is, again, if I take any one of these Ising models and I coarse-grain them, I will end up with the same statistical field, and it would have the same universality class. But if I make a sufficiently simple version of these models, maybe I can do something else and solve them in a manner that these critical behaviors can come up in an easier fashion. So let's say we are interested in two dimensions or three dimensions. I can draw two dimensions better. We draw a square lattice. On each site of it, we put one of these variables. And in order to capture this tendency that there is a possibility of coexistence, where you have patches that are made of liquid or gas, or made of copper or zinc in our binary alloy, we need to have a tendency for things that are close to each other to be in the same state, which we can capture by a Hamiltonian, which is a sum over nearest neighbors, that gives an enhanced weight if they are parallel. And whatever that coupling is, once it is rescaled by kT, this combination, the energy divided by kT, we can parametrize by a dimensionless number K. And calculating the behavior of the system as a function of temperature, as the strength of the coupling in this simplified model, amounts to calculating the partition function as a function of a parameter K, which is a sum over-- if I'm in a system that has N sites, 2 to the N configurations-- of a weight that tries to make variables that are next to each other be in the same state.
So clearly, what is captured here is a competition between energy-- energy would like everybody to be in the same state-- versus entropy. Entropy wants to have different states at each site. So you'll have a factor of 2 per site, as opposed to everybody being aligned, which is essentially one state. And so that competition potentially could lead you to a phase transition between something that has coexistence at low temperature and something that is disordered at high temperatures. So now we have just recast the problem. Rather than having a partition function which was a functional integral over all configurations of the statistical field, I have to do this partition function, which has a finite number of configurations, but it's still an interacting theory. I cannot independently move the variable at each site. So the question is, are there approaches by which I can calculate this? And one set of approaches is to start with a limit that I can solve and start expanding around that. And these expansions, that are analogous to the perturbation expansions that we learned in 8.333 about interacting systems, in this case are usually called series expansions. One would perform them on a lattice. Now, I kind of hinted at two limits of the problem where we know exactly what is happening, and those lead to two different series expansions. One of them is the low-temperature expansions. And here the idea is that I know what the system is doing at T equals to 0. At T equals to 0, I have to find the configuration that minimizes the energy. T equals to 0 is also equivalent to K going to infinity. I have to find a state that maximizes this weight, and that's obviously the case where all of the spins are either plus or minus. So all sigma i equals to plus 1 or all sigma i equals to minus 1. But for the sake of doing one or the other, let's imagine that they are all plus and that I am solving the problem for the generalization of the square and cube to a d-dimensional lattice.
After all, we were doing d dimensions in general. So in d dimensions, each site will have d bonds associated with it. And so if I ask, what is the weight that I will get at 0 temperature, essentially each site would contribute d factors of e to the K. So the weight that I would get at T equals to 0-- let's call that Z of T equals to 0-- goes as e to the dNK. There are N sites. Each one of them has d bonds in d dimensions. Of course, each one of them has 2d neighbors, but then I have to count the number of bonds per site. So basically, each bond is shared by two neighbors, so half of it contributes to this site. And there are two possibilities. So the partition function at T equals to 0 is simply 2e to the dNK. It's just the contribution of the two ground states. Now, we are interested in the limit where T goes to 0. So at T equals to 0, I know what is happening. Now, what I will get as I allow the temperature to become larger is that, at some cost, I am able to flip some of these spins from, say, the plus to minus. And I will get, in this case, islands of negative spin in a sea of plus. And these islands will give a contribution that is exponentially small in K, with an exponent having to do with the bonds that I have broken. And by broken, I mean gone from the, well, highly satisfied plus-plus state to the unsatisfied plus-minus state-- in fact, a cost of 2K times the number of broken bonds in the exponent. So we can very easily write the first few terms in this series. So let's make a list of the excitations, or islands, that I can make; how many ways I can make each one, which I will call degeneracy; and the number of broken bonds. So clearly, the simplest thing that I can do in a sea of pluses is to make one island, which is simply a site that has been previously plus and now has gone to become a minus. And this particular excitation can occur in any one of N places if I have a lattice of size N. And I'm going to ignore any corrections that I may have from the edges. If you want you can do that, and that'd be more precise.
But let's focus, essentially, on things that are proportional to N. Then how many bonds have I broken? You can see that in two dimensions, I have broken four bonds. In three dimensions, it would have been six. So essentially, it is twice the number of dimensions, 2d bonds. And so the contribution to the weight is going to be e to the minus 2K times 2d, since each broken bond went from plus K to minus K, a cost of 2K. Now, the next thing that I can do, the lowest energy excitation after that, is to put two minuses that are next to each other in this sea of pluses. OK? Now, you can see that in two dimensions, I can orient this pair along the x-direction or along the y-direction. And in general, there would be d directions, so I would have dN. Roughly, you would say that the number of bonds that you have broken is twice what you had before if the two were separate. But there is this thing in between that is now actually a satisfied bond. So you can convince yourself that, actually, if the two of them were separate, these two minus excitations, I would have 4d. But because I joined them, essentially I have 2d minus 1 from each one of them, and there's two of those. And of course, the next lowest excitation would indeed be to have two minuses that are not adjacent, totally separate from each other. And the number of these-- well, this is something to count. The first one can be in any one of N places. The next one can be in any one of N minus 2d minus 1 places. It cannot be on the same one, and it cannot be in any of the 2d neighbors. And I should avoid double counting, so there is a factor of 2 in the denominator. And the cost of this is simply twice that of a single flip, so this is 4d. So if I want to start writing a partition function expanded beyond what I have at 0 temperature, what I would have would be 2e to the dNK-- the zero temperature contribution-- times 1 plus N e to the minus 4dK plus dN e to the minus 4(2d minus 1)K.
And then from the other one that I have written, N times N minus 2d minus 1, over 2, e to the minus 8dK. And I can keep going and adding higher and higher order terms of the series. OK? OK. Once I have the partition function, I can start calculating the energy, which would be minus d log Z with respect to d beta. What is beta? Well, I said that this factor K is something like J over kT, which is beta times J. So assuming that I have a fixed coupling J and I'm changing temperature, so that the variations of K are reflecting the inverse temperature beta, then I can certainly pull out a factor of J, which is a constant. And all I need to do is to take minus J d by dK of the log of the expression that I have above there. OK? So let's take the log of that expression. I have log of 2 plus dNK from here. And then I have the log of 1 plus the terms in the series that I have calculated perturbatively. Now, the log of 1 plus a small quantity I can always expand as the small quantity minus x squared over 2, et cetera. You may worry whether or not, with N ultimately going to infinity, this is a small quantity. Neglecting that for the time being, if I look at this as log of 1 plus a small quantity, from here I would get N e to the minus 4dK plus dN e to the minus 4(2d minus 1)K plus N times N minus 1 minus 2d, over 2, e to the minus 8dK, and so forth. But then remember that log of 1 plus x is x minus x squared over 2 plus x cubed over 3, and so forth. So if x is my small quantity, I will have a correction, which is minus x squared over 2. Let's just do it for the first term. I will get minus N squared over 2, e to the minus 8dK, and there will be a whole bunch of higher-order terms. OK? Now, where am I going with this? Ultimately, I want to calculate various quantities that are extensive, in the sense that they are proportional to N, and when I divide by N I will get something like energy per site. So if I do that, if I divide this whole thing by N, I can see that here I have a term that is log 2 divided by N.
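Parenthetically, the degeneracy factors N, dN, and N(N - 2d - 1)/2 in this series can be checked by brute-force enumeration on a small lattice. A sketch, assuming a 4 by 4 periodic lattice in d = 2 (my choice, small enough to enumerate all 2^16 configurations): deep in the low-temperature phase, the exact partition function should agree with the terms written so far, up to corrections of order e to the minus 8dK.

```python
import math
from itertools import product

# Exact partition function of the d = 2 Ising model on a small periodic
# L x L lattice, compared with the first terms of the low-temperature series
#   Z ~ 2 e^{dNK} [1 + N e^{-4dK} + dN e^{-4(2d-1)K} + ...]
L, d = 4, 2
N = L * L

# List of nearest-neighbor bonds (i, j) with periodic boundaries.
bonds = []
for x in range(L):
    for y in range(L):
        i = x * L + y
        bonds.append((i, ((x + 1) % L) * L + y))  # bond in the x direction
        bonds.append((i, x * L + (y + 1) % L))    # bond in the y direction

def Z_exact(K):
    total = 0.0
    for conf in product((-1, 1), repeat=N):
        E = sum(conf[i] * conf[j] for i, j in bonds)
        total += math.exp(K * E)
    return total

K = 1.5  # deep in the low-temperature phase
r = Z_exact(K) / (2 * math.exp(d * N * K)) - 1
series = N * math.exp(-4 * d * K) + d * N * math.exp(-4 * (2 * d - 1) * K)
print(r, series)  # agree up to corrections of order e^{-8dK}
```

The residual difference between r and the series comes from excitations with eight broken bonds (two separated flips, but also dominoes of three or four flipped spins), which is exactly why the counting at that order needs more care.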
In the N goes to infinity limit, it's a term of order 1 over N that I can neglect. But all of these other terms are proportional to N. And when I divide by N, I can drop these factors of N. Well, except that I have a couple of terms that, if left by themselves, potentially could have been of order N squared. I have N squared over 2, but fortunately, you can see that it cancels out over there. Now, the reason this happens, and also the reason this series is legitimate, is because we already did something very similar to that in 8.333 when we were doing these cumulant expansions. And when we were doing these cumulant expansions, we obtained a series for the grand partition function, which was a whole bunch of terms. But when we took the log, only the connected terms survived. And the connected terms were the things that, because they had a center of mass, were giving you a factor that was proportional to volume. And here you expect that ultimately everything that I will get here, if I calculate, let's say, log Z properly and then divide by N, should be something that is of order of 1. It shouldn't be of order N, or N cubed, or any of these other powers. So essentially, the purpose of all of these higher-order terms is really to subtract off things such as this that would arise in the counting when we look at islands and excitations that are disconnected. So I could have something right here, something right here. So this would be, essentially, a product of the contributions of these different islands. As long as they are disconnected, there would be a subtraction from some term in the series that would get rid of that and would ensure that these additional factors of N, which come because I can move each island over the entire lattice, would disappear. So I have this series, and now I can basically take the derivatives. So I have minus J, and I take d by dK of the various terms that have survived. The first one is d.
So minus dJ is essentially the energy per site that I would have at 0 temperature. I have strength J, d bonds per site. And then the excitations will start to reduce that and correct that. And so from here, I would get minus 4d e to the minus 4dk. From here, I would get minus 4 2d minus 1 d e to the minus 4 2d minus 1 k. The N-squared terms disappeared. So I would have 2d plus 1 over 2. But then it gets multiplied by 8d when I take a derivative. So I will get plus 4d 2d plus 1 e to the minus 8dk, and so forth in the series. OK? So these terms that are subtractions, you can see that you can really easily connect to these primary excitations. If you like, this term corresponds to taking two of these and colliding them with each other. They cannot be on top of each other. They cannot be next to each other. And so there is a subtraction because a number of configurations are not allowed. So this is, in some sense, a kind of expansion in these excitations and the interactions among these excitations. OK? Now presumably, what is happening is that at very low temperature, you are going to get these individual simple excitations with a little bit of interaction between them. As you increase the temperature, the size of these islands will get bigger and bigger. They start to merge into each other. Configurations that you would see will be big islands in a sea. And presumably, the size of these islands is some measure of the correlation length that you have in this low temperature state. Eventually, this correlation length will hit the size of the system. And then the starting point, that you had a sea of pluses and you're exciting around it, is no longer valid. If you like, that vacuum state has become unstable, and this series, the way that we are constructing it, ceases to be valid beyond that point. OK? So let's take another step. If I've calculated the energy, I could also calculate the heat capacity, which is d by dT.
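The energy series can be checked the same way as the partition function. Again this is a sketch of my own (the 3-by-3 lattice and the coupling are arbitrary choices, not from the lecture): the thermal average of E = -J sum over bonds of sigma sigma, computed by brute force, is compared with the first terms of E/N = -J[d - 4d e^(-4dK) - 4(2d-1)d e^(-4(2d-1)K) + ...].

```python
import itertools
import math

L, d, K, J = 3, 2, 1.5, 1.0   # small periodic square lattice, K = beta*J
N = L * L

def idx(i, j):
    return (i % L) * L + (j % L)

bonds = [(idx(i, j), idx(i + 1, j)) for i in range(L) for j in range(L)] \
      + [(idx(i, j), idx(i, j + 1)) for i in range(L) for j in range(L)]

# Thermal average of E = -J * sum_bonds sigma_i sigma_j.
Z = E_sum = 0.0
for s in itertools.product((-1, 1), repeat=N):
    b = sum(s[a] * s[c] for a, c in bonds)
    w = math.exp(K * b)
    Z += w
    E_sum += -J * b * w
E_per_site = E_sum / Z / N

# Series from the lecture: ground-state energy plus excitation corrections.
E_series = -J * (d
                 - 4 * d * math.exp(-4 * d * K)
                 - 4 * (2 * d - 1) * d * math.exp(-4 * (2 * d - 1) * K))

print(E_per_site, E_series)   # agree to better than 1e-5
```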
Actually, I expect the heat capacity to be extensive also, so I'll divide by N. So I will look at the heat capacity per site. I know that the natural units of heat capacity are kB, which has dimensions of energy divided by temperature. So I divide by kB. So here I will have kBT. But then I notice that kBT, these are related inversely to K, capital K. It is J over K. So I can write this as J over K, and d by d-- 1 over K will give me a minus K squared. I will have a factor of 1 over J, and the 1 over J actually cancels this factor of J here. So all I need to do-- well, actually, let me write it, J d by dK of this E over N. So the expression that I have above, I have to take another derivative with respect to K multiplied by minus K squared over J. The J's cancel out, and so I will have a series that will be proportional to K squared. Good, I made everything dimensionless. And then the first term that will contribute will be 16 d squared e to the minus 4dK, from here. And from here, I will get 16 2d minus 1 squared d e to the minus 4 2d minus 1 K. And from here, I would get minus 32d 2d plus 1 e to the minus 8dK, and then so forth. OK? So you can see that this is something that is a kind of mechanical process, that in the '40s and '50s, without even the need for any computers, people could sit down and draw excitations, provide these terms, and go to higher and higher order terms in the series. Now, the reason that they were going to do this is the expectation that if I look at this heat capacity as a function of something like temperature, which goes like 1 over K, for example, then it starts at 0. And if we get corrections from these higher and higher order terms in the series-- I calculated the first few-- I don't know what will happen if I were to include higher and higher order terms. But my expectation is that, say, at least at some point when this expansion from low temperature breaks down, I will have a divergence, let's say, of the heat capacity.
Or maybe I calculated susceptibility or some other quantity, and I expect to have some singularity. And maybe by looking and fitting more terms in the series, one can guess what the exponent and the location of the singularity is. So you can see that, actually in this case, the natural variable that I am expanding in is not K, but e to the minus 2dK-- sorry, e to the minus 2K because each excitation will have a number of broken bonds that I have to calculate. Each one of them makes a contribution like this. So maybe we can call this our new variable. And we have a series that, as a function of this or some other variable, has a singularity. Actually, you should be able to, first of all, convince yourself that the nature of the singularity is not modified by any mapping that is analytical at the point of the singularity. So if the heat capacity as a function of k has a particular divergence, as a function of u it will have exactly the same divergence. In particular, we expect that as u approaches some critical value, the kinds of functions that we are interested in have a behavior, a singular behavior, that is something like 1 minus u over uC to the power of minus alpha. Let's say for the heat capacity, I would expect some kind of a singularity such as this. If I had a pure function such as this and I constructed an expansion in u, what do I get? I will get 1 plus alpha u over uC plus alpha alpha plus 1 over 2 uC squared u squared, and so forth. It's just a binomial series expanded. The l-th term in the series would be alpha alpha plus 1 up to alpha plus l minus 1 divided by l factorial-- the 2 in the previous term was really 2 factorial-- over uC to the power of l, u to the l, and so forth. OK? Now, typically, one of the ways that you look at series and decide whether it's a singular convergent series or what the behavior is is to look at the ratio of subsequent terms. So let's say that when I calculated my function C as a function of u, I constructed a series whose terms had coefficients that I will call al. OK?
So here, if you had exactly this series, you would say that the ratio al divided by al minus 1 is essentially the ratio of one of these factors compared to the previous one. And every time you add one of these factors, you add a factor that is like this alpha plus l minus 1; l factorial compared to l minus 1 factorial has an extra factor of l; and then you have uC. And I can rewrite this as uC inverse-- l divided by l is 1, and then I have minus 1 minus alpha divided by l. OK? So a pure divergence of the form that I have over here would predict that the ratio of subsequent terms would be something like this. And presumably, if you go sufficiently high in the series, in order to reproduce this divergence you must have that form. So what you could do as a test is to plot, for your actual series, what the ratio of these terms is as a function of 1 over l. So you can start with the ratio of the second to first term. You would be at 1/2. Then you would go 1/3, then you would go 1/4, you would have 1/5, and basically you would have a set of points. And you would plot what the location is for the first term in the series, the next term in the series, the next term in the series, and so forth. And if you are lucky, you would be able to then pass a straight line at large distances in the series. And the intercept of that extrapolated line would be your inverse of the singular point. And the slope of this line would give you 1 minus alpha or minus 1 minus alpha. OK? So there is really, a priori, not much reason to hope that that will happen because you can say that if I look at the series that is A 1 minus u over uC to the minus alpha, plus I add an analytic part, which is sum l equals 1 to, say, 52 of bl u to the l. For any bl, this function has exactly the same singularity as the original one. And yet the first 52 terms in the series, because of this additional analytical form, have nothing to do with the eventual singularity. They're going to be masking that.
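The ratio-test procedure is easy to demonstrate on a series whose singularity is known in advance. In the sketch below (my own illustration; uC = 0.5 and alpha = 0.11 are made-up values), the coefficients of (1 - u/uC)^(-alpha) are generated recursively, and the intercept and slope of a_l/a_{l-1} plotted against 1/l recover 1/uC and the exponent.

```python
# Ratio test on a pure binomial singularity (1 - u/uC)^(-alpha):
# the exact coefficient ratio is a_l / a_{l-1} = (1/uC)(1 - (1-alpha)/l),
# linear in 1/l, so a straight-line extrapolation recovers uC and alpha.
uC, alpha = 0.5, 0.11          # made-up values for the demonstration

a = [1.0]                      # a_0 = 1
for l in range(1, 21):
    a.append(a[-1] * (alpha + l - 1) / (l * uC))

ratios = [a[l] / a[l - 1] for l in range(1, 21)]   # ratios[l-1] is order l

# Fit a line through the two highest-order points (1/19 and 1/20).
x1, x2 = 1 / 19, 1 / 20
y1, y2 = ratios[18], ratios[19]
slope = (y1 - y2) / (x1 - x2)
intercept = y2 - slope * x2

uC_est = 1 / intercept
alpha_est = 1 + slope * uC_est       # since slope = -(1 - alpha)/uC
print(uC_est, alpha_est)             # recovers 0.5 and 0.11
```

For this pure binomial series the ratios lie exactly on the straight line, which is precisely why the test works; the lecture's point is that real series only approach this behavior at high order.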
So there is no reason for you to expect that this should work. But when people do this, they find that, let's say, for d equals to 2, up to some jumping up and down, they get a reasonable straight line. And the exponent that they get would correspond very closely to the alpha of 0, which is the logarithmic divergence that one gets. So this is, for d equals to 2, and then they repeat it, let's say, for d equals to 3, they get a different set of points. OK? Maybe not perfectly on a straight line, but you can still extrapolate and conclude from that that you'll have an alpha which is roughly 0.11 when d equals to 3, which is quite good. So for some reason or other, these lattice models are kind of sufficiently simple that, in an appropriate expansion, they don't seem to give you that much of a problem. And so people have gone and calculated series, let's say, this was in '50s and '60s, just by drawing things on hand. And maybe with some primitive computers, you can go to order of 20 terms in this series, and then extrapolate exponents for various quantities. OK? But it's not as simple as that. And the reason I calculated the first three terms for you was to show you that what I told you here was clearly a lie. Why is that? Because of the three terms that I explicitly calculated for you in that series, the third one is negative. Right? So clearly, if I were to plot that, I will get something over here. Right? So what's going on there is a different issue. And people have developed kind of methodologies and ways to look at series and guess what is going on and yet continue to extract exponents. So one potential origin for alternating signs-- and any series that has a divergence such as the one that I have indicated for you will have, eventually, signs that need to be positive-- has to do with the following. Let's say if I take a series, which is 1 over 1 minus z/2. OK? This is a very nice series. It's 1 plus z/2 plus z squared/4 plus z cubed/8.
You could apply this ratio test to this series and conclude that you have a linear divergence. Now, suppose I multiply that by 1 over 1 plus z squared, which is a function that's perfectly well-behaved as a function of z. Yet if I multiply it here, I will get 1 minus z squared plus z to the fourth minus z to the sixth. And what it does is it kind of distorts what is happening over here. Actually, in this series you can see it kind of becomes ill-defined when z is of order of 1. It changes the signs, et cetera. But the function itself has a perfectly good singularity that appears at z equals to 2. And starting from an expansion from 0, there should be no problems along the line until you hit z of 2. What is the reason for these alternating signs? It is because you should be looking at the complex z plane. And in the complex z plane, you have poles at plus and minus i which are located closer to the origin than you have at 2. So basically, your series will start to have problems by the time you hit here, and that problem is reflected in the alternating behavior. It's also showing up over there. Yet it has nothing to do with going along the real axis and encountering the singularity that you are after. OK? So one thing that you can do is to say, well, who said I should use z as my variable? Maybe I can choose some other function v of z. OK? And then when I choose the appropriate thing, the singularity on the real axis will be pushed to v of 2. But maybe I choose an appropriate function v of z such that the other singularities are pushed very far away so that the first singularity that I encounter is over here. OK? And it turns out that if you take this series over here and rather than working with e to the minus k, we recast things in terms of tanh K-- let's call that v-- which is e to the K minus e to the minus K over e to the K plus e to the minus K-- well actually, tanh K I can also write as e to the 2K minus 1 over e to the 2K plus 1. I mean, it's just a transformation.
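The sign pattern can be seen directly from the Taylor coefficients. This sketch (my own numerical illustration of the example in the lecture) multiplies the coefficients of 1/(1 - z/2) by those of 1/(1 + z^2); the product has coefficients whose signs oscillate in pairs, so the naive ratio test breaks down even though the real-axis singularity at z = 2 is untouched.

```python
n_max = 12

# Coefficients of 1/(1 - z/2): 1, 1/2, 1/4, 1/8, ...
a = [0.5 ** n for n in range(n_max)]
# Coefficients of 1/(1 + z^2): 1, 0, -1, 0, 1, 0, -1, ...
b = [0 if n % 2 else (-1) ** (n // 2) for n in range(n_max)]

# Cauchy product gives the coefficients of the full function.
c = [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(n_max)]
print(c)
# Signs come out +, +, -, -, +, +, ... because the poles at z = +/- i
# are closer to the origin than the pole at z = 2; some successive
# ratios c_n / c_{n-1} are negative, so no straight-line fit works.
```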
So I can replace e to the minus 2K with some function v, substitute for u in that series, and I will have a different function as an expansion in powers of v. And once people do that, same thing happens as here. You'll find a function all of whose terms are, in fact, positive, and the methods that I mentioned to you over here can be applied. After such transformation, you get very nice behaviors. OK? So there seems to be some guesswork into finding the appropriate transformation. There are other methods for dealing with series and extracting singularities called Padé approximants, et cetera, which I won't go into. But there are kind of, again, clever mathematical tricks for extracting singularities out of series such as this. OK? So I'll tell you shortly why this tanh K is really a good expansion factor. It turns out that for Ising models, it's actually the right expansion factor if we go to the other limit of high temperatures. OK? So basically, now at T going to infinity, you would say that sigma i is minus or plus 1 with equal probability. As T goes to infinity, this factor that encodes the tendency of spins to be next to each other has been scaled to 0, so I know exactly what is going on at infinite temperature. Basically, at each site, I have an independent variable that is decoupled from everything else. So I can start expanding around that for, say, the partition function. Let's think of it for a general spin system. So I will write it as a trace over, let's say, if I have a Potts model rather than two values, I would have q values of something like e to the minus beta H, again, trying to be reasonably general. And the idea is that as you go to infinite temperature, beta goes to 0, and this function you can expand in a series 1 minus beta H plus beta squared H squared over 2, and so forth. Now, the trace of 1 is essentially summing over all possible states.
Let's say the two states that you would have for the Ising model or however many that you have for Potts models at each site independently. So that can give me some partition function that I will call Z0. It is simply 2 to the n for the Ising model. But once I factor that, you can see that the rest of the terms in the series can be regarded as expectation values of this Hamiltonian with respect to this weight in which all of the degrees of freedom are treated as independent, unconstrained variables. And of course, the thing that I'm interested in is log of the partition function. And so that will give me log of Z0, and then I have the log of this series. And then you can see that that series is a generating function for the moments of the Hamiltonian. So its log will be the generating function for the cumulants, so H to the l 0, the cumulant. So the variance at the second order and appropriate cumulant at higher orders. OK? So let's try to calculate this for the Ising model, where my minus beta H is K sum over i, j sigma i sigma j. OK? Then at the lowest order, what do I get? The average of minus beta H is K sum over i, j average of sigma i sigma j with this zeroth-order weight. But as I emphasized, with the zeroth-order weight, every site independently can be plus or minus. Because of the independence, I can do this. And then since each site has equal probability to be plus or minus, its average is 0. So basically, this will be 0. OK? So the first thing that can happen in that series-- if I go to the next order. So at next order, beta H squared would involve K squared sum over i, j K, l sigma i sigma j sigma K sigma l. And I have to take an average of this, which means that I have to take an average of something like this. OK. And you would say, well, again, everything is 0. Well, there is one case where it won't be 0-- if these two pairs are identical. Right? So this is going to give me K squared sum over pair i, j being the same as K, l.
Then I will get, essentially, sigma i squared sigma j squared. Sigma i squared is 1. Sigma j squared is 1. So basically, I will get 1. And this is going to give me K squared times the number of bonds. OK? So you can see that I can start thinking of this already graphically. Because what I did over here, I said that on my lattice this sum says you pick one sigma i sigma j. If I were to pick the other sigma i sigma j over here, the average would be 0. I am forced to put two of them on top of each other. If I go to three, there is no way that I can draw a diagram that involves three pairs in which every single site occurs twice, which is what I need. Because a single site appearing by itself or three times will give me sigma i cubed is the same as sigma i. It will average to 0. So the next thing that I can do is to go to level four. At the level of four, I can certainly do something like this. I can put all four of them on top of each other, and then I get a K to the fourth contribution. Or I could put a pair here, and if they're here, for log Z that would be unacceptable because that will get subtracted out when I calculate the variance. It's not a connected piece. It's a disconnected piece. But I could have something like this, two of them turned like this. So that's four. But really, the one that is nontrivial and interesting is when I do something like this, like a square. So I go here sigma 1 sigma 2, sigma 2 sigma 3. That sigma 2 has been repeated twice and becomes sigma 2 squared and goes away. Sigma 3 sigma 4, sigma 3 repeated twice, sigma 4 repeated twice, sigma 1 repeated twice, [INAUDIBLE]. OK? So you can see that this kind of expansion will naturally lead you into an expansion in terms of loops on a lattice. So the natural form of high temperature expansions are these closed strings or loops, if you like, that you have to draw on the lattice. 
Now, it's also clear that the thing that goes between two sites, that I'm indicating by K, in all cases is likely to be repeated by putting more and more things on top of each other without modifying the effect. So I can go here to 4 and things like actually 3 and things like that. So basically, you can see that I should really do a summation over the contribution of 2, 4, et cetera all on top of each other, or 1, 3, 5 on top of each other, and call them new variables. So when we were doing the cluster expansion for particles interacting, we encountered this thing that we thought v was a good variable to expand in. But then because of these repeats, we decided that e to the minus beta v minus 1 was a good variable to expand in. So a similar thing happens here. And for the Ising model, it is a very natural thing to recast this series in a slightly different way. You see that the contribution of each bond to the partition function, and by a bond I mean a pair of neighboring sites, is a factor e to the K sigma i sigma j. OK? Now, since we are dealing with binary variables, this product, sigma i sigma j, can only take two values, so the exponent is either plus K or minus K depending on whether things are aligned or misaligned. So I can indicate the binary nature of this in the following fashion. I can write this as e to the K plus e to the minus K over 2 plus sigma i sigma j e to the K minus e to the minus K over 2. So that when I'm dealing with sigma sigma being plus, I add those two factors. e to the minus K's disappear. I will get e to the K. When I'm dealing with this thing to the minus, the e to the K's disappear, and I will get e to the minus K. So it's a correct rewriting of that factor. The first term you, of course, recognize as the hyperbolic cosine of K, the second one as the hyperbolic sine of K. And so I can write the whole thing as hyperbolic cosine of K times 1 plus hyperbolic tangent of K sigma i sigma j. OK? So this tanh is really the same thing as here.
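The rewriting of the bond factor is a two-line numerical check, since sigma_i sigma_j only takes the values +1 and -1 (the coupling value below is an arbitrary choice of mine):

```python
import math

K = 0.7                                   # any coupling; the identity is exact
for s in (+1, -1):                        # the two values of sigma_i sigma_j
    lhs = math.exp(K * s)                 # original bond factor
    rhs = math.cosh(K) * (1 + math.tanh(K) * s)   # cosh K (1 + t sigma sigma)
    print(s, lhs, rhs)                    # the two columns agree
```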
It's the high-temperature expansion variable. As K goes to 0 at high temperature, tanh K also goes to 0. And it turns out that a much nicer variable to expand is this quantity tanh K. And so that I don't have to repeat it throughout, I will give it the symbol t. So small t stands not for reduced temperature anymore, but for hyperbolic tanh of K. So my partition function now, Z-- maybe I'll go to another page. So my partition function is a sum over the 2 to the N binary variables e to the K sigma i sigma j sum over all bonds. I can write that as a product of these exponential factors over [INAUDIBLE]. Each of these exponential factors I can write as cosh K 1 plus t sigma i sigma j. All the factors of cosh K I will take to the outside. So I will get cosh K raised to the power of the number of bonds that I have in my lattice because each bond will contribute one of these factors. And then I have this sum over sigma i product over bonds. So this is the product of 1 plus t factors. So for each-- maybe I'll do it over here. So for each i, j, I have to pick one of these factors. I can either pick 1, nothing, or I can pick a factor of t sigma i sigma j. OK? So the first term in this series-- since it's a series in powers of t, the first term in the series is to pick 1 everywhere. The next term is to pick one factor at some point. But then when I pick that factor, that term in the series, I have to sum over sigma i. And when I sum over sigma i, since this sigma i can be plus or minus with equal probability, it will give me 0. OK? So I cannot leave this sigma i by itself. So maybe I will pick another higher-order term in the series that has a t, a sigma i that would make this into a sigma i squared, and then I will have a sigma K here. OK? Now, note it was kind of similar to what I was doing here. But here I could pick as many bonds as I like on as many factors of K. Now what has happened here is, effectively, I have only two choices. 
One choice is having gone many, many times, so summing all of the terms that had 2, 4, et cetera. That's what gives you the cosh K. Or including something like this, sum of 1, 3, 5, et cetera. That's what gives you tanh K. But the good thing is that it's really now a binary choice. You either draw one line, or you don't draw anything. OK? So again, your first choice is to somehow complete the series by drawing something like this. And quite generically-- OK, so after that has happened, then this is sigma i squared. This is sigma j squared. These are all-- they have gone to 1. And then you do the sum over sigma i, you will get a factor of 2. So the answer is going to be 2 to the number of sites, N, cosh K to the power of the number of bonds. And then I would have a series, which is the sum over all graphs with even number of bonds per site like here. So I either have 0 bond going here, or I can have two bonds. I could very well have something like this, four bonds. That doesn't violate anything. So all I need to ensure in order that sigma i does not average to 0 is that I have an even number per site. And then the contribution of the graph is t to the number of bonds in the graph. And at this stage when I'm calculating a partition function, there is no reason why I could not have disconnected graphs. For the partition function, there is no problem. Presumably, when I take the log, the disconnected pieces will go away. OK? Yes? AUDIENCE: Where does the 2 to the N come from again? PROFESSOR: OK. So at each site, I have to sum over sigma i. So sigma i is either minus 1 or plus 1. What I'm doing is sum over sigma i sigma i to some power. And this is either going to give me 2 or 0 depending on whether P is even or P is odd. All right? OK? So you can try to calculate general terms for this series. Let's say we go to hypercubic lattice, which is what we were doing before. The number of bonds per site is d. 
So this, for the hypercubic lattice, the number of bonds will be dN. You could do this calculation for a triangular lattice or an FCC lattice. You don't have to stick with these hypercubic lattices. The first diagram that you can create is always the square. OK? And in d dimensions, one leg has a choice of d directions. The next one would be d minus 1. So this would be d d minus 1 over 2 t to the fourth. But you could start it from any site on the lattice, so you would have a factor of N as well. The next term that you would have in the series is something that involves, let's say, six bonds. So the next term will be N t to the 6. And I think I sometimes convince myself that the numerical factor was something like this, but it doesn't matter. You could calculate it out of this. Yes? AUDIENCE: What if we have diagrams of order of t squared, just [INAUDIBLE] there and back? PROFESSOR: OK. Where would I get the t squared from here? OK? So from this bond, I have this factor, 1 plus t sigma i sigma j. There is no t squared. I would have had K squared, K to the fourth, et cetera. But I re-summed all of them into hyperbolic cosine and the hyperbolic sine. So this-- AUDIENCE: So [INAUDIBLE] taking this product along all the bonds, you can kind of go along the same bond. PROFESSOR: We already summed all of those things together into this factor t. AUDIENCE: OK. PROFESSOR: Yeah? OK? Yeah, it's good. And that's why this tanh is such a nice variable. OK? So actually, the nicer series to work with in terms of trying to extract exponents is this high-temperature series in terms of these new diagrams, et cetera. But I'm not going to be doing diagrammatics. What I will be using this high-temperature series for is the following. One, to show that in a few minutes we can use it to exactly solve the one-dimensional Ising model and gain a physical understanding of what's happening, and two, to re-derive the Gaussian model.
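The leading graph, the elementary square, can be checked against a brute-force partition function. The sketch below is my own check, not from the lecture: I use a small square lattice with open (free) boundaries, so that no graph can wind around the system, and arbitrary values L = 4 and K = 0.1. The graph expansion then predicts Z / [2^N (cosh K)^(number of bonds)] = 1 + (number of elementary squares) t^4 plus higher loops.

```python
import itertools
import math

L, K = 4, 0.1                  # small open square lattice, high temperature
N = L * L
t = math.tanh(K)

def idx(i, j):
    return i * L + j

# Open boundaries: only interior right and down neighbors.
bonds = [(idx(i, j), idx(i + 1, j)) for i in range(L - 1) for j in range(L)] \
      + [(idx(i, j), idx(i, j + 1)) for i in range(L) for j in range(L - 1)]

Z = sum(math.exp(K * sum(s[a] * s[b] for a, b in bonds))
        for s in itertools.product((-1, 1), repeat=N))

# Graph expansion: Z = 2^N (cosh K)^{#bonds} [1 + (#squares) t^4 + ...],
# where the smallest closed graphs are the (L-1)^2 elementary squares.
ratio = Z / (2 ** N * math.cosh(K) ** len(bonds))
n_squares = (L - 1) ** 2
print(ratio - 1, n_squares * t ** 4)   # both about 9e-4; difference O(t^6)
```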
Turns out that there is a close connection between all of these loops that you can draw on a lattice through some kind of a path integral way of thinking about it with the Gaussian model. And that we actually will use as a stepping stone towards where we are really headed, which is the exact solution of the 2D Ising model. OK? So the 1D Ising model. And actually, the method is sufficiently powerful that we can compare and contrast two cases, one when you have an open chain. So this is a system that is composed of sites 1, 2, 3, 4, N minus 1, N. On each one of them I have an Ising variable. And if I follow my nose, Z is 2 to the number of sites cosh K to the power of the number of bonds. Actually, clearly with open systems, the number of bonds is 1 less than the number of sites. So I can be extremely precise. It is N minus 1. And then I have to draw all graphs that I can on this lattice that have an even number of bonds emanating from each site. Find one. [LAUGHTER] OK. Since you won't find one, the 1 is all that stands. So you can take the log of that. You have the free energy, whatever you like. The 1 is essentially the zeroth order term in this series. Yes? That was the question. OK. All right? You can use the same thing, same technology, to calculate spin-spin correlation. So I pick spins m and n on this chain. Let's say this is spin m here, and somewhere here I put spin n. And I want to know the average of that quantity. What am I supposed to do? I'm supposed to sum over all configurations with the weight e to the K sigma i sigma i plus 1, product over all-- well, actually, we can be general with this. Let's call it a product over all bonds, which, in this case, are near neighbors, of e to the K sigma i sigma j. That weight I have to multiply by sigma m sigma n. And then I have to divide by the partition function so that this is appropriately weighted. OK? So I can do precisely the same decomposition over here. So I will have 2 to the N cosh K to the number of bonds.
In fact, this I can do in any dimensions. It's not something that I would have only in one dimension. And the partition function, you have seen, is the sum over all graphs g of t to the number of bonds in the graph. Now I can do the same kind of expansion that I did over here. If I multiply with an additional sigma m sigma n, it is just like I have already a sigma m and a sigma n somewhere. And when I sum over sigmas, I have to make sure that these things don't average to 0. So what I need to do is to draw graphs that have an even number at all sites and an odd number at these two sites. All right? So this is a sum over g, with even numbers except on m and n, where you have to have an odd number, of t to the number of bonds-- a subset of graphs. OK? So if I do this for the 1D model, sigma m sigma n, I have to draw graphs that have, essentially, an odd number. Essentially, sigma m and sigma n should be the origins or ends of lines. And clearly, I can draw a graph that connects these two. And so what I will get is t to the number of steps that I have to make between the two of them. The rest of it is going to be the same, 2 to the N cosh K to the N minus 1 in the numerator and denominator, they cancel each other. OK? So you can see explicitly that this is a function that decays, since t is less than 1, as I go further and further out. And that it is a pure exponential. So you remember that we said in general you would have a power law in front that would have an exponent [? theta. ?] And when we did the RG, I told you, well, [? theta ?] came out to be 1 such that you have pure exponential. Well, here is the proof. And furthermore, from this we see that the correlation length xi is minus 1 over log of the hyperbolic tangent of K. And if you expand that, you will find that as K goes to infinity, it has precisely that e to the 2K divergence that we had calculated. So you can see that calculating things using this graphical method is very simple.
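This one-dimensional result is exact, and a brute-force sum over a short chain confirms it. A sketch of my own (the chain length, coupling, and spin positions are arbitrary choices):

```python
import itertools
import math

Nc, K = 8, 0.8                # short open Ising chain
t = math.tanh(K)
m, n = 2, 6                   # the two spins to correlate

Z = corr = 0.0
for s in itertools.product((-1, 1), repeat=Nc):
    w = math.exp(K * sum(s[i] * s[i + 1] for i in range(Nc - 1)))
    Z += w
    corr += s[m] * s[n] * w
corr /= Z

# Graphical result: a single line of bonds joining m to n gives t^(n-m),
# and the correlation length is xi = -1 / log(tanh K).
print(corr, t ** (n - m))     # identical up to rounding
xi = -1 / math.log(t)
print(xi)
```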
And essentially, the interpretation of t is that it is the fidelity with which information goes from one site to the next site. And so the further away you go every time, you lose a factor of t in how sure you are about the nature of where you started with. And so as you go further, you have this exponential decay. OK? And the other thing that we can do at no cost is periodic boundary conditions. So we take, again, our spins 1, 2, 3, except that we then bend it such that the last one comes and gets connected to the first one. OK? So what's the partition function in this case? It is 2 to the N. The number of bonds, in this case, is exactly the same as the number of sites. It's one more than before, so I get cosh K raised to the power of N. And then, is it just 1? There is one graph that goes all the way around, so I have 1 plus t to the N. So this is an exponentially small correction as we go further and further out. You can kind of regard that as some finite-size interaction. I can similarly calculate sigma m sigma n, the expectation value. OK? And in the denominator from the partition function, I have this factor of 1 plus t to the N. In the numerator, again, you should be able to see two graphs. We can either connect this way or we can connect that way. So you'll have t to the power of n minus m, but you don't know which arc is the smaller one, so you'll have to also include the other one, t to the power of N minus the separation n minus m. OK? So again, if we take N to infinity and these two sufficiently close, you can see that all of these finite-size effects, boundary effects, et cetera disappear. But this is, again, a toy model in which to think about what the effects of boundaries are, et cetera. You can see how nicely this graphical method can enable you to calculate things very rapidly. We'll see that, again, it provides the right tools conceptually to think about what happens in higher dimensions. |
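The periodic-chain formulas can be verified the same way as the open chain. Again a sketch with arbitrary ring size, coupling, and spin positions of my own choosing:

```python
import itertools
import math

Nc, K = 8, 0.8                # Ising ring
t = math.tanh(K)
m, n = 1, 4

Z = corr = 0.0
for s in itertools.product((-1, 1), repeat=Nc):
    w = math.exp(K * sum(s[i] * s[(i + 1) % Nc] for i in range(Nc)))
    Z += w
    corr += s[m] * s[n] * w
corr /= Z

# Graph results for the ring: the empty graph plus the loop all the way
# around for Z, and the two arcs joining m and n for the correlation.
Z_pred = 2 ** Nc * math.cosh(K) ** Nc * (1 + t ** Nc)
corr_pred = (t ** (n - m) + t ** (Nc - (n - m))) / (1 + t ** Nc)
print(Z / Z_pred, corr / corr_pred)   # both equal to 1 up to rounding
```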
MIT_8334_Statistical_Mechanics_II_Spring_2014 | 2_Lec_1_continued_The_LandauGinzburg_Approach_Part_1.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, let's start, so last lecture, we were talking about elasticity as an example of a field theory-- statistical field theory. And I ended the class by not giving a very good answer for a question that was asked. So let's go back and revisit it. So the idea that we had was that maybe we have some kind of a lattice. And we make a distortion through a set of vectors in however many dimensional space we are. And in the continuum format, we basically regarded this u as being an average over some neighborhood. But before going to do that, we said that the energy cost of the distortion at the quadratic level-- of course, we can expand to go to a higher level-- was the sum over all pairs of positions, all pairs of components, alpha and beta running from 1 to whatever the dimensionality of space is, of u alpha at r and u beta at r prime. The components alpha and beta of this vector u at these two different locations-- let's say r and r prime here. And then there was some kind of an object that correlated the changes here and here, maybe obtained as a second derivative of a potential energy as a function of the entirety of the coordinates. And this in principle is a general function of all of these distortions; it depends on all pairs of variables. But then we said that because we are dealing with a lattice, and one pair of coordinates in the lattice is the same as another pair of coordinates that have the same spatial orientations, this function merely is a function of the separation r minus r prime, which of course can be a vector in this lattice.
And the fact that it has this form then allows us to simplify this quadratic form, rather than being a sum over N squared pairs, if you like, into a sum over one coordinate, which is the wave vector obtained by Fourier transform. So once we Fourier transform, that V can be written as a sum over just one set of k vectors, of course appropriately discretized depending on the overall size of the system and confined to a Brillouin zone that is determined by the type of wavelengths that the lattice structure allows. And then this becomes u alpha tilde, the Fourier transform evaluated at k, times the one at the other point, which is minus k-- so if I write it again, it is really u tilde at k times u tilde at minus k. And this entity, once we Fourier transform it, becomes a function of the wave number k. Now, there is one statement that is correct, which is that if I were to take the whole lattice and move it, then essentially there would be no cost. And moving the entire lattice only changes the Fourier component that corresponds to k equals to 0-- translation of everything. So you know for sure that this K tilde alpha beta at k equals to 0 has to be 0. Now, we did a little bit more than that, then. We said, let's look not only at k equals to 0, but at small k. And then for small k, we are allowed to make an expansion of this. And this is the expansion in k squared, k to the fourth, et cetera, that we hope to terminate at low values of k. Now, that itself is an assumption. That's part of the locality that I stated-- that I can make an expansion as a function of k. After all, let's say k to the 1/2 is a function that goes to 0 at k equals to 0, but it is non-analytic. It turns out that in order to generate that kind of non-analyticity here, this function in real space as a function of separation should decay very slowly. Something like a Coulomb interaction that is very long range will give you that kind of singularity.
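Schematically, and hedging on prefactor conventions, the diagonalization described here has the structure (a sketch of the lecture's formulas, not a new result):

```latex
V \;=\; \frac{1}{2}\sum_{\mathbf r,\mathbf r'} u_\alpha(\mathbf r)\,
        K_{\alpha\beta}(\mathbf r-\mathbf r')\, u_\beta(\mathbf r')
  \;=\; \frac{1}{2}\sum_{\mathbf k} \tilde K_{\alpha\beta}(\mathbf k)\,
        \tilde u_\alpha(\mathbf k)\, \tilde u_\beta(-\mathbf k),
\qquad \tilde K_{\alpha\beta}(\mathbf 0)=0 .
```

Translational invariance of the lattice is what collapses the double sum over r and r' into a single sum over k, and invariance under uniform translations is what forces the kernel to vanish at k = 0.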
But things that are really just interacting within some neighborhood-- or even if they are long range, if the interaction falls off sufficiently rapidly-- and that's one of the things that we will discuss later on, precisely what kind of potentials allow you to make an expansion that is analytic in powers of k and hence consistent with this idea of locality. Let's also ignore cases where you don't have inversion symmetry, so we don't need to start worrying about anything that is linear in k. So the first thing that can appear is quadratic in k. Now, when we have things that are quadratic in k, I have to make an object that has two indices-- alpha and beta. And, well, how can I do that? Well, one of the kinds of things that I mentioned last time, we could have a term that is k squared times a delta alpha beta. If I insert that over here, what do I get? The k squared delta alpha beta will give me k squared times u tilde squared. So that's the term that in an isotropic solid is identified with the shear modulus. Again, if the system is isotropic, I don't know any distance apart from any other distance, or any direction from any other direction. I can still make another tensor with these indices alpha and beta by multiplying k alpha and k beta. If I then multiply this with this, I will get k dot u-- the dot product-- squared. So that's the other term that in an isotropic solid is identified with mu plus lambda over 2. Now, as long as any direction for k in this lattice is the same, those are really the only terms that you can write down. But suppose that your system was anisotropic, such as for example a rectangular lattice. Or let's imagine we put a whole bunch of coins here to emphasize that rather than something like a square lattice, we have something that is very rectangular. In this case, you can see that the x and y directions are not the same. And there is no reason why this k squared here could not have been separately kx squared plus some other number times ky squared.
And hence, the type of elasticity that you would have would no longer depend on just two numbers. But, say, over here with three-- in fact, there would be more. And precisely how many independent elastic constants there are depends on the point group symmetry-- the way that your lattice is structured over here. But that's another story that I don't want to get into. Last time, I was rather careless and I said that just rotational symmetry will give you this k squared delta alpha beta and k alpha k beta. I have to be more precise about that. So this is what happens. Now, given that you have an isotropic material, the conclusion is that the potential energy will have two types of terms. There is the type of term that goes with mu k squared u tilde squared. And then there's a type of term that goes with mu plus lambda over 2, times k dot u tilde squared. Now you can see that immediately there is a distinction between two types of distortion. Because if I select my wave vector k here, any distortion that is orthogonal to that will not get a contribution from this second term. And so the cost of those so-called transverse distortions only comes from mu. Whereas if I make a distortion that is along k, it will get contributions from both of those two terms. And hence, the cost of those longitudinal distortions would be different. So when we go back to our story of how the frequency depends on k, you can see that when we go to the small k limit in an isotropic material, you've established that there will be a longitudinal mode with some kind of longitudinal sound velocity. And in three dimensions, there will be two branches of the transverse mode that fall on top of each other. As I said, once you go further to larger k, then you can include all kinds of other things. In fact, the types of things that you can include at fourth order, sixth order, et cetera are again constrained somewhat by the symmetry. But their number proliferates.
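In equations, and hedging on the prefactor conventions (rho here is the mass density, which the lecture does not introduce explicitly), the two small-k branches of an isotropic solid are

```latex
\omega_T(\mathbf k) \simeq \sqrt{\frac{\mu}{\rho}}\;|\mathbf k|
\quad (d-1 \text{ branches}),
\qquad
\omega_L(\mathbf k) \simeq \sqrt{\frac{2\mu+\lambda}{\rho}}\;|\mathbf k|
\quad (1 \text{ branch}),
```

so both transverse and longitudinal sound are linear in k at small k, just with different velocities.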
And eventually you get all kinds of things that could potentially describe how these modes depend on k. And those things depend on details of your potential that we don't know too much about. And why was this relevant to what we were discussing? We said that basically, once you go to some temperature other than zero, you start putting energy into the modes. And how far in frequency you can go given your temperature-- typically, the maximum frequency at a particular temperature is of the order of kB T over h-bar. So if you are at a very high temperature, all of the vibrations are relevant. But if you go to low temperatures, eventually you hit the regime where only these long wavelength, low frequency modes are excited. And your heat capacity was really proportional to how many modes are excited. And from here, you can see that since omega goes like T, the maximum k that is excited-- whether for the longitudinal or the transverse branch-- has to be proportional to T. Larger values of k would correspond to frequencies that are not excited. So all of the modes that are down here are going to be excited. How many of them are there? They go from 0 up to this k max that is of the order of kB T divided by h-bar times a sound velocity. So you would say that the heat capacity, which is proportional to the number of oscillators that you have-- kB per oscillator-- goes like the number of oscillators, which is roughly volume times kB T over h-bar v, cubed if you are in three dimensions. You saw that the power was 1 in one dimension. So in general, it will be the dimension d. Of course, here I was kind of not very precise, because I really have to separate out the contribution of longitudinal modes and transverse modes, et cetera. So ultimately, the amplitude that I have will depend on a whole bunch of things. But the functional form is this universal T to the d.
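The counting argument can be sketched numerically. In the toy code below, the units are chosen so that the excitation condition v|k| < kB T / h-bar reads simply |k| < T, and the grid size L is an arbitrary choice; the number of excited modes on a d-dimensional k-grid then grows like T^d, which is where the T^d heat capacity comes from:

```python
import itertools
import math

# Mode-counting sketch: the number of excited modes, those with
# v|k| < k_B T / hbar, grows like T^d, and hence so does the heat
# capacity.  Units are chosen so the condition is simply |k| < T.
def n_excited(T, d, L=50):
    """Count grid modes k = 2*pi*n/L (n an integer vector) with 0 < |k| < T."""
    count = 0
    for n in itertools.product(range(-L // 2, L // 2), repeat=d):
        k2 = sum((2 * math.pi * ni / L) ** 2 for ni in n)
        if 0 < k2 < T * T:
            count += 1
    return count
```

Doubling T should roughly double the count in d = 1 and quadruple it in d = 2, up to discretization noise.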
So the lesson that we would like to get from this simple, well known example is that sometimes you get results that are independent in form for very different materials. I will use the term universality. And the origin of that universality is that we are dealing with phenomena that involve collective behavior. As k becomes small, you're dealing with large wavelengths encompassing the collective motion of lots of things that are vibrating together. And so this kind of statistical averaging over lots of things that are collectively working together allows a lot of the details to, in some sense, be washed out and gives you a universal form-- just like adding random variables will give you a Gaussian if you add sufficiently many of them together. It's that kind of phenomenon. Typically, we will be dealing with these kinds of relationships between quantities that have dimensions. Heat capacity has dimensions. Temperature has dimensions. So the amplitude will have to involve things that have dimensions that come from the material that you are dealing with. So the thing that is universal is typically the exponents that appear in these functional forms. So those are the kinds of things that we would like to capture in different contexts. Now, as I said, this was supposed to be just an illustration of the typical approach. The phenomenon that we will be grappling with for a large part of this course has to do with phase transitions. And again, this is a manifest result of having interactions among degrees of freedom that cause them to collectively behave differently from how a single individual would behave. And the example that we discussed in the previous course and is familiar is of course that of ice, water, steam. So here, we can look at things from two different perspectives. One perspective-- let's say we start by-- actually, let's start with the perspective in which I show you pressure and temperature.
And as a function of pressure and temperature, low temperatures and high pressures would correspond to having some kind of a-- actually, let me not incur the wrath of people who know the slopes of these curves precisely. So at low temperatures, you have ice. And then the more interesting part is that you have, at high temperatures and low pressures, gas. And at intermediate values, you have, of course, the liquid. And the other perspective to look at this is to look at isotherms of the system-- pressure versus volume. So basically, what I'm doing is I'm sitting at some temperature and calculating the behavior along that isotherm of how the pressure and volume of the system are related. When you are far out to the right, you'll have behavior that is reminiscent of an ideal gas: you have PV proportional to the number of particles times temperature. As you lower the temperature and you hit this critical isotherm at TC, the shape of the curve gets modified to something like this. And if you're looking at isotherms at an even lower temperature, you encounter a discontinuity. And that discontinuity is manifested in a coexistence interval between the liquid and gas. So isotherms for T less than TC have this characteristic form. Here, I have not bothered to go all the way down to the case of the solid, because our focus is going to be mostly on what happens at this critical point-- PC and TC-- where the distinction between liquid and gas disappears. So there is here the first case of a transition between different types of material, the solid line indicating a discontinuity in various thermodynamic properties-- the discontinuity here being, say, in the density as a function of pressure. Rather than having a nice curve, you have a discontinuity. In these curves, the isotherms are suddenly discontinuous.
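A standard way to illustrate such isotherms is the van der Waals equation of state (an illustration only; the lecture itself stays model-agnostic, and R, a, b below are in arbitrary units). Its critical point is where the critical isotherm develops an inflection with zero slope:

```python
# Illustrative van der Waals isotherms (a sketch, NOT the lecture's
# microscopic model):
#   P(V, T) = R*T/(V - b) - a/V**2
# The critical point, where the critical isotherm has an inflection
# with zero slope, is Vc = 3b, Tc = 8a/(27*R*b), Pc = a/(27*b**2).
R = 1.0

def pressure(V, T, a, b):
    """van der Waals pressure on the isotherm at temperature T."""
    return R * T / (V - b) - a / V**2

def critical_point(a, b):
    """Closed-form critical point of the van der Waals equation."""
    return 3 * b, 8 * a / (27 * R * b), a / (27 * b**2)
```

Below Tc the van der Waals isotherms become non-monotonic, which is the mean-field signature of the coexistence interval between liquid and gas.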
And the question that we posed last semester was that essentially, all the thermodynamic properties of the system I should be able to obtain through the partition function-- the log of the partition function-- which involves an integral, let's say, over all of the coordinates and momenta of some kind of energy. And the part of this energy involving the momenta is not particularly important, so let's just get rid of that. And the part involving the coordinates involves some kind of potential interaction between pairs of particles that is not that complicated. Maybe particles are slightly attracted to each other when they're close enough, and they have a hard core. But somehow, after I do this calculation-- a bunch of integrals, all of them perfectly well behaved, with no divergence as the range of integration goes to zero or infinity-- I get this discontinuity. And the question of how that appears is something that clearly is a consequence of interactions. If we didn't have interactions, we would have ideal gas behavior. And maybe this place is really a better place to go and try to figure out what's going on than any other place in the phase diagram. The reason for that is that in the vicinity of this point, we can see that the difference between liquid and gas is gradually disappearing. So in some sense, we have a small parameter-- there's some small difference that appears here. And so maybe the idea that we can start with some phase that we understand fully and then perturb it and see how the singularity appears is a good idea. I will justify for you why that's the case and maybe why we can even construct a statistical field theory here. But the reason it is also interesting is that there is a lot of experimental evidence as to why you should be doing this. We discussed the phase diagrams.
And we noticed that an interesting feature of these phase diagrams is that below TC, when the liquid and gas first manifest themselves as different phases, there is a coexistence interval. And this coexistence interval is bounded on the two sides by the gas and liquid volumes, or alternatively densities. Now, the interesting experimental fact is that when you go and observe the shape of this curve for a whole bunch of different gases-- here, you have neon, argon, krypton, xenon, oxygen, carbon dioxide, methane, things that are very different-- and scale them appropriately so that the maximum of the vertical axis comes to one-- you divide P by PC-- and you appropriately normalize the horizontal axis so the maximum is at 1, you see that all of the curves you get from very, very different systems, after you just do this simple scaling, fall right on top of each other. Now, clearly something like carbon dioxide and neon have very different interatomic potentials that I would have to put into this calculation. Yet despite that, there has emerged some kind of a universal law. And so I should be able to describe that. Why is that happening? Now I'll try to convince you that what I need to do is similar to what I did there. I need to construct a statistical field theory. But we said that statistical field theories rely on averaging over many things. So why is that justified in this context? Well, there is a phenomenon called critical opalescence. Let's look at it here. Also, this serves to show something that people sometimes don't believe. Since our experience comes from low pressures, where we cool a gas and it goes to a liquid, or heat the liquid and it goes to a gas, and we know that these are different things, people find it hard to believe that if I repeat the same thing at high pressure, there is no difference between liquid and gas. They are really the same thing. And so this is an experiment that is done going more or less around here, where the critical point is.
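The data collapse has a simple mean-field counterpart (again an illustration with the van der Waals equation, not a statement about the real gases in the figure): once pressure, volume, and temperature are measured in units of their critical values, every van der Waals gas obeys the same reduced equation of state, with the material constants a and b dropping out entirely:

```python
# Law of corresponding states for the van der Waals fluid (an
# illustration, not the lecture's model).  In reduced variables
# p = P/Pc, v = V/Vc, t = T/Tc every vdW gas obeys the same equation,
#   (p + 3/v**2) * (3*v - 1) = 8*t,
# whatever its material constants a and b are.
R = 1.0

def reduced_pressure(v, t, a, b):
    """P/Pc at volume v*Vc and temperature t*Tc for vdW constants a, b."""
    Vc, Tc, Pc = 3 * b, 8 * a / (27 * R * b), a / (27 * b**2)
    V, T = v * Vc, t * Tc
    P = R * T / (V - b) - a / V**2
    return P / Pc
```

Two fluids with very different constants give identical reduced isotherms, which is a mean-field caricature of the experimental collapse described above.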
And you start initially on the low temperature side, where you can see that at the bottom of your vial, you have a liquid, and a meniscus separates it nicely from a gas. As you heat it up, you approach the critical temperature. Well, the liquid expands and the meniscus keeps going up. And once you hit TC and beyond-- well, which one is the liquid and which one's the gas? You can't tell the difference anymore, right? There is a variation in density-- because of gravity, there's more density down here than up there. OK, now what happens when we start to cool this? Ah, what happened? That's called critical opalescence. You can't see through that. And why? Because there are all of these fluctuations that you can now see. Right at TC, when the thing became black, there were so many fluctuations covering so many different length scales that light could not get through. But now, gradually, the fluctuations-- which exist both in the liquid and in the gas-- are becoming smaller, but still quite visible. So clearly, despite the fact that whatever atoms and molecules you have over here have very short range interactions, somehow in the vicinity of this point, they decided to move together and have these collective fluctuations. So that's why we should be able to do a statistical field theory. And that's why there is a hope that once we have done that-- because the important fluctuations are covering such large numbers of atoms or molecules-- we should be able to explain what's going on over here. So that's the task that we set ourselves. Anything about this before I close the video? OK. Yes? AUDIENCE: Are there gases that, when rescaled, still don't collapse onto that? PROFESSOR: No. No. In the vicinity-- so maybe the range over which things collapse on top of each other is small. Maybe it is so small that you have to do things at low pressure differences in order to see that. But as far as we know, if you have sufficiently high resolution, everything will collapse on top of that [INAUDIBLE].
And what is more interesting is that this is not confined to gases. If I had done an experiment that involved mixing some protein that has a dense phase and a dilute phase, and I went to the point where there is a separation between dense and dilute and I plotted that, that would also fall exactly on top of this [INAUDIBLE]. So it's not even just gases. It's everything. And so-- yes? AUDIENCE: I heard there are systems where some [INAUDIBLE] changes? PROFESSOR: Yes. And we will discuss those too. AUDIENCE: And the exponents will change? PROFESSOR: Yes. AUDIENCE: So it's [INAUDIBLE]. PROFESSOR: You may anticipate from that T to the d over there that the dimension is an important point, yes. OK? Anything else? All right. Rather than construct this statistical field theory for the case of the liquid gas system, I'm going to do it for the case of the ferromagnet-- first, to emphasize the fact that this set of results spans much more than simple liquid gas phenomena-- a lot of different things-- and secondly, because writing it for the case of the ferromagnet is much simpler because of the inherent symmetries that I will describe to you shortly. So what is the phenomenon that I would like to describe? What's the phase transition in the case of a ferromagnet, like a piece of iron? One axis that is always of interest to us is temperature. And if I have a piece of iron or nickel or some other material, out here at high temperatures, it is a paramagnet. And there is a critical temperature TC below which it becomes a ferromagnet, which means that it has some permanent magnetization. And actually, the reason I drew the second axis is that a very nice way to examine the discontinuities is to put on an external magnetic field and see what happens.
Because if you put on an external magnetic field-- like you put your piece of magnet inside some [INAUDIBLE] loop so that you have field in one direction or the other direction-- then as you change the sign of the field on the high temperature side, what you'll find is the following. If I plot where the magnetization is pointing-- well, if you put a magnetic field on a paramagnet, it also magnetizes. It does pull in the direction of the field. The amount of magnetization that you have in the system when you are on the high temperature side looks something like this. At very high fields, all of your spins are aligned with the field. As you lower the field, because of entropy and thermal fluctuations, they start to move around. They have a lot of fluctuation when you're in the phase that is a paramagnet. And then when the field goes through 0, the magnetization reverses itself. And you will get a structure such as this. So this is the behavior of the magnetization as a function of the field if you go along a path such as this, which corresponds to the paramagnet. Now, what happens if I do the same thing, but along a lower route, such as here? So again, when you are out here at high magnetic field, not much happens. But when you hit 0, then you have a piece of magnet. It has some magnetization. So basically, it goes to some particular value. There is hysteresis. So let's imagine that you start with a system down here and then reduce the magnetic field. And you would be getting a curve such as this. So in some sense, right at H equals to 0, when you are at T less than TC, there is a discontinuity. You don't know whether you are on one side or the other side. And in fact, these curves are really the same as the isotherms that we had for the liquid gas system if I were to turn them around by 90 degrees.
The paramagnetic curve looks very much, topologically, like what we have for the isotherm at high temperatures, whereas the discontinuity that we have for the magnetization of the ferromagnet looks like the isotherm that we would have at low temperatures. And indeed, separating these two, there will be some kind of a critical isotherm. So if I were to go exactly down here, you see what I would get is a curve that is kind of like this-- it comes and hugs the vertical axis and then does that-- which is, in the sense that we were discussing before, identical to the curve that you have for the critical isotherm of the liquid gas coexistence. What do I mean? I could do the same type of collapse of data that I showed you before for the coexistence curve. I can do that for these inverted coexistence curves. I can do the same collapse for this critical isotherm of the ferromagnet. And it would fall on top of the critical isotherm that I would have for neon, argon, krypton, anything else. So that is that kind of overall universality. Now-- yes? AUDIENCE: In this [INAUDIBLE], why didn't you draw the full loop of hysteresis? PROFESSOR: Because I'm interested at this point in equilibrium phenomena. Hysteresis is a non-equilibrium thing. So depending on how rapidly you cool things or not, you will get larger hysteresis curves. AUDIENCE: So what would it have been if we had high [INAUDIBLE] fields that were slowly reduced to 0? And then go a little bit beyond and then just stay in that state for a long time? Will the-- PROFESSOR: Yes. If you wait a sufficiently long time, which is the definition of equilibrium, you would follow the curve that I have drawn. The size of the hysteresis loop will go down with the amount of time that you wait-- except that in order to see this, you may have to wait longer than the age of the universe, I don't know. But in principle, that's what's going to happen. AUDIENCE: I have a question here.
PROFESSOR: Yes. AUDIENCE: When you create the flat line for temperatures below the critical one-- PROFESSOR: Yes. AUDIENCE: --are you essentially doing sort of a Maxwell construction again? Or is that too much even for this? PROFESSOR: No. I mean, I haven't done any theoretical work. At this point, I'm just giving you observations. Once we start to do theory, there is a type of theory for the magnet that is the analog of Maxwell's construction that you would do for the case of the liquid gas. Indeed, you will have that on the first problem set, so you can figure it out for yourself. Anything else? OK. So I have emphasized that functional forms are the things that are universal. And in the context of the magnet, it is clearer how to characterize these functional forms. So one of the things that we have over here is that I can plot the true equilibrium magnetization as a function of temperature for fields that are 0-- so H equals to 0. If I go along the zero field line, then all the way down to TC, the magnetization at high temperatures is of course 0. You're dealing with a paramagnet; by definition, it has no magnetization. When you go below TC, you'll have a system that is spontaneously magnetized. Again, exactly what that means needs a little bit of clarification, because it does depend on the direction of your field. The magnitude of it is well defined; if I go from H to minus H, the sign of it could potentially change. But the magnitude has a form such as this. It is again very similar, if you like, to half of that coexistence curve that we had for the liquid gas. And we look at the behavior that you have in this vicinity, and we find that it is well described by a power law. So we say that M, as T goes to TC for H equals to 0, is proportional to TC minus T to an exponent that is given the symbol beta. Now, sometimes you write this as TC minus T over TC to make it dimensionless. It doesn't matter; it's the same exponent.
Sometimes, in order not to have to write all of this again and again, you call this small t. It's just the reduced temperature measured from the critical point. And so basically, what we have is that t to the beta characterizes the singularity of the coexistence line, the magnetization [INAUDIBLE]. Now, rather than sitting at H equals to 0, I could have sat exactly at T equals to TC and varied H. So essentially, this is the curve-- I say that T equals to TC. This is the blue curve. We see that it has this characteristic form, that it also comes to 0, but not linearly. So there is an exponent that characterizes that, and it is typically written as 1 over delta. You can see the analog if I go back to the critical isotherm of the liquid gas system. For the liquid gas system, I would conclude that, say, delta P goes like delta V to the power delta-- or, the other way around, delta V goes like delta P to the power 1 over delta. So essentially, the shapes of these two things are very much related, characterized by these exponents. What else? Another thing that we can certainly measure for a magnet is the susceptibility. So chi-- let's measure it as a function of temperature, but for field equal to 0. So basically, I sit at some temperature, put on a small magnetic field, and see how the magnetization changes. You can see that as long as I am above TC, in the paramagnetic phase, there is a linear relationship here: M is proportional to H. And the proportionality constant is the susceptibility. As I get closer to TC, that susceptibility diverges. So chi diverges as t to the minus gamma-- with an exponent that is indicated by gamma. Actually, let me be a little bit more precise. What I have said is that if I plot the susceptibility as a function of temperature, at TC it diverges. I could also calculate something similar below TC. Below TC, it is true that I already have some magnetization. But if I put on a magnetic field, the magnetization will go up.
And I can define the slope of that as being the susceptibility below TC. And again, as I approach TC from below, that susceptibility also diverges. So there is a susceptibility that comes like this. So you would say, OK, there's a susceptibility above and there is a susceptibility below, and maybe they diverge with different exponents. At this point, we don't know. We will show shortly that indeed the two gammas are really the same, and using one exponent is sufficient for that story. The analog of the susceptibility in the case of the liquid gas would be the compressibility. And the compressibility is related to the inverse of the slopes of these PV curves. We know that the sign of the slope has to be negative for stability. But right on the critical isotherm, you see that this slope goes to 0, or its inverse diverges. So, again, the same exponent gamma as for the magnet also describes that divergence. Susceptibility is an example of a response function. You perturb the system and see how it responds. Another response function that we've seen is the heat capacity, where you put heat into the system-- or you try to change the temperature and see how the heat energy is modified. And again, we experimentally observe-- we already saw something like this when we were discussing the superfluid transition, the lambda transition-- that the heat capacity as a function of temperature at the transition temperature happens to diverge. And in principle, one can again look at the two sides of the transition and characterize them by divergences that are typically indicated by the exponent alpha. Again, you will see that there are reasons why there is only one exponent [INAUDIBLE]. So this is essentially the only part where there is some zoology involved. You have to remember and learn where the different exponents come from. The rest of it, I hope, is very logical-- but which one is alpha, which one's beta, which one's gamma.
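Collecting the definitions introduced so far, with t = (T - Tc)/Tc the reduced temperature (a summary of the lecture's conventions, written out for reference):

```latex
m(t,\,h=0)\ \sim\ |t|^{\beta}\ \ (t<0),\qquad
m(t=0,\,h)\ \sim\ h^{1/\delta},\qquad
\chi_{\pm}(t,\,h=0)\ \sim\ |t|^{-\gamma_{\pm}},\qquad
C_{\pm}(t,\,h=0)\ \sim\ |t|^{-\alpha_{\pm}} .
```

The plus and minus refer to approaching Tc from above or below; as stated in the lecture, it will turn out that the two gammas are equal, and likewise the two alphas.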
There are four or five things you should [INAUDIBLE]. OK? Now, the next thing that I want to do is to show you that of the two things that we looked at about the liquid gas transition, one of them in fact implies the other. So we said that essentially by continuity, when I look at the shape of this critical isotherm, it comes down with zero slope. Again, to continuously join the type of curve that I have for the magnetization at T greater than TC and at T less than TC, I should have a curve that comes and hugs the axis-- or has infinite susceptibility. I will show you that the infinite susceptibility does, in fact, imply that you must have collective behavior-- that this critical opalescence that we saw is inseparable from the fact that you have a diverging susceptibility. I'll show that in the case of the magnet, but it also applies, of course, to the critical opalescence. So let's do a little bit of thermodynamics. Imagine I have some kind of Hamiltonian that describes the interactions among the spins in my iron or any other magnet that I have. And if I were to calculate a partition function, what I would need to do is to trace over all degrees of freedom of this system. Now, for the case of the magnet, it is more convenient actually to look at the ensemble corresponding to this-- that is, fix the magnetic field and allow the magnetization to decide where it wants to be. So I really want to evaluate things in this ensemble, which, to be precise, is not the canonical ensemble, but the Gibbs ensemble. So this should be a Gibbs free energy; this should be a Gibbs partition function. But traditionally, most texts, including my notes, ignore that difference. Rather than calling the log of this G, we will call it F. But that doesn't make much difference. This clearly is a function of temperature and H. Now, if I wanted-- let's imagine that we always put the magnetic field in one direction.
So I don't have to worry for the time being about the vectorial aspect; we will go back to the vectorial aspect later on. So this is the net magnetization of the system-- if I have a piece of iron, it would be the magnetization of the entire piece. And the average of that magnetization, given temperature and field, et cetera, I can obtain by taking d log Z by d of beta H. Because when I do that, I go back inside the trace and take a derivative with respect to beta H. And I have a trace of M times e to the minus beta H plus beta H M, which is how the different configurations are weighted. And because of the log Z, in the derivative I have a 1 over Z that makes these properly normalized probabilities. So that's the standard story. Now if I were to take another derivative-- if I were to take a derivative of M with respect to H, which is what the susceptibility is, after all, the sensitivity of the system-- there is the derivative of the magnetization with respect to H. It is the same thing as beta times the derivative of M with respect to beta H, of course. So I have to go to this expression that I have on top and take another derivative with respect to beta H. The derivative can act on the beta H M that is in the numerator, and what that does is it essentially brings down another factor of M, and the Z is not touched. Or I leave the numerator as is and take a derivative of the denominator. And the derivative of Z I've already taken. So essentially, because it's in the denominator, I will get a minus sign; the derivative of 1 over Z will become minus 1 over Z squared, and it will also give me that same factor multiplying itself. So I have e to the minus beta H plus beta H M, and this whole thing gets squared. OK? So a very famous formula-- always true. Response functions such as susceptibilities are related to variances-- in this case, the variance of the net magnetization of the system; of course, the same is true for other response functions. Now, this doesn't seem to tell us very much.
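The relation chi = beta times the variance of M can be checked directly on any small system. Below is a sketch on a four-spin open Ising chain (N, K, and h are arbitrary illustrative choices, and beta has been absorbed into K and h, so beta = 1): the numerical derivative dM/dh agrees with the fluctuation formula.

```python
import itertools
import math

# Numerical check of the fluctuation-response relation derived above,
#   chi = dM/dh = beta * (<M^2> - <M>^2),
# on a small open Ising chain, with beta absorbed into K and h.
def moments(K, h, N=4):
    """Return <M> and <M^2> for an N-spin open chain by enumeration."""
    Z = M1 = M2 = 0.0
    for s in itertools.product([-1, 1], repeat=N):
        bond_sum = sum(s[i] * s[i + 1] for i in range(N - 1))
        M = sum(s)
        w = math.exp(K * bond_sum + h * M)
        Z += w
        M1 += M * w
        M2 += M * M * w
    return M1 / Z, M2 / Z

K, h, eps = 0.3, 0.2, 1e-5
chi_response = (moments(K, h + eps)[0] - moments(K, h - eps)[0]) / (2 * eps)
m, m2 = moments(K, h)
chi_fluct = m2 - m * m  # beta = 1
```

The same agreement holds for any response function and its conjugate fluctuation, which is the "very famous formula" of the lecture.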
But then I note the following that-- OK, so I have my magnet. What I have been asking is what is the net magnetization of a piece of magnet and its response to adding a magnetic field. If I want to think more microscopically-- if I want to go back in terms of what we saw for the liquid-gas system and the critical opalescence, where there were fluctuations in density all over the place, I expect that in reality also, at some particular instant, when I look at this, there will be fluctuations in magnetization from one location of the sample to another location of the sample. We're kind of building gradually in the direction of the statistical field. I kind of expect to have these long wavelength fluctuations. And that's where I want to go. But at this stage, I cannot even worry about that. I can say that I define in whatever way some kind of a local magnetization, so that the net magnetization, let's say in d dimensions, is the integrated version of the local magnetization. So the magnetization density I integrate to get the net magnetization. OK, I put that over here. I have two factors of big M. So I will get two factors of integration, over R and R prime. Let's stick to three dimensions. Doesn't matter. We can generalize it to d dimensions. And here I have M of R, M of R prime. And the average would give me the average of this M of R times M of R prime. Whereas in the second part, there is the product of two averages-- the average of M at R, the average of M at R prime. So it is the covariance. So I basically have to look at the M at some point in my sample, the M at some other point R prime in the sample, and calculate the covariance. Now, on average, this piece of iron doesn't know its left corner from its right corner. So just as in the case of the lattice, I expect once I do the averaging, on average, M of R, M of R prime should be only a function of R minus R prime.
All right, so I expect this covariance, which is indicated by a subscript c, is really only a function of the separation between the two points, OK? Which means that one of these integrations I can indeed perform. There's the integral with respect to the relative coordinate and the center of mass coordinate. Forgetting boundary dependence, the center of mass integration will give you a factor of V. There was a factor of beta that I forgot. There is an overall beta. So I have chi is beta-- and then I have the integral over the relative coordinate of the correlations between two spins in the system as a function of separation. Now, of course, like any other quantity, susceptibility is proportional to how much material you have. It's an extensive quantity. So when I say that the susceptibility diverges, I really mean that the susceptibility per unit volume-- the intensive part-- is the thing that will diverge. But the susceptibility per unit volume is the result of doing an integration of this covariance as a function of position. Let's see what we expect this covariance to do. So as a function of separation, if I look at the covariance between two spins-- OK, so note that I have already subtracted out the average. So whether you are in the ferromagnetic phase or in the paramagnetic phase, the statement is: how much are the fluctuations around the average at this point and this other point related? Well, when the two points come together, what I'm looking at is some kind of a variance of the randomness. Now, when I go further and further away, I expect that eventually, what this spin does as far as fluctuations around the average are concerned does not influence what is going on very far away. So I expect that as a function of going further and further away, this is something that will eventually die off and go to zero. Let's imagine that there is some kind of characteristic length scale beyond which the correlations have died off to 0.
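In symbols, the two steps just taken are (a sketch in the lecture's notation; the subscript c denotes the covariance, and translational invariance has been used):

```latex
\[
\chi = \beta \int d^{3}r \, d^{3}r' \;
\left\langle m(\mathbf{r})\, m(\mathbf{r}') \right\rangle_{c}
= \beta V \int d^{3}r \;
\left\langle m(\mathbf{r})\, m(\mathbf{0}) \right\rangle_{c},
\]

% so the intensive quantity is
\[
\frac{\chi}{V} = \beta \int d^{3}r \;
\left\langle m(\mathbf{r})\, m(\mathbf{0}) \right\rangle_{c},
\]
```

and a divergence of chi per unit volume must therefore come from this integral of the covariance over separation.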
And I will say that this integral is less than or equal to essentially looking at the volume over which there are correlations. The correlations within this volume would typically be less than sigma squared. But let's bound it by sigma squared times the volume that we're dealing with, which is of the order of 4 pi over 3, xi cubed. Coefficients here are not important. OK, so if I know that my response function per unit volume, like my compressibility or susceptibility-- if I know that the left hand side, as I go to Tc, is diverging and going to infinity-- the variance is bounded, I can't do anything with it; beta is bounded, I can't do anything with this. The only thing that I conclude, the only knob I have, is that xi must go to infinity. So chi over V going to infinity implies, and is implied by, this [INAUDIBLE]. So I was actually not quite truthful, because you need to learn one other exponent. How the correlation length diverges as a function of temperature is also important, and is indicated by an exponent that is called nu. So xi diverges as t to some exponent, and this exponent is nu. So this is what you were seeing when I was showing you the critical opalescence. The size of these fluctuations became so large that you couldn't see through the sample. All kinds of wavelengths were present in the system. And if I had presented things in this order, I could have given you that as a prediction. I should have shown you, say, the critical isotherm looks like this; therefore, if you look at it at Tc, you shouldn't be able to see through it. This is [INAUDIBLE]. All right, so those are the phenomena that we would like to now explain-- the phenomena being these critical exponents alpha, beta, gamma, and nu, et cetera, being universal and the same across very many different systems. AUDIENCE: Question. PROFESSOR: Yes.
AUDIENCE: Is [INAUDIBLE] in elementary fundamental field theory, like-- I mean, like, in the standard model or in quantum field theory where something like this happens where there's-- I mean, it's not a thermodynamical system. It's like elementary theory. And yet-- PROFESSOR: Yep. In some sense, the masses of the different particles are like correlation lengths. Because it is the mass of the particles like in the nuclear potential that describes the range of the interactions. So there are phenomena such as the case of [INAUDIBLE] or whatever where the range is infinite. So in some sense, those phenomena are sitting at the critical point. OK, so the name of the statistical field theory that we will construct is Landau Ginzburg and originally constructed by Landau in connection to super fluidity. But it can describe a lot of different phase transitions. Let's roughly introduce it in the context of these magnetic systems. So basically, I have my magnet. And again, in principle, I have all kinds of complicated degrees of freedom which are the spins that have quantum mechanical interactions, the exchange interactions, whatever the result of the behavior of electrons and their common interactions with ions is that eventually, something like nickel becomes a ferromagnet at low temperatures. Now, hopefully and actually precisely in the context that we are dealing with, I don't need to know any of all those details. I will just focus on the phenomena that there is a system that undergoes a transition between ferromagnetic and paramagnetic behavior and focus on calculating an appropriate partition function for the degrees of freedom that change their behavior ongoing through TC. So what I expect is that again, just like we saw for the case of liquid gas system, on one side for the magnet, there will be an average zero magnetization in the paramagnet. But there will be fluctuations of magnetization, presumably with long wavelengths. 
On the other side, there will be these fluctuations on top of some average magnetization that has formed in the system. And if I stick sufficiently close to Tc, I expect that that magnetization is small. That's where the exponent beta comes from. So if I go sufficiently close to Tc, these m's that I have at each location hopefully will not be very big. So maybe in the same sense that when I was doing elasticity by going to low temperature, I could look at small deformations, by sticking in the vicinity of Tc, I can look at small magnetization fluctuations. So what I want to do is to imagine that within my system that has lots and lots of electrons and other microscopic degrees of freedom, I can focus on regions that I average. And with each region that I average, I associate a magnetization as a function of position, which is a statistical field in the same sense that displacement was. So over here, I will write this quantity-- the analog of the u-- in fact using two different vectorial symbols, for reasons that will become obvious shortly, hopefully. Because I would like to have the possibility of having R describe systems that live in d dimensions, where d would be one if I'm dealing with a wire, d would be two if I'm dealing with a flat plane, d equals three in three-dimensional space. Maybe there's some relationship to relativistic field theories, where d would be four for space-time. But M I will allow to be something else that has components M1, M2, up to Mn. And we've already seen two cases where n is either 1 or 3. Clearly, if I'm thinking about the case of a ferromagnet, then there are three components of the magnetization. n has to be three. But then if I'm looking at the analogous thing for the liquid-gas system, the thing that distinguishes different locations is the density-- density fluctuations. And that's a scalar quantity. So that corresponds to n equals 1.
There's other cases-- when we are dealing with superfluidity, what we are dealing with is a quantum mechanical object that has a phase and an amplitude. It has a real component and an imaginary component. That corresponds to n equals 2. Again, n equals 1 would describe something like liquid-gas. n equals 3 would correspond to something like a magnet. And there's actually no relationship between n and d. n could be larger than 3. So imagine that you take a wire. So x is clearly one-dimensional. But along the wire, you put three-component spins. So n could still be three. So we can discuss a whole bunch of different quantities at the same time by generalizing our picture of the magnet to have n components and exist in d dimensions. OK? Now, the thing that I would like to construct is that I look at my system, and I characterize it by different configurations of this field M of R. And if I have many examples of the same system at the same temperature, I will have different realizations of that statistical field. I can assign the different realizations some kind of weight or probability, if you like. And what I would like to do is to have an idea of what the weight or probability of different configurations is. Just because I have a background in statistical mechanics-- in statistical mechanics, we are used to Boltzmann weights. So we take the log of the weights or probabilities and call them some kind of effective Hamiltonian. And this effective Hamiltonian is distinct from a true microscopic Hamiltonian that describes the system. It's just a way of describing what the logarithm of the probability for the different configurations is [INAUDIBLE]. You say, well, OK. How do you go and construct this? Well, I say, OK. Presumably, there is really some true microscopic Hamiltonian. And I can take whatever that microscopic Hamiltonian is that has all of my degrees of freedom.
And then for a particular configuration, I know what the probability of a true microscopic configuration is. Presumably, what I did was to obtain my M of R by somehow averaging over these true microscopic degrees of freedom. So the construction to go from the true microscopic probabilities to this effective weight is just a change of variables. I have to specify some configuration M of R in my system. That configuration of M of R will be consistent with a huge number of microscopic configurations. I know what the weight of each one of those microscopic configurations is. I sum over all of them, and I have this. Now, of course, if I could do that, I would immediately solve the full problem, and I wouldn't need to deal with this. Clearly, I can't do that. But I can guess what the eventual form of this is, in the same way that I guessed what the form of the statistical field theory for elasticity was-- just by looking at symmetries and things like that, OK? So in principle, W comes from a change of variables starting from e to the minus beta H; in practice, from symmetries and a variety of other statements and constraints that I will tell you about. Actually, let's keep this, and let's eliminate this. OK. So there is beta H, which is a function of M as a function of this vector x. The first thing that I will do is what I did for the case of elasticity, which is to write the answer as an integral in d dimensions of some kind of a density at location x. So this is the same locality type of constraint that we were discussing before, and it has some caveats associated with it. This is going to be a function of the variable at that location x. So that's the field. But I will also allow various derivatives to appear, so that I go beyond just a single [INAUDIBLE] that would give me independent things happening at each location, by allowing some kind of connections in a neighborhood. And if I go and recruit higher and higher order derivatives, naturally I would have more terms.
Somebody was asking me-- you were asking me last time-- in principle, if the system is something that varies from one position to another position, the weight function itself would depend on x. But if we assume that the system is uniform, then we can drop that x dependence. So to be precise, let's do this for the case of h equals 0, zero field. Because when you are at zero field in a magnet, the different directions of space are all the same to you. There's no reason to be pointing in one direction as opposed to another direction. So because of this symmetry under rotations, in this function you cannot have M-- actually, you couldn't have M for other reasons. Well, OK. You couldn't have M because it would break the directionality. But you could have something that is M squared. Again, to be precise, M squared is a sum, let's say over alpha running from 1 to n, of m alpha of x, m alpha of x. Now, if the different directions in space are the same, then I can't have a gradient appearing by itself, because it would pick a particular direction. In the same sense that for M squared, two M's have to be appearing together, if the different directions in space, forward and backward, are to be treated the same, I have to have gradients appearing together in powers of two. Before doing that, let's say also that sometimes I will write something that is M to the fourth. M to the fourth is really this quantity M dot M, squared. If I write M to the sixth, it is M dot M, cubed. And so forth-- so there's all kinds of terms such as these that can be appearing in this series. Gradients-- there's a term that I will write symbolically as gradient of M, squared. And by that, I mean we take a derivative of m alpha with respect to the i-th component x i, and then repeat the same thing, summed over both alpha and i. So that would be the gradient term. So basically, the x i appears twice; m alpha appears twice.
And you can go and construct higher and higher order terms and derivatives, again ensuring that each index, both on the side of the position x and the side of the fields M, is a repeated index, to respect the symmetries that are involved. There is actually maybe one other thing to think about, which is that again, like before, I assume that I can make an analytical expansion in M. And who says you are allowed to do an analytical expansion in M? Again, the key to that is the averaging that we have to do in the process. And I want you to think back to the central limit theorem. Let's imagine that there is a variable x where the probability of selecting that variable x is actually kind of singular. Maybe it is something like e to the minus x that has a discontinuity. Maybe it even has an integrable divergence at some point. Maybe it has additional delta functions. All kinds of-- it's a very complicated, singular type of a function. Now, I tell you-- add thousands of these variables together and tell me what the distribution of the sum is. And the distribution of the sum, because of the central limit theorem, you know has to look Gaussian. So if you then take its log, it has a nice analytical expansion. So the point is that again, part of the coarse graining that we did to get from whatever microscopic degrees of freedom we have to reaching the level of this effective field theory-- we added many variables together. And because of the central limit theorem, I have a lot of confidence that quite generically, I can make an analytical expansion such as this. But I have not proven this. So having done all of this, you would say that in order to describe this magnet, what we need is to evaluate the partition function, let's say as a function of temperature, which, if I had enormous power, I would do by the trace of e to the minus beta H microscopic.
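The central-limit argument above can be illustrated numerically. This is a small sketch, not part of the lecture: the exponential distribution here is just a stand-in for a "singular" single-variable weight, and the block size of 1000 is an arbitrary choice for the number of microscopic variables averaged into one coarse-grained cell.

```python
import random
import statistics

# Take a decidedly non-Gaussian single-variable distribution (an exponential,
# standing in for the singular microscopic weight), sum many independent
# copies, and check that the sum has become nearly symmetric (Gaussian-like).

def skewness(data):
    """Sample skewness: third central moment over cubed standard deviation."""
    m = statistics.fmean(data)
    s = statistics.pstdev(data)
    return sum((x - m) ** 3 for x in data) / (len(data) * s ** 3)

random.seed(0)
n_terms = 1000    # variables added together (the coarse-graining block size)
n_samples = 2000  # independent realizations of the block sum

single = [random.expovariate(1.0) for _ in range(n_samples)]
summed = [sum(random.expovariate(1.0) for _ in range(n_terms))
          for _ in range(n_samples)]

# The exponential itself is strongly skewed (skewness 2); the sum of 1000 of
# them has skewness of order 2/sqrt(1000), i.e. essentially Gaussian, so its
# log has a well-behaved quadratic (analytic) expansion around the peak.
print(skewness(single), skewness(summed))
```

The same logic is why the coarse-grained weight can be expanded analytically in M even when the microscopic weight is singular.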
But what I have done is I have subdivided different configurations of microscopic degrees of freedom into different configurations of this effective magnetization. And so essentially, that sum is the same as integrating over all configurations of this coarse-grained magnetization field. And the probabilities of these configurations of the magnetization field are the exponential of this minus beta H that I'm constructing on the basis of the principles that I told you-- principles being that, first of all, I have locality: I have an integral over x. And then I have to write terms that are consistent with the symmetry. The first term that we saw that is consistent with the symmetry is this M squared. And let's give a name to its coefficient. Just like the coefficient of elasticity that I wrote as mu over 2, let's put here a t over 2. There will be higher order terms. There will be M to the fourth. There will be M to the sixth. There will be a whole bunch of things in principle. There will be gradient types of terms. So there will be K over 2, gradient of M squared. There will be L over 2, Laplacian of M squared. There will be higher order terms that involve multiplying M's and various gradients of M, et cetera. And actually, in principle, there could be an overall constant. So there could be up here some overall constant of integration-- let's call it beta F0-- which depends on temperature and so forth, but has nothing to do with the ordering of these magnetization type of degrees of freedom. Actually, if I imagine that I go slightly away from the h equals 0 axis, just as I did before, I will add here a term minus h dot M, and evaluate this whole thing. So this is the Landau-Ginzburg theory. In principle, you have to put a lot of terms in the series. Our task will be to show you quite rigorously that just a few terms in the series are enough, just as in the case of the theory of vibrations for small enough vibrations.
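Collecting the terms just listed, the weight being constructed has the form below (a sketch; the dots stand for the higher-order terms mentioned on the board, and the name v for the coefficient of M to the sixth is mine, not the lecture's):

```latex
\[
Z = \int \mathcal{D}\mathbf{m}(\mathbf{x})\; e^{-\beta \mathcal{H}[\mathbf{m}]},
\qquad
\beta \mathcal{H} = \beta F_0 + \int d^{d}x \left[
\frac{t}{2}\, m^{2} + u\, m^{4} + v\, m^{6} + \cdots
+ \frac{K}{2} (\nabla \mathbf{m})^{2}
+ \frac{L}{2} (\nabla^{2} \mathbf{m})^{2} + \cdots
- \mathbf{h} \cdot \mathbf{m}
\right],
\]

% with the rotationally invariant combinations
\[
m^{2} = \sum_{\alpha=1}^{n} m_{\alpha} m_{\alpha},
\qquad
(\nabla \mathbf{m})^{2} = \sum_{\alpha=1}^{n} \sum_{i=1}^{d}
\partial_{i} m_{\alpha}\, \partial_{i} m_{\alpha}.
\]
```

Every index, whether a field component alpha or a spatial direction i, appears twice, which is the symmetry requirement stated above.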
In this case, the analog of small enough vibrations is to be close enough to the critical point, exactly where we want to calculate these universal exponents-- and then to calculate the universal exponents, which will turn out to be a hard problem that we will struggle with in the next many lectures.
MIT 8.334 Statistical Mechanics II, Spring 2014 -- Lecture 10: Perturbative Renormalization Group, Part 2
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, let's start. So last lecture we started with the strategy of using perturbation theory to study our statistical field theories. For example, we need to evaluate a partition function by integrating over all configurations of a field-- let's say n components in d dimensions-- with some kind of a weight that we can write as e to the minus beta H. And the strategy of perturbation was to find a part of this Hamiltonian that we can calculate exactly, and the rest of it, hopefully, treating as a small quantity and doing perturbative calculations. Now, in the context of the Landau-Ginzburg theory that we wrote down, this beta H0 was best described in terms of Fourier modes. So basically, we could make a change of variables to integrate over all configurations of Fourier modes, with the same breakdown of the weight in the language of Fourier modes. Since the underlying theories that we were writing had translational symmetry-- every point in space was the same as any other-- the decomposition into modes was immediately accomplished by going to the Fourier representation. And each component of each q-value would correspond to essentially an independent weight that we could expand in some power series in this parameter q, which is an inverse wavelength. And the lowest order terms are what determines the longer and longer wavelengths. So there is some m of q squared characterizing this part of the Hamiltonian. And since in real space we had emphasized some form of locality, the interaction part in real space could be written simply in terms of a power series, let's say, in m.
Which means that if we were to then go to Fourier space, things that are local in real space become non-local in Fourier space. And the first of those terms that we treated as a perturbation would involve an integral over four factors of m tilde. Again, translational invariance forces the four q's that appear in the multiplication to add up to 0. So I would have m of q1 dot product with m of q2, m of q3 dot product with m of q4, which is minus q1 minus q2 minus q3. And I can go on and do higher order. So once we did this, we could calculate various-- let's say, two-point correlation functions, et cetera, in perturbation theory. And in particular, the two-point correlation function was related to the susceptibility. And setting q to 0, we found an expression for the inverse susceptibility, where the 0th order just comes from the t that we have over here. And because of this perturbation, calculated to order of u, we had 4u times n plus 2 and an integral over modes of just the variance of the modes, if you like. Now, the first thing that we noted was that the location of the point at which the susceptibility diverges, or its inverse vanishes, is no longer at t equals 0. But we can see, just setting this expression to 0, that we have a tc, which is minus 4u n plus 2, integral d d k over 2 pi to the d, 1 over-- let's put the K here-- K k squared, plus potentially higher-order terms. Now, this is an integral that in dimensions above 2-- let's for the time being focus on dimensions above 2-- has no singularity as k goes to 0. k goes to 0, which is long wavelength, is well-behaved. The integral could potentially be singular if I were allowed to go all the way to infinity, but I don't go all the way to infinity. All of my theories have an underlying short wavelength, and hence there is a maximum in the Fourier modes, which renders this a completely well-behaved integral. In fact, I forgot higher-order terms here-- I could put them in.
But if I forget them, I can evaluate what this correction to tc is. It is minus 4u n plus 2 over K, times-- I've been writing S d, the surface area of a d-dimensional unit sphere, divided by 2 pi to the d. And then I have the integral of k to the d minus 3, which integrates to lambda to the d minus 2, divided by d minus 2. I wrote that explicitly because we are going to encounter this combination a lot of times, and so we will give it a name, K sub d. So it's just the solid angle in d dimensions divided by 2 pi to the d. OK, so essentially, in dimensions greater than 2, nothing much happens. There is a shift in the location of the singularity compared to the Gaussian. Because you are no longer at the Gaussian-- you are at a theory that has additional stabilizing terms such as m to the fourth, et cetera-- there is now no problem for t going to negative values. The thing that was more interesting was what happens in the vicinity of this new tc. To lowest order we got this form of a divergence, and then at the next order, I had a correction, again coming from this form, 4u n plus 2, times an integral. And actually, this was obtained by taking the difference of two of these factors evaluated at t and tc. That's what gave me the t minus tc outside. And then I had an integral that involved two of these factors. Presumably, to be consistent to lowest order, I have to evaluate them as small as I can. And so I would have two factors of k squared, or k squared plus something-- presumably, higher-order terms. The thing about these integrals, as opposed to the previous one, is that, again, I can try to look at the behavior at large k and small k. At large k, no matter how many terms I add to the series, ultimately I will be concerned by cutting it off by lambda. Whereas, if I have set t equal to 0 in both of these denominator factors, I now have a singularity at k goes to 0 in dimensions less than 4.
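The closed form quoted for the radial integral, and the definition of K_d as the solid angle over (2 pi)^d, can be checked numerically. This is a sketch; the values of d and lambda are arbitrary test choices, and the stiffness K is set to 1.

```python
import math

def K_d(d):
    """K_d = S_d / (2*pi)^d, with S_d the surface area of the unit sphere."""
    S_d = 2 * math.pi ** (d / 2) / math.gamma(d / 2)
    return S_d / (2 * math.pi) ** d

def shift_integral(d, lam, n_steps=200000):
    """Midpoint-rule evaluation of the angular-averaged integral
    int d^d k / (2 pi)^d * 1/k^2  =  K_d * int_0^lam dk k^(d-3)."""
    dk = lam / n_steps
    total = sum(((i + 0.5) * dk) ** (d - 3) * dk for i in range(n_steps))
    return K_d(d) * total

def closed_form(d, lam):
    """The result quoted in the lecture: K_d * lam^(d-2) / (d-2), for d > 2."""
    return K_d(d) * lam ** (d - 2) / (d - 2)

for d in (3, 4, 5):
    print(d, shift_integral(d, lam=1.0), closed_form(d, lam=1.0))
```

The integrand k^(d-3) is perfectly finite at k = 0 for d > 2, which is the board's point that the shift in tc is non-singular in those dimensions.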
The integral would blow up in dimensions less than 4 if I am allowed to go all the way to 0, which is arbitrarily long wavelengths. Now in principle, if I am not exactly at tc and I'm looking at the singularity away from tc, I expect on physical grounds that fluctuations will persist up to some correlation length. So the shortest value of k that I should really physically be able to go to, irrespective of how careful or careless I am with the factors of t and t minus tc that I put here, is of the order of the inverse of the physical correlation length. And as we saw, this means that there is a correction that is of the form of u over K squared, times xi to the power of 4 minus d. And I emphasized that the dimensionless combination of the parameter u that potentially can be added as a correction to a number of order 1 is u divided by K squared, multiplied by some length scale to the power of 4 minus d. Above four dimensions, the integral is convergent at small values, and the length scale that would appear here would be some kind of a short distance cutoff, like the averaging length. Whereas below four dimensions, the divergence of the correlation length is the thing that will lead this perturbation theory to be kind of difficult and [INAUDIBLE]. So this is an example of a divergent perturbation theory. So what we are going to do, in order to be able to make sense out of it, and see how this divergence can be translated to a change in exponent-- which is what we are physically expecting to occur-- is to reorganize this perturbation theory in a conceptual way that is helped by this perturbative renormalization group approach. So we keep the perturbation theory, but change the way that we look at perturbation theory by appealing to renormalization. So you can see that throughout doing this perturbation theory, I end up having to do integrals over modes that are defined in the space of the Fourier parameter q.
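In symbols, the structure being described is the following (a sketch assembled from the quantities defined above, not a formula written verbatim on the board):

```latex
\[
\chi^{-1}(t) \;\simeq\; (t - t_c)\left[\, 1 \;-\; 4u(n+2)
\int_{0}^{\Lambda} \frac{d^{d}k}{(2\pi)^{d}}\,
\frac{1}{K k^{2}\,\left(t + K k^{2}\right)} \;+\; \cdots \right].
\]

% Cutting the k -> 0 end of the integral off at k ~ 1/xi, the dimensionless
% correction to the number 1 in the bracket scales as
\[
\frac{u}{K^{2}}\, \xi^{\,4-d},
\]
% which stays finite (set by the short-distance cutoff) for d > 4, but
% diverges as xi -> infinity for d < 4.
```

This is exactly the sense in which the naive perturbation series in u breaks down below four dimensions.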
And a nice way to implement this coarse graining that has led to this field theory is to imagine that this integration is over some sphere, where the maximum inverse wavelength, or wave number, that is allowed is some lambda. And so the task that I have on the first line is to integrate over all modes that live in this sphere. And just on physical grounds, we don't expect to get any singularities from the modes that are at the edge. We expect to get singularities by considering what's going on at long wavelengths, or q near 0. So the idea of renormalization group was to follow three steps. The first step was to do coarse graining, which was to take whatever your shortest wavelength was, make it b times larger, and average. That's in real space. In Fourier space, what that amounts to is to get rid of all of the variations at wave numbers above lambda over b. So what I can do is to say that I had a whole bunch of modes m of q. I am going to subdivide them into two classes. I will have the modes sigma of q-- maybe I will write it in a different color, sigma of q-- that are the ones that are sitting here. So these are the sigmas. And these correspond to wave numbers that are between lambda over b and lambda. And I will have a bunch of other variables that I will call m tilde. And these live close to the singularity at q equals 0, with wave numbers from 0 up to lambda over b. So essentially, getting rid of the picture where we had fluctuations at short length scales amounts to integrating over the Fourier modes that represent your field that lie in this shell. So I want to do that as an operation that is performed, let's say, at the level of the partition function. So I can say that my original integration can be broken up into an integration over this m tilde and an integration over sigma. So that's just rewriting that rightmost integral up there. And then I have a weight, an exponential. OK, let's write it out explicitly. So the weight is composed of beta H0 and the u.
Now, we note that the beta H0 part, just as we did already for the case of the Gaussian, does not mix up these two classes of modes. So I can write that part as an integral from 0 to lambda over b, d d q over 2 pi to the d. And these are things that are really inside, so I could also label them by q lesser. So I have m tilde of q lesser squared, and this multiplies t plus K q lesser squared and so forth, over 2. I have a similar term, which is the modes that are between lambda over b and lambda. So I just simply broke my overall integration in beta H0 into two parts. Now I have the higher q numbers, and these are sigma of q greater squared-- again, the same weight, t plus K q greater squared and so forth, over 2. Make sure this minus is in line with this. And then I have, of course, the u. So then I have a minus u. Now, I won't write this explicitly-- I will write it explicitly on the next board. But clearly, implicitly it involves both m tilde and sigma mixed up with each other. So I have just rewritten my partition function after subdividing it into these two classes of modes and just hiding all of the complexity in this function that mixes the two. So let's rewrite this. I have an integral over the modes that I would like to keep, the m tilde. And there is a weight associated with them that will, therefore, not be integrated. This is the integral from 0 to lambda over b, d d q lesser over 2 pi to the d, t plus K q lesser squared, et cetera, times m tilde of q lesser squared. Now, if I didn't have the u there, I could immediately perform the Gaussian integrals over the sigmas. Indeed, we already did this. And the answer would be e to the minus-- there are n components to this vector, so the answer is going to be multiplied by n; the 1/2 is because of the square root that I get from each mode. I get a factor of volume, integral d d q greater over 2 pi to the d, integrated from lambda over b to lambda, of log of t plus K q greater squared and so forth.
So if I didn't have the u, this would be the answer for doing the Gaussian integration. But I have the u, so what should I do? The answer is very simple. I write it as the average of e to the minus u of m tilde and sigma. So what I have done is to say that with this weight, that is a Gaussian weight for sigma, I average the function e to the minus u. If you like, this is a Gaussian average over sigma. So to write it explicitly, what I have stated is that an average where I integrate out the high-frequency, short wavelength modes is, by definition: integrate over all configurations of sigma, with the Gaussian weight, whatever object you have, and then normalize by the Gaussian. Of course, in our case, our O depends both on sigma and m tilde, so the result of this averaging will be a function of m tilde. And indeed, I can write this as an integral over m tilde of q with a new weight, e to the minus beta H tilde, which only depends on m tilde, because I got rid of-- I integrated over-- the sigmas. And by definition, my beta H tilde that depends only on m tilde has a part that is the integral from 0 to lambda over b, d d q lesser over 2 pi to the d, of the Gaussian weight over the range of modes that are kept. There is a part that is just this constant term-- if I write it in this fashion, there is an overall constant. Clearly, what this constant is, is the free energy of the modes that I have integrated out, assuming that they are Gaussian, in this interval. The answer is proportional to volume. But as usual, when we are thinking about weights and probabilities, overall constants don't matter. But I can certainly continue to write that over here. So that part went to here, this part went to here. And so the only part that is left is minus log of the average of e to the minus u of m tilde and sigma, after I get rid of the sigmas.
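The bookkeeping on the board can be summarized as follows (a sketch in the lecture's notation, with the angular brackets denoting the Gaussian average over the shell modes sigma; the overall constant beta F0-type piece is kept explicitly here):

```latex
\[
e^{-\beta \tilde{H}[\tilde{m}]}
= e^{-\beta H_0[\tilde{m}]}
\int \mathcal{D}\sigma \; e^{-\beta H_0[\sigma]}\, e^{-U[\tilde{m},\,\sigma]}
\]
% which, after normalizing the Gaussian integral over sigma, gives
\[
\beta \tilde{H}[\tilde{m}]
= \beta H_0[\tilde{m}]
+ \frac{nV}{2}\int_{\Lambda/b}^{\Lambda} \frac{d^{d}q}{(2\pi)^{d}}
\ln\!\left(t + K q^{2} + \cdots\right)
- \ln \left\langle e^{-U[\tilde{m},\,\sigma]} \right\rangle_{\sigma}.
\]

% The last term is the only non-trivial piece; expanding the logarithm
% generates the cumulant series
\[
-\ln \left\langle e^{-U} \right\rangle_{\sigma}
= \langle U \rangle_{\sigma}
- \frac{1}{2}\left( \langle U^{2} \rangle_{\sigma}
- \langle U \rangle_{\sigma}^{2} \right) + \cdots
\]
```

The first two terms on the right are exactly the Gaussian weight of the kept modes and the free energy of the eliminated modes described above.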
So the only place that I haven't really evaluated things is where this u is appearing inside the exponential, log, average, et cetera. So what I can do is I can perturbatively expand this exponential over here. So I will get log of 1 minus u, which is approximately minus u. So the first term here would be u averaged over the Gaussian. The next term will be minus 1/2 u squared average minus u average squared, and so forth. So you can see that the variance appeared at this stage. And generally, the l-th term in the series would be minus 1 to the l divided by l factorial. And we saw this already. The log of e to the something is the generator of cumulants. So this would be the l-th power of u; the cumulant here would appear. And again, the cumulant would serve the function of picking out connected pieces, as we shall see shortly. So that's what we are going to do. We are going to insert the u's, all things that go beyond the Gaussian-- but initially, just the m to the fourth part-- inside this series and term by term calculate the corrections to this weight that we get after we integrate out the long lambda. Or sorry, the short wavelength modes, or the large q modes. OK? So let's focus on this first term. So what is this u that depends on both m tilde and sigma? And I have the expression for u up there. So I can write it as u integral dd q1 dd q2 dd q3. To be symmetric in all of the four q's, I write an integration over the fourth q, but then enforce by a delta function that the sum of the q's should be 0. And then I have four factors of m, but an m, depending on which part of the q space I am encountering, is either a sigma or an m tilde. So without doing anything wrong, I can replace each m with an m tilde plus sigma. So depending on where my q1 is in the integrations from 0 to lambda, I will be encountering either this or this. And then I have the dot product of that with m tilde of q2 plus sigma of q2.
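The expansion just described is the standard cumulant series; in the lecture's notation it can be sketched as:

```latex
-\ln\left\langle e^{-U}\right\rangle_{\sigma}
 = \left\langle U\right\rangle_{\sigma}
 - \frac{1}{2}\left(\left\langle U^{2}\right\rangle_{\sigma}
                    - \left\langle U\right\rangle_{\sigma}^{2}\right)
 + \cdots
 = -\sum_{\ell=1}^{\infty} \frac{(-1)^{\ell}}{\ell!}\,
     \left\langle U^{\ell}\right\rangle_{\sigma}^{c},
```

where the superscript c denotes the l-th cumulant of U with respect to the Gaussian weight for sigma.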
And then I have the dot product that would correspond to m tilde of q3 plus sigma of q3 with m tilde of q4 plus sigma of q4. So that's the structure of my [INAUDIBLE]. And again, what I have to do in principle is to integrate out the sigmas keeping the m tildes when I perform this averaging over here. So let's write down, if I were to expand this thing before the integration, what are the types of terms that I would get? And I'll give them names. One type of term that is very easy is when I have m tilde of q1 dotted with m tilde of q2, m tilde of q3 dotted with m tilde of q4. If I expand this, there are 2 terms per bracket and there are 4 brackets, so there are 16 terms; only 1 of these terms is of this variety out of the 16. What I will do is also now introduce a diagrammatic representation. Whenever I see an m tilde, I will include a straight line. Whenever I see a sigma, I will include a wavy line. So this entity that I have over here is composed of four of these straight lines. And I will indicate that by this diagram, q1, q2, q3, q4. And the reason is, of course-- first of all, there are four of these. So this is a so-called vertex in a diagrammatic representation that has four lines. And secondly, the lines are not all totally equivalent because of the way that the dot products are arranged. Say, q1 and q2 that are dot product together are distinct, let's say, from q1 and q3 that are not dot product together. And to indicate that, I make sure that there is this dotted line in the vertex that separates and indicates which two are dot product to each other. Now, the second class of diagram comes when I replace one of the m tildes with a sigma. So I have sigma of q1 dotted with m tilde of q2, m tilde of q3 dotted with m tilde of q4. Now clearly, in this case, I had a choice of four factors of m tilde to replace with this. So of the 16 terms in this expansion, 4 of them belong to this class.
Which if I were to represent diagrammatically, I would have one of the legs replaced with a wavy line and all the other legs staying as solid lines. The third class of terms corresponds to replacing two of the m tildes with sigmas. Now here again, I have a choice whether the second one is a partner of the first one that became sigma, such as this one, sigma of q1 dotted with sigma of q2, m tilde of q3 dotted with m tilde of q4. And then clearly, I could have chosen one pair or the other pair to change into sigmas. So there are two terms that are like this. And diagrammatically, the wavy lines belong to the same branch of this object. OK, next. Actually, I have another thing when I replace two of the m tildes with sigma, but now belonging to two different elements of this dot product. So I could have sigma of q1 dotted with m tilde of q2. And then I have sigma of q3 dotted with m tilde of q4. In which case, in each of the pairs I had a choice of two for replacing m tilde with sigma. So that's 2 times 2. There are four terms that have this character. And if I were to represent them diagrammatically, I would need to put two wavy lines on two different branches. And then I have the possibility of three things replaced. So I have sigma of q1 sigma of q2 sigma of q3 m tilde of q4. And again, now it's the other way around. One term is left out of 4 to be m tilde. So this is, again, a degeneracy of 4. And diagrammatically, I have three lines that are wavy and one line that is solid. And at the end of the story, number 6, I will have one diagram which is all sigmas, which can be represented essentially by all wavy lines. And to check that I didn't make any mistake in my calculation, the sum of these numbers had better be 16. So that's 5, 7, 11, 15, 16. All right? Now, the next step of the story is to do these averages. So I have to do the average. Now, the first term doesn't involve any sigmas. All of my averages here are obtained by integrating over sigmas.
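The bookkeeping of the six classes can be checked mechanically. A small sketch (not from the lecture) that enumerates which of the four legs of the vertex become sigma lines and reproduces the degeneracies 1, 4, 2, 4, 4, 1:

```python
from itertools import product

# Each of the 4 legs of the vertex is either an m-tilde (0) or a sigma (1).
# Legs (1,2) and legs (3,4) are the two dot-product pairs.
counts = {}
for legs in product((0, 1), repeat=4):
    n_sigma = sum(legs)
    if n_sigma == 2:
        # Class 3: both sigmas on the same dot-product pair.
        # Class 4: one sigma on each pair.
        same_pair = legs in ((1, 1, 0, 0), (0, 0, 1, 1))
        key = "3" if same_pair else "4"
    else:
        key = {0: "1", 1: "2", 3: "5", 4: "6"}[n_sigma]
    counts[key] = counts.get(key, 0) + 1

# Degeneracies of classes 1..6, and the total of 2 choices per bracket,
# 4 brackets = 16 terms.
assert [counts[k] for k in "123456"] == [1, 4, 2, 4, 4, 1]
assert sum(counts.values()) == 16
```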
If there are no sigmas to integrate, after I do the averaging here I essentially get the same thing back. So I will get this same expression. And clearly, that would be a term that would contribute to my beta H tilde, which is identical to what I had originally. It is, again, m to the fourth. So that we understand. Now, the second term here, what is the average that I have to do here? I have one factor of sigma over which I have to average. But the weight that I have is even in sigma. So the average of sigma, which is Gaussian-distributed, is 0. So this will give me 0. And clearly, here also there is a term that involves three factors of sigma. Again, by symmetry this will average out to 0. Now, there is a way of indicating what happens here. See, what happens here is that I will have to do an average of this thing. The m tildes are not part of the averaging. They just go out. The average moves all the way over here. And the average of sigma of q1, sigma of q2, I know what it is. It is going to be-- I could have just written it over here. It's 2 pi to the d delta function q1 plus q2 divided by t plus k q squared. Maybe I'll explicitly write it over here. So what we have here is that the average of sigma of q1 with some index, sigma of q2 with some other index, is-- first of all, the two indices have to be the same. I have a delta function q1 plus q2. And then I have t plus k q1 squared and so forth. So it's my usual Gaussian. So essentially, you can see that one immediate consequence of this averaging is that previously these things had two different momenta and potentially two different indices. They get to be the same thing. And the fact that the labels that were assigned to this, the q and the index alpha, are forced to be the same, we can diagrammatically indicate by making this a closed line. So we are going to represent the result of that averaging with essentially taking-- these two lines are unchanged. They can be whatever they were.
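Written out, the Gaussian average being used is, in the same symbols:

```latex
\left\langle \sigma_{\alpha}(\mathbf q_1)\,\sigma_{\beta}(\mathbf q_2)\right\rangle
 = \delta_{\alpha\beta}\,
   \frac{(2\pi)^d\,\delta^{d}(\mathbf q_1+\mathbf q_2)}{t + K q_1^{2} + \cdots},
\qquad \Lambda/b < |\mathbf q_{1}|, |\mathbf q_{2}| < \Lambda .
```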
These two lines really are joined together through this process. So we indicate them that way. And similarly, when I do the same thing over here, I do the averaging of this, and the answer I can indicate by leaving these two lines by themselves and joining these two wavy lines together in this fashion. Now, when you do-- this one we said is 0. So there's essentially one that is left, which is number 6. For number 6, we do our averaging. And for that we have to use, for the average of a product of four sigmas that are Gaussian-distributed, Wick's theorem. So one possibility is that sigma 1 and sigma 2 are joined, and then sigma 4 and sigma 3 have to be joined. So basically, I took sigma 1 and sigma 2 and joined them, sigma 3 and sigma 4 and joined them. But another possibility is I can take sigma 1 with sigma 3 or sigma 4. So there are really two choices. And then I will have a diagram that is like this. Now, each one of these operations and diagrams really stands for some integration and result. And let's, for example, pick number 3. For number 3, what we are supposed to do is to do the integration. Sorry. First of all, number 3 has a numerical factor of 2. This is something that is proportional to u when we take the average. I have in principle to do integration over q1 q2 q3 q4. OK. The m tilde of q3 and m tilde of q4 in this diagram were not averaged over. So that term remains. I did the averaging over q1 and q2. When I did that averaging, I, first of all, got a delta alpha alpha because those were two things that were dot product to each other, so they were carrying the same index to start with. I have a 2 pi to the d, a delta function q1 plus q2. And I have t plus k q1 squared. Now, delta alpha alpha. Summing over alpha gives a factor of n. And when you look at these diagrams, quite generally whenever you see a loop, with a loop you would associate a factor of n because of the index that runs and gets summed over.
So this answer is going to be proportional to 2 u n. Now, q1 plus q2 is set to 0 by the delta function. So q3 and q4 have to add up to 0. So for the part that involves q3 and q4, the m tilde, essentially I will get an integral dd-- let's say whatever q3. It doesn't matter because it's an index of integration. I have m tilde of q3 squared. And again, q3, it is something that goes with one of these m tildes. So this is an integration that I have to do between 0 and lambda over b. So there is essentially one integration left because q1 and q2 are tied to each other. So this is an integral. Let's call the integration variable that was q1-- I could call it k, it doesn't matter-- over 2 pi to the d, the same integral that we've seen before. Except that since this originated from the sigmas, the integration here is from lambda over b to lambda. So this is basically a number that I can take, say, out here and regard as a coefficient that multiplies a term that is m tilde squared. And similarly, 4. We said we have four diagrams of this variety, so this would be a contribution that is 4u. I can read out the whole-- write out the whole thing. Certainly, I have all of this. In that case, I have m tilde q3 m tilde-- well, let's see. I have m tilde of q2 m tilde of q4. And they carry different indices because they came from two different dot products. And then I have to do an average over sigma 1 and sigma 3, which carry different indices, alpha and beta. That gives a 2 pi to the d delta function q1 plus q3, divided by t plus k q1 squared. Again, since q1 plus q3 is 0 and the sum of the four q's is 0, these two have to add up to 0. So the answer, again, will be written as 4u integral 0 to lambda over b dd of some q divided by 2 pi to the d, m tilde of q squared. And then actually, the same integration, lambda over b to lambda, dd k over 2 pi to the d, 1 over t plus k k squared. So out of the six terms, two are 0. Two are explicitly calculated over here. One is trivially just m to the fourth.
The last one is basically summing up all of these things. But these explicitly do not depend on m tilde. So I'll just call the result of doing all of this sum V delta f v at level 1. In the same way that integrating out the modes sigma that I'm not interested in and averaging over them gave a constant of integration, that constant of integration gets corrected at order of u over here. I don't need to explicitly take care of it. So given all of this information, let's write down what our last line from the previous board is. So our intent was to calculate a weight that governs these coarse-grained modes. And our answer is that, first of all, we will get a bunch of constants, delta f v 0 plus delta f v 1, that we don't really care about. They're just an overall change that doesn't matter for the probabilities. It's just a contribution to the free energy. And then we start to get things. And to the lowest order what we had was replacing the Gaussian weight, but only over this permitted set of wavelengths. So I have dd q lesser, let's say, over 2 pi to the d, t plus k q lesser squared and so forth, divided by 2, m tilde of q squared. Then, term number 1 in the series gave me what? It gave me something that was equivalent to my u if I were to Fourier transform back to real space, m to the fourth. Except that my cutoff has been shifted to lambda over b. So I don't want to bother to write down that full form in terms of Fourier modes. Essentially, if I want to write this explicitly, it is just like that line that I have, except that for the integrations I'll have to explicitly indicate 0 to lambda over b. So the only terms that we haven't included are the ones that are over here. Now, you look at those terms and you find that the structure of these terms is precisely what we have over here. Except that there is a modification. There is a constant term that is added from this one and there's a constant term that is added from that one.
So the effect of those things I can capture by changing the parameter t to something else, t tilde. So you can see that to order of u squared, which I haven't calculated-- to order of u, the only effect of this coarse graining is to modify this one parameter so that t goes to t tilde. It certainly depends on how much I coarse grain things. And this is the original t plus the sum of these things. So I will have 2-- n plus 2 u integral dd k over 2 pi to the d, 1 over t plus k k squared. And presumably, higher-order terms are allowed, going from 0 to lambda over here. AUDIENCE: Question. PROFESSOR: Yes. Question? AUDIENCE: Yeah. When you have an integration over x, if you have previously defined lambda to be a cutoff in k space, might it be-- is it 1 over b lambda then? Or, is it b over lambda? Because-- PROFESSOR: OK. You're right. So previously, maybe the best way to write this would have been that there is a shortest length scale a. So I should really indicate what's happening here as the shortest length scale having gone from a to b times a. And there is always a relationship between the a and lambda, which is an inverse relation, but there are factors of 2 pi and things like that which I don't really want to bother with. It doesn't matter. Yes. AUDIENCE: Shouldn't it be t tilde is equal to t plus 4 multiplied by? PROFESSOR: Exactly. Good. Because the coefficients here are divided by 2. So that 2 I forgot. And I should restore it. And if I had gone a little bit further, I would have then started comparing this formula with this formula. And I realized that I should have had the 4. Clearly, the two formulas are telling me the same thing. You can see that they are almost exactly the same with the exception of how much integration. Yes? AUDIENCE: One other thing. The bounds of those integrals for your t tilde, shouldn't they be lambda over b to lambda? PROFESSOR: Yes. Lambda over b to lambda.
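Collecting diagram [3] (coefficient 2un) and diagram [4] (coefficient 4u), and remembering that the quadratic term in the weight carries a factor of 1/2 so the shift in t is twice their sum, the first-order result settled in the exchange above can be sketched as:

```latex
\tilde t = t + 4u\,(n+2)\int_{\Lambda/b}^{\Lambda}
           \frac{d^d\mathbf k}{(2\pi)^d}\,
           \frac{1}{t + K k^{2}}
           + \mathcal O(u^{2}),
\qquad
\tilde K = K, \quad \tilde u = u .
```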
And it is because of that that I don't really have to worry. Because we saw that when we were doing straightforward perturbation theory, the reason that perturbation theory was blowing up in my face was integrating all the way to the origin. And the trick of the renormalization group is that by averaging over that shell, I don't really yet reach the singularity that I would have at k equals to 0. This integral by itself is not problematic at k equals to 0, but future integrals would be. Any other mistakes? No. All right. But the other part of this story is that the effect of this coarse graining at lowest order in u was to modify this parameter t. But importantly, to do nothing else. That is, we can see that in this Hamiltonian that we had written, the k that is rescaled is the same as the old k. And the u is the same as the old u. That is, to the lowest order the only effect of coarse graining was to modify the parameter t that is the coefficient of m squared. But we have not completed our task of constructing a renormalization group. Renormalization group had three steps. The most difficult step, which was the coarse graining, we have completed, but now we have generated a grainy picture in which the shortest wavelengths are a factor of b larger than what we had before. In order to make our pictures look the same, we have to do steps 2 and 3 of rg. Step 2 was to shrink all of the lengths in real space, which amounts to q prime being b times q. And that will restore the upper part of the q integration to be lambda. And there was a rescaling that we had to perform for the magnitude of the fluctuations, which amounted to replacing the m tilde of q with z m prime. So this would be, if you like, q lesser and this would be m prime. Now, the reason these steps are trivial is because just whenever I see a q, I replace it with b inverse q prime. Whenever I see an m tilde, I replace it with z m prime.
So then I will find the Hamiltonian that characterizes the m prime variables, the rg-implemented variables, which is-- OK, there is a bunch of constants out front. There is the delta fv0 plus delta fv1. In the sign, I had to put it as plus. But really, it doesn't matter. Then, I go and write down what I have. The integration over q prime, after the rescaling is performed, goes back to the same cutoff, or the same shortest wavelength, as before. Except that when I do this replacement of q lesser with q prime, I will get a factor of b to the minus d down here, because there are d integrations. And then I have t tilde. The next term is k tilde, but k tilde is the same thing as k. It goes with q squared. So this becomes q prime squared, and then I will get a b to the minus 2. Higher-order gradient terms will get more factors of b to the minus 2. And then I have m tilde, which is replaced by z m prime. And there are two of them, so I will get z squared m prime of q prime squared. If I were to explicitly now write the factors that go into the construction of u, since u had three integrations over q-- left board over there-- I will get three integrations over q prime, giving me a factor of b to the minus 3d. And then I have four factors of m tilde that become m prime. Along the way, I will pick up four factors of z. And then, of course, order of u squared. So under these three steps of rg, what happened was that I generated t prime, which was z squared b to the minus d t tilde. I generated u prime, which was z to the fourth b to the minus 3d u, the original u. And I generated k prime, which was z squared b to the minus d minus 2 k. And I could do the same for various other parameters. Now again, we come up with this issue of what to choose for zeta. Sorry, for z. And what I had said previously was that we went through all of this exercise of doing the Gaussian model via rg in order to have an anchoring point.
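These rescaling factors can be sanity-checked numerically. A small sketch (not from the lecture) that applies the Gaussian-model choice z = b to the 1 plus d over 2, used previously, and confirms that t prime picks up b squared, u prime picks up b to the 4 minus d, and k prime is unchanged:

```python
import math

def rescaled_exponents(d, b):
    """Apply t' = z^2 b^-d t~, u' = z^4 b^-3d u, K' = z^2 b^-(d+2) K
    with the Gaussian choice z = b^(1 + d/2); return the three factors."""
    z = b ** (1 + d / 2)
    t_factor = z**2 * b**(-d)        # should equal b^2
    u_factor = z**4 * b**(-3 * d)    # should equal b^(4 - d)
    K_factor = z**2 * b**(-d - 2)    # should equal 1 (K is unchanged)
    return t_factor, u_factor, K_factor

for d in (2, 3, 4, 5):
    t_f, u_f, K_f = rescaled_exponents(d, b=2.0)
    assert math.isclose(t_f, 2.0**2)
    assert math.isclose(u_f, 2.0**(4 - d))
    assert math.isclose(K_f, 1.0)
```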
And there we saw that the thing that we were interested in was to look at the point where k prime was the same as k. So let's stick with that choice and see what the consequences are. So choose z such that k prime is the same as k, which means that I choose my z to be b to the 1 plus d over 2, exactly as I had done previously for the Gaussian model. So now I can substitute these values. And therefore, see that following a rescaling by a factor of b, the value of my t prime: z squared b to the minus d. I will get b to the power of 2 plus d minus d. So that would give me b squared. And I have t plus 4u n plus 2, this integral from lambda over b to lambda of dd k over 2 pi to the d, 1 over t plus k k squared, and so forth. Presumably, order of u squared. And then my factor of u, rescaled by b: I have to put four factors of z. So I will get b to the 4 plus 2d, and then 3d gets subtracted, so I will get b to the 4 minus d, and u again. But presumably, order of u squared. So the first factors are precisely the things that we had done and obtained for the Gaussian model. So the only thing that we gained by this exercise so far is this correction to t. We will see that that correction is not that important. And in order to really gain an understanding, we have to go to the next order. But let's use this opportunity to also make some changes of terminology that are useful. So clearly, the way that we had constructed this-- and in particular, if you are thinking about averaging over spins, et cetera, in real space-- the natural thing to think about is that maybe your b is a factor of 2, or a factor of 3. You sort of group things by a factor of twice as much and then do the averaging. But when you look at things from the perspective of this momentum shell, this b can be anything. And it turns out to be useful, just as a language tool, not as a conceptual tool, to make this b very close to 1 so that effectively you are just removing a very, very tiny, thin shell around the boundary.
So essentially what I am saying is choose b that is almost 1, but maybe a little bit shifted from 1. Then clearly, as b goes to 1, t prime has to go to t, and u prime has to go to u. So what I can do is I can define t prime, at rescaling factor 1 plus delta l, to be t plus a small amount delta l dt by dl, and order of delta l squared. And similarly, I can write u prime to be u plus delta l du by dl and higher order. And the reason is that if we now look at the parameter space-- t, u, whatever-- the effect of this procedure, rather than being some jump from some point to another point because we rescaled by a factor of b, is to go to a nearby point. So these things, dt by dl, du by dl, essentially point to the direction in which the parameters would change. And so they allow you to replace these jumps that you have by things that are called flows in this parameter space. So basically, you have constructed flows and vectors that describe the flows in this parameter space. Now, if I do that over here, we can also see some other things emerging. So b squared I can write as 1 plus 2 delta l. And then here, I have t. Now clearly here, if b was 1, the integral would be 0. I am integrating over a tiny shell. So the answer here when b goes to 1 is really just the area of that sphere multiplied by the thickness. And so what do I get? I get 4u n plus 2 as just the overall coefficient. Then, what is the surface area? I have the solid angle divided by 2 pi to the d. OK, you define the solid angle divided by 2 pi to the d to be kd. What's the value of the integrand on shell? On shell, I have to replace k with lambda. So I will get t plus k lambda squared. Actually, the surface area is Sd lambda to the d minus 1. But then the thickness is lambda delta l. The second one is actually very easy. It is 1 plus 4 minus d delta l times u. So you can see that if I match things to order of delta l, I get the following rg flows. So the left-hand side is t plus delta l dt by dl.
The right-hand side-- if I expand, there is a factor of t. So that gets rid of it. And I'm left with a term that is proportional to delta l, whose coefficient on the left-hand side is dt by dl. And on the right-hand side, I get either a factor of 2t from multiplying the 2 delta l with t, or, from multiplying the 1 with the result of this integration, I will get a 4u n plus 2 kd lambda to the d divided by t plus k lambda squared. And I don't need to evaluate any integrals. And the second flow equation, for the parameter u, is du by dl is 4 minus d u. Now, clearly in the language of flows, a fixed point is when there is no flow. So I look for where dt by dl and du by dl are both 0. And du by dl being 0 immediately tells us that u star has to be 0. And if I set u star equals to 0, I will see that t star has to be 0. So these equations have one and only one fixed point at this order. And then looking for relevance and irrelevance of going away from the fixed point can be captured by linearizing. That is, I write my t to be t star plus delta t. Of course, my t star and u star are both 0. But in general, if they were not at 0, I would linearize my, in general, non-linear rg recursions by going slightly away from the fixed point. And then the linearized form of the equation would say that going away by a small amount delta t, delta u-- and there could be more and more of these operators-- can be written in terms of a matrix multiplying delta t and delta u. And clearly, the matrix element for u is very simple. It is simply proportional to 4 minus d delta u. The matrix element for t? Well, there's two terms. First of all, there is this 2. And then if I am taking a derivative, there will be a derivative of this expression because there is a t-dependence here. But ultimately, since I'm evaluating it at u star equals to 0, I don't need to include that term here. But if I now make a variation in u, I will get an off-diagonal term here, which is 4 n plus 2 kd lambda to the d over t star plus k lambda squared. So these two can combine with each other.
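Collecting the pieces, the first-order flow equations and their linearization around the fixed point at the origin can be sketched in the notation above as:

```latex
\frac{dt}{dl} = 2t + \frac{4u\,(n+2)\,K_d\,\Lambda^{d}}{t + K\Lambda^{2}},
\qquad
\frac{du}{dl} = (4-d)\,u,
\qquad
\frac{d}{dl}\begin{pmatrix}\delta t\\ \delta u\end{pmatrix}
 = \begin{pmatrix}
     2 & \dfrac{4(n+2)\,K_d\,\Lambda^{d}}{t^{*}+K\Lambda^{2}}\\[4pt]
     0 & 4-d
   \end{pmatrix}
   \begin{pmatrix}\delta t\\ \delta u\end{pmatrix},
\qquad t^{*}=u^{*}=0 .
```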
Now, looking for relevance or irrelevance is then equivalent-- previously, we were talking about it in terms of the full equations with b that could have been anything, but now we have gone to this limit of infinitesimal b. Then, what I have to do is to find the eigenvalues of the matrix that I have for these flows. Now, for a matrix that has this structure, where there's 0 on one side of the diagonal, I immediately know that the eigenvalues are 2 and 4 minus d. So basically, depending on whether I'm in dimensions greater than 4 or less than 4, I will have either one or two relevant directions. In particular, if I look at what is happening for d that is greater than 4-- for d greater than 4, I will just have one relevant direction. And I look at the behavior and the flows that I am allowed to have in the two parameters t and u. Now, if my Hamiltonian only has t and u, I'm only allowed to look at the case where u is positive in order not to have weights that are unbounded. My fixed point occurs at 0, 0. And then one simple thing is that if u is 0, it will stay 0. And then dt by dl is 2t. So I know that I always have an eigen-direction that is along t and is flowing away with a velocity of 2, if you like. The other eigen-direction is not the axis t equals to 0. Because if t is 0 originally but u is nonzero, this term will generate some positive amount of t for me. So if I start somewhere on this axis, t will go in the direction of becoming positive. Above four dimensions, u will become less. So above four dimensions, you can see that the general trend of flows is going to be something like this. Indeed, there has to be a second eigen-direction because I'm dealing with a 2 by 2 matrix. And if you look at it carefully, you'll find that the second eigen-direction is down here and corresponds to a negative eigenvalue. So I basically would be having flows that go towards that.
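Because the linearized matrix is upper triangular, the eigenvalues can be read off the diagonal. A quick numerical check in pure Python (the off-diagonal entry is an arbitrary placeholder, since it does not affect the eigenvalues of a triangular matrix):

```python
def eigenvalues_2x2(a, b, c, d_):
    """Eigenvalues of [[a, b], [c, d_]] via the characteristic polynomial."""
    tr, det = a + d_, a * d_ - b * c
    disc = (tr * tr - 4 * det) ** 0.5
    return sorted(((tr - disc) / 2, (tr + disc) / 2))

d = 3            # spatial dimension, illustrative
off_diag = 7.3   # arbitrary placeholder for the off-diagonal entry

lams = eigenvalues_2x2(2.0, off_diag, 0.0, 4.0 - d)
assert lams == sorted([2.0, 4.0 - d])
# For d < 4 both eigenvalues are positive (two relevant directions);
# for d > 4 only the eigenvalue 2 remains positive (one relevant direction).
```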
And in general, the character of the flows: if I have parameters somewhere here, they would be flowing there. If I have parameters somewhere here, they would be flowing there. So physically, I have something like iron in five dimensions, for example. And it corresponds to being somewhere here. When I change the temperature of iron in five dimensions and I execute some trajectory such as this, all of the points that are on this side of the trajectory on their flow will go to a place where u becomes small and t becomes positive. So I will essentially go to this Gaussian-like fixed point that describes independent spins. All the things that are down here, which previously in the Gaussian model we could not describe, because the Gaussian model did not allow me to go to t negative-- because now I have a positive u, I have no problem with that. And I find that essentially I go to large negative t corresponding to some positive u. I can figure out what the amount of magnetization is. And so then in between them, there is a trajectory that separates paramagnetic and ferromagnetic behavior. And clearly, that trajectory at long scales corresponds to the fixed point that is simply the gradient squared. Because all the other terms went to 0, we know all of the correlation functions, et cetera, that we should see in this system. Unfortunately, if I look for d less than 4, what happens is that I still have the same fixed point. And actually, the eigen-directions don't change all that much. u equals to 0 is still an eigen-direction, and it is relevant with eigenvalue 2. But the other direction changes from being irrelevant to being relevant. So I have something like this. And the natural types of flows that I get kind of look like this. And now if I take my iron in three dimensions and change its temperature, I go from behavior that is kind of paramagnetic-like to ferromagnetic-like. And there is a transition point. But that transition point, I don't know what fixed point it goes to.
I have no idea. So the only difference by doing this analysis from what we had done just on the basis of scaling and Gaussian theory, et cetera, is that we have located the shift in t with u, which is what we had done before. So was it worth it? The answer, up to here, is no. But let's see if we had gone one step further what would have happened. The series is an alternating series. So the next order term that I get here I expect will come with some negative u squared. So let's say there will be some amount of work that I have to do, and for dt by dl I calculate something that is minus a u squared. I do some amount of work, and for du by dl I calculate something that is minus b u squared. But then you can see that if I search for the fixed point, I will find another one at u star, which is 4 minus d over b. So I will find another point. And indeed, we'll find that things that left here will go to that point. And so that point has one direction here that was relevant and becomes irrelevant. And the other direction is the analog of this direction, which still remains relevant, but with modified exponents. We will figure out what that is. So the next step is to find this b, and then everything would be resolved. The only thing to then realize is, what are we perturbing in? Because the whole idea of perturbation theory is that you should have a small parameter. And if we are perturbing in u and then basing our results on what is happening here, the location of this fixed point had better be small-- close to the original one around which I am perturbing. So what do I have to make small? 4 minus d. So we thought we were making a perturbation in u, but in order to have a small quantity the only thing that we can do is to stay very close to four dimensions and make a perturbation expansion around four dimensions.
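The closing point can be made concrete. If the second-order calculation gives du by dl equal to epsilon u minus b u squared, with epsilon equal to 4 minus d, then the new fixed point sits at u star equal to epsilon over b and attracts flows along u. A sketch with hypothetical numbers (the actual coefficient b is not computed here):

```python
eps = 0.1   # epsilon = 4 - d, small close to four dimensions
B = 2.0     # hypothetical positive coefficient of the -u^2 term

def du_dl(u):
    """One-loop-style flow with an assumed second-order term: eps*u - B*u^2."""
    return eps * u - B * u * u

u_star = eps / B                 # the nonzero fixed point, u* = eps/B
assert abs(du_dl(u_star)) < 1e-12

# Stability: the derivative of the flow at u* is eps - 2*B*u* = -eps < 0,
# so for eps > 0 (d < 4) the new fixed point is attractive along u.
slope = eps - 2 * B * u_star
assert abs(slope + eps) < 1e-12
```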
MIT_8334_Statistical_Mechanics_II_Spring_2014 | 13_Position_Space_Renormalization_Group_Part_1.txt

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, let's start. So we are going to change perspective again and think in terms of lattice models. So for the first part of this course, I was trying to change your perspective from thinking in terms of microscopic degrees of freedom to a statistical field. Now we are going to go back and try to build pictures around things that look more microscopic. Typically, in many solid state configurations, we are dealing with transitions that take place on a lattice. For example, imagine a square lattice, which is easy to draw, but there could be all kinds of cubic and more complex lattices. And then, at each site of this lattice, you may have one microscopic degree of freedom that is ultimately participating in the ordering and the phase transition that you have in mind. It could, for example, be a spin, or it could be one atom in a binary mixture. And what you would like to do is to construct a partition function, again, by summing over all degrees of freedom. And we need some kind of Hamiltonian. And what we are going to assume governs that Hamiltonian is the analog of this locality assumption that we have in statistical field theories. That is, we are going to assume that one of our degrees of freedom essentially talks to a small neighborhood around it. And the simplest neighborhood would be to basically talk to the nearest neighbor. So if I, for example, assign index i to each site of the lattice, let's say I have some variable at each site that could be something. Let's call it SI. Then my partition function would be obtained by summing over all configurations.
And the weight I'm going to assume in terms of this lattice picture to be a sum over interactions that exist between pairs of sites. So that's already an assumption that it's a pairwise thing. And I'm going to use this symbol ij with an angular bracket around it to indicate the sum over nearest neighbors. And there is some function of the variables that live on these neighbors. So basically, in the picture that I have drawn, interactions exist only between the places where you see lines. So this spin does not interact with this spin, this spin, or this spin, but it interacts with these four spins, to which it is nearest neighbor. Now clearly, the form of the interaction has to be dictated by your degrees of freedom. And the idea of this representation, as opposed to the previous statistical field theory that we had, is that in several important instances, you may want to know not only what the universal properties are, but also, let's say, the explicit temperature or phase diagram of the system as a function of external parameters as well as temperature. And if you have some idea of how your microscopic degrees of freedom interact with each other, you should be able to solve this kind of partition function, get the singularities, et cetera. So let's look at some simple versions of this construction and gradually discuss the kinds of things that we could do with it. So some simple models. The simplest one is the Ising model, where the variable that you have at each site has two possibilities. So it's a binary variable. In the context of a binary alloy, it could be, let's say, atom A or atom B that is occupying a particular site. There are also cases where, if this is a surface and you're adsorbing particles on top of it, there could be a particle sitting here or not sitting here. So that would be also another example of a binary variable. So you could indicate that by empty or occupied, zero or one.
But let's indicate it by plus or minus one as the two possible values that you can have. Now, if I look at the analog of the interaction that I have between two sites that are neighboring each other, what can I write down? Well, the most general thing that I can write down is, first of all, a constant, such as a shift of energy. Then there could be a linear term-- let's put all of these with minus signs-- in sigma i and sigma j. I assume that it is symmetric with respect to the two sites. For reasons to become apparent shortly, I will divide by z, which is the coordination number-- how many bonds per site. And then the next term that I can put is something like J sigma i sigma j. And actually, I can't put anything else. Because if you think of this as a power series in sigma i and sigma j, and sigma has only two values, it will terminate here. Because any higher power of sigma is either one or sigma itself. So another way of writing this is that the partition function of the Ising model is obtained by summing over all 2 to the n configurations that I have-- if I have a lattice of n sites, each site can have two possibilities-- of a kind of Hamiltonian, which is basically some constant, plus h sum over i sigma i, a kind of field that prefers one spin orientation or the other. So it is an analog of a magnetic field in this binary system. And basically, I converted it from a description over all bonds to a description over all sites. And that's why I had put the coordination number there. It's kind of a matter of convention. And then a term that prefers neighboring sites to be aligned as long as K is positive. So that's one example of a model. Another model that we will also look at is what I started to draw at the beginning. That is, at each site, you have a vector. And again, going in terms of the pictures that we had before, let's imagine that we have a vector that has n components. So s i is something that is in R n. And I will assume that the magnitude of this vector is one.
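As a concrete check of this bookkeeping, here is a minimal brute-force sum over the 2 to the n configurations of a short open Ising chain. The function name and parameter choices are my own; the zero-field closed form Z = 2^n (cosh K)^(n-1) used in the check is the standard open-chain result.

```python
from itertools import product
from math import exp, cosh

def ising_z(n, K, h=0.0, g=0.0):
    """Brute-force partition function of an open 1D Ising chain:
    -beta H = g*n + h*sum_i s_i + K*sum_i s_i s_{i+1}."""
    Z = 0.0
    for s in product([+1, -1], repeat=n):
        e = g * n + h * sum(s)
        e += K * sum(s[i] * s[i + 1] for i in range(n - 1))
        Z += exp(e)
    return Z

# At h = 0, g = 0, the standard closed form for an open chain of n sites
# is Z = 2^n (cosh K)^(n-1).
n, K = 6, 0.8
assert abs(ising_z(n, K) - 2 ** n * cosh(K) ** (n - 1)) < 1e-9
```

This exhaustive sum is only feasible for small n, but it gives an exact yardstick against which the approximations discussed below can be tested.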
So essentially, imagine that you have a unit vector at each site that can rotate. So if n equals one, you have essentially a one component vector. Its square has to be one, so the two values that it can have are plus one and minus one. So the n equals one case is the Ising model again. So this O(n) model is a generalization of the Ising model to multiple components. n equals two corresponds to a unit vector that can take any angle in two dimensions. And that is usually given the name xy model. n equals three is something that maybe you want to use to describe magnetic ions in this lattice. And classically, the direction of this ion could be anywhere. The spin of the ion can be anywhere on the surface of a sphere. So that's three components, and this model is sometimes called the Heisenberg model. Yes. AUDIENCE: In the Ising model, what's the correspondence between g hat, h hat, J hat and g, h, and K? PROFESSOR: Minus beta. Yeah. So maybe I should have written that. If I think of this as energy per bond, then in order to get the Boltzmann weight, I have to put a minus beta. So I would have said that this g, for example, is minus beta g hat, K is minus beta J hat, and h is minus beta h hat. So I should have emphasized that. So what I should have done here to be consistent, let's write this as minus beta. Thank you. That was important. If I describe these B's as being energies, then minus beta times that will be what goes in the Boltzmann factor. OK. So whereas these were examples whose continuum version we had more or less seen in the Landau-Ginzburg model, there are other symmetries that get broken, and things for which we didn't discuss what the corresponding statistical field theory is. A commonly used case pertains to something that's called a Potts model, where at each site you have a variable, let's call it s i, that takes q values: one, two, three, all the way to q.
And I can write what would appear in the exponent, minus beta H, to be a sum over nearest neighbors. And I can give some kind of an interaction parameter here, but a delta of s i, s j. So basically, what it says is that on each site of a lattice, you put a number-- could be one, two, three, up to q. And if two neighbors are the same, they like it, and they gain some kind of a weight. That is, if K is positive, it encourages that to happen. If the two neighbors are different, then it doesn't matter. You don't gain energy, but you don't really care which one it is. The underlying symmetry that this has is permutation. Basically, if you were to permute all of these indices consistently across the lattice in any particular way, the energy would not change. So permutation symmetry is what underlies this. And again, if I look at the case of q equals two, then at each site, let's say I have one or two. And one-one and two-two are things that gain energy; one-two or two-one don't. Clearly it is the same as the Ising model. So q equals two is another way of writing the Ising model. q equals three is something that we haven't seen. So at each site, there are three possibilities. Actually, when I started at MIT as a graduate student in 1979, the project that I had to do related to the q equals three Potts model. Where did it come from? Well, at that time, people were looking at the surface of graphite, which as you know, has this hexagonal structure. And then you can adsorb molecules on top of that, such as, for example, krypton. And krypton would want to come and sit in the center of one of these hexagons. But its size was such that once it sat there, you couldn't occupy any of the neighboring sites. So the next one would have to go, let's say, over here. Now, it is possible to subdivide this set of hexagons into three sublattices: one, two, three, one, two, three, et cetera. Actually, I drew this one incorrectly. It would be sitting here.
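The q equals two equivalence to the Ising model stated above can be checked directly: writing delta(s, s') = (1 + sigma sigma')/2 shows that each Potts bond factor equals e to the K/2 times an Ising bond factor at coupling K/2. A small numerical sketch on a ring (the function names are my own):

```python
from itertools import product
from math import exp

def potts_z(n, K, q=2):
    """Brute-force partition function of a q-state Potts ring:
    -beta H = K * sum over bonds of delta(s_i, s_j)."""
    Z = 0.0
    for s in product(range(q), repeat=n):
        bonds = sum(1 for i in range(n) if s[i] == s[(i + 1) % n])
        Z += exp(K * bonds)
    return Z

def ising_ring_z(n, K):
    """Ising ring, -beta H = K * sum over bonds of s_i s_j."""
    Z = 0.0
    for s in product([+1, -1], repeat=n):
        Z += exp(K * sum(s[i] * s[(i + 1) % n] for i in range(n)))
    return Z

# delta(s, s') = (1 + sigma sigma')/2, so each Potts bond factor is
# exp(K/2) * exp((K/2) sigma sigma'): the q = 2 Potts model at K is the
# Ising model at K/2, up to a constant exp(K/2) per bond (n bonds on a ring).
n, K = 5, 1.3
assert abs(potts_z(n, K) - exp(n * K / 2) * ising_ring_z(n, K / 2)) < 1e-9
```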
And what happens is that basically the adsorbed particles would order by occupying one of three equivalent sublattices. So the way that that order got destroyed was then described by the q equals three Potts universality class. You can think of something like q equals four that would have the symmetry of a tetrahedron. And so some structure that is like a tetrahedron getting, let's say, distorted in some particular direction would then have four equivalent directions, et cetera. So there's a whole set of other types of universality classes and symmetry breakings that we did not discuss before. And I just want to emphasize that what we discussed before does not cover all possible symmetry breakings. It was just supposed to show you an important class and the technology to deal with that. But again, in this particular system, let's say you really wanted to know at what temperature the phase transition occurs, as well as what the potential phase diagrams and critical behavior are. And then you would say, well, even if I could construct a statistical field theory and analyze it in two dimensions-- and we've seen how hard it is to go below the upper critical dimension-- it doesn't tell me things about phase diagrams, et cetera. So maybe trying to understand and deal with this lattice model itself would tell us more information, although about quantities that are not necessarily universal. Depending on your microscopic model, you may try to introduce more complicated systems. For example, inspired by quantum mechanics, you can think of something that I'll call a spin S model, in which your s i takes values from minus s, minus s plus 1, all the way to plus s. There are 2s plus 1 possibilities. And you can think of this as components of, say, a quantum spin of s along the z axis. Write some kind of Hamiltonian for this. But as long as you deal with things classically, it turns out that this kind of system will not really have different universality from the Ising model.
So let's say we have this lattice model. Then what can we do? So in the next set of lectures, I will describe some tools for dealing with these models. One set of approaches, the one that we will start today, has to do with the position space renormalization group. The approach that we were following for renormalization previously dealt with going to Fourier space. We had this hypersphere, and then we were basically eliminating modes at the edge of this sphere in Fourier space. We started actually by describing the process in real space. So we will see that in some cases, it is possible to do a renormalization group directly on these lattice models. The second thing is, it turns out that as combinatorial problems, some, but a very small subset, of these models are susceptible to exact solutions. It turns out that practically all models in one dimension, as we will start today, one can solve exactly. But there's one prominent case, which is the two-dimensional Ising model, that one can also solve exactly. And it's a very interesting solution that we will also examine in, I don't know, a couple of weeks. Finally, there are approximate schemes that people have developed for studying these problems, where you have series expansions starting from limits where you know what is happening. And one simple example would be to go to very high temperatures. And at high temperatures, essentially every degree of freedom does what it wants. So it's essentially a zero dimensional problem that you can solve. And then you can start treating interactions perturbatively. So this is kind of similar to the expansions that we had developed in 8.333, the virial expansions, et cetera, about the ideal gas limit. But now done on a system that is a lattice, and going to sufficiently high order that you can say something about the phase transition. There is another extreme. In these systems, typically the zero temperature state is trivial. It is perfectly ordered.
Let's say all the spins are aligned. And then you can start expanding in excitations around that state and see whether eventually, by including more and more excitations, you can see the phase transition out of the ordered state. And something that is actually probably the most common use of these models, but that I won't cover in class, is to put them on the computer and do some kind of a Monte Carlo simulation, which is essentially a numerical way of trying to generate configurations that are governed by this weight-- and, by changing the temperature as it appears in that weight, to see whether or not one can, in the simulation, see the phase transition happen. So that's the change in perspective that I want you to have. So the first thing that we're going to do is number one here: the position space renormalization group of the one dimensional Ising model. And the procedure that I describe for you is sufficiently general that in fact you can apply it to any other one dimensional model, as long as you only have these nearest neighbor interactions. So here you have a lattice that is one dimensional. So you have a set of sites: one, two; at some point, you have i minus one, i, i plus one. Let's say we call the last one n. So there are n sites. There are going to be 2 to the n possible configurations. And your task, given that at each site there's a variable that is binary, is to calculate a partition function, which is a sum over all these 2 to the n configurations of a weight that is e to the sum over i of B, the interaction that couples s i and s i plus 1. Maybe I should have called the energy E hat; B is the thing that has minus beta absorbed in it. So notice that basically, the way that I have written it, one is interacting with two, i minus one is interacting with i, i is interacting with i plus one. So I wrote the nearest neighbor interaction in this particular fashion.
We may or may not worry about the last spin-- whether I want to finish the series here, or sometimes I will use periodic boundary conditions and bring it back and couple it to the first one, so that I have a ring. So that's another possibility. It doesn't really matter all that much at this stage. So this runs for i going from one to n. There are n degrees of freedom. Now, renormalization group is a procedure by which I get rid of some degrees of freedom. So previously, I have emphasized that what we did was some kind of an averaging. So we said, let's say I could do some averaging of three sites and call some spin a representative of those three. Let's say that we want to do an RG by a factor of b equals two. So then maybe you say that I will pick a new sigma i prime to be sigma i plus sigma i plus 1, over 2-- doing some kind of an average. The problem with this choice is that if the two spins are both pluses, I will get plus. If they're both minuses, I will get minus. If there is one plus and one minus, I will get zero. Why that is not nice is that it changes the structure of the theory. So I started with binary variables. I do this rescaling. If I choose this scheme, I will have three possible values per site. But I can insist upon keeping two values per site, as long as I do everything consistently and precisely. So maybe I can say that when this occurs, where the two spins are different, and the average would be zero, I choose as tiebreaker the left one. So then I will have plus or minus. Now you can convince yourself that if I do this, and I choose always the left one as tiebreaker, the story is the same as just keeping the left one. So essentially, this kind of averaging with a tiebreaker is equivalent to getting rid of every other spin. And so essentially what I can do is to say that I call a sigma i prime. Now, in the new listing that I have, this thing is no longer, let's say, the tenth one.
It becomes the fifth one because I removed half of the things. So sigma i prime is really sigma 2i minus 1. So basically, all the odd ones I will call my new spins. All the even ones I want to get rid of, I'll call them s i. So this is just a renaming of the variables. I did some handwaving to justify it. Effectively, all I did was break this sum into two sets of sums, over what I call sigma i prime and s i. And in each one of them, rather than running from 1 to n, the index i in this new set runs from 1 to n over 2. So what I have said, again, is very trivial. I've said that the original sum I write as a sum over the odd spins, whose names I have changed, and a sum over even spins, whose names I have called s i. And I have an interaction, which I can write as a sum over i, essentially running from 1 to n over 2. I start with the interaction that involves sigma i prime with s i, because now each sigma i prime is interacting with an s on one side. And then there's another interaction, which is s i and the next one. So essentially, I renamed things, and I regrouped bonds. And the sum that was n terms is now n over 2 pairs of terms. Nothing significant. But the point is that over here, I can rewrite this as a sum over sigma i prime. And this is a product over terms where, within each term, I can sum over the spin that is sitting between two spins that I'm keeping. So I'm getting rid of this spin that sits between spin sigma i prime and sigma i plus 1 prime. Now, once I perform this sum over s i here, then what I will get is some function that depends on sigma i prime and sigma i plus 1 prime. And I can choose to write that function as e to the B prime of sigma i prime, sigma i plus 1 prime. And hence, the partition function after removing every other spin is the same as the partition function that I have for the remaining spins weighted with this B prime.
So you can see that I took the original partition function and recast it in precisely the same form after removing half of the degrees of freedom. Now, the original B for the Ising model is parameterized by g, h, and K. So I can similarly parameterize this B prime by g prime, h prime, K prime. And how do I know that? Because when I was writing this, I emphasized that this is the most general form that I can write down. There is nothing else other than this form that I can write down for this. So what I have essentially is that this e to the B prime, which is e to the g prime, plus h prime times sigma i prime plus sigma i plus 1 prime, plus K prime sigma i prime sigma i plus 1 prime, and involves these three parameters, is obtained by summing over s i-- let me just call it s, being minus or plus 1-- of e to the g, plus h times sigma i prime plus s, plus K sigma i prime s, plus g, plus h times s plus sigma i plus 1 prime, plus K s sigma i plus 1 prime. So it's an implicit equation that relates g prime, h prime, K prime to g, h, and K. And in particular, just to make the writing of this thing more clearly explicit, I will give names. I will call e to the K, x; e to the h, y; e to the g, z. And here, similarly, x prime is e to the K prime, y prime is e to the h prime, and z prime is e to the g prime. So now I just have to make a table. I have sigma i prime, sigma i plus 1 prime. And here also I can have the values of s. And the simplest possibility here is that I have plus plus. Actually, let's put this one further out. So if both the sigma primes are plus, what do I have on the left hand side? I have e to the g prime, which is z prime; e to the 2h prime, which is y prime squared; e to the K prime, which is x prime. What do I have on the right hand side? On the right hand side, I have two possible things that I can put.
I can put s to be either plus or minus, and I have to sum over those two possibilities. You can see that in all cases, I have e to the 2g. That's a trivial thing. So I will always have a z squared. Irrespective of s, I have two factors of e to the h sigma prime. OK, you know what happened? Since I was writing things in terms of B per bond, I should've used a factor of h over 2. So I really should have put here an h prime over 2, and I should have put here an h over 2 and an h over 2, because the field that was residing on the sites, I am dividing half of it to the right bond and half of it to the left bond. And since I'm writing things in terms of the bonds, that's how I should go. So what I have here on the left is actually one factor of y prime. That's better. Now, what I have on the right is similarly one factor of y from the h's that go with sigma i prime and sigma i plus 1 prime. And then if s is plus, then you can see that I will get two factors of e to the K, because both bonds will be satisfied-- both bonds will be plus plus. So I will get x squared, and the contribution to the field of the intermediate spin s is going to be one factor of e to the h. So I will get x squared y. Whereas if I put the intermediate spin s to be minus 1, then I have two pluses at the ends and the one in the middle is minus. So I would have two dissatisfied bonds-- factors of e to the K becoming e to the minus K. So it's x to the minus 2. And the field also will give a factor of e to the minus h, or y inverse. So there are four possibilities here. The next one is minus minus. z prime will stay the way it is. The field has switched sign, so this becomes y prime inverse. But since both of them are minus-- minus, minus-- the K factor is satisfied and happy; it gives me x prime, because it's e to the plus K. On the right hand side, I will always get the z squared. The first y factor becomes the inverse. Now, s equal to plus 1 is a plus spin that is sandwiched between two minus spins. So there are two unhappy bonds. This gives me x to the minus 2.
And a y, because the spin is pointing in the direction of the field-- so there's the y here. And here I will get x squared and y inverse, because now I have three negative signs, so all the K's are happy. Then there's the next one, which is plus and minus. For plus and minus, we can see that the contribution of the field vanishes, because sigma i prime plus sigma i plus 1 prime is zero. I will still get z prime; and the bond between plus and minus gives me e to the minus K prime. I have here z squared. There is no overall contribution of y, for the same reason that there was nothing here. But from s, I will get a factor of y plus y inverse. And there's no contribution of x, because I have a plus and a minus, and if the spin in the middle is either plus or minus, there will be one satisfied and one dissatisfied bond. Again, by symmetry, the other configuration is exactly the same. So while I had four configurations, and hence four things to match, the last two are the same. And that's consistent with my having three variables, x prime, y prime, and z prime, to solve for. So there are three equations and three variables. Now, to solve these equations, we can do several things. Let's, for example, multiply all four equations together. What do we get on the left hand side? There are two x's, two inverse x's, a y, and an inverse y, but four factors of z. So I get z prime to the fourth power. On the other side, I will get z to the eighth, and then the product of those four factors. I can divide equation one by equation two. What do I get? The z's cancel; the x's cancel. So I will get y prime squared. On the other side, dividing these two, I will get y squared times x squared y plus x to the minus 2 y inverse, divided by x to the minus 2 y plus x squared y inverse. And finally, to get the equation for x prime, what I can do is multiply equations 1 and 2 and divide by 3 and 4. And if I do that, on the left hand side I will get x prime to the fourth.
And on the right hand side, I will get x squared y plus x to the minus 2 y inverse, times x to the minus 2 y plus x squared y inverse, divided by y plus y inverse, squared. So I can take the log of these equations, if you like, to get the recursion relations for the parameters. For example, taking the log of that equation, you can see that I will get g prime. From here, I would get 2g plus some function-- there's some delta g that depends on K and h. If I do the same thing here, you can see that I will get an h prime that is h plus some function, from the log of this, that depends on K and h. And finally, I will get K prime as some function of K and h. This parameter g is not that important. It's basically an overall constant. The way that we started it, we certainly don't expect it to modify phase diagrams, et cetera. And you can see that it never affects the equations that govern h and K, the two parameters that give the relative weight of different configurations. Whether you are in a ferromagnet or a disordered state is governed by these two parameters. And indeed, we can ignore this. Essentially, what it amounts to is as follows: every time I remove some of the spins, I gradually build a contribution to my free energy. Because clearly, once I have integrated out all of the degrees of freedom, what I will have will be the partition function. Its log would be the free energy. And actually, we encountered exactly the same thing when we were doing momentum space RG. There was always, as a result of the integration, some contribution that I called delta f that I never looked at, because, OK, it's part of the free energy but does not govern the relative weights of the different configurations. So these are really the two things that we need to focus on.
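The three relations just derived can be checked numerically: compute x prime, y prime, z prime from the closed forms, and verify that e to the B prime reproduces the sum over the decimated spin for all four configurations of the retained spins, using the h over 2 per bond convention from the table above. A sketch with my own function names:

```python
from math import exp, log
from itertools import product

def decimate(g, h, K):
    """One b = 2 decimation step of the 1D Ising chain: returns
    (g', h', K') from the closed-form relations derived above."""
    x, y, z = exp(K), exp(h), exp(g)
    A = x ** 2 * y + x ** -2 / y      # right-hand side factor for (+,+)
    B = x ** -2 * y + x ** 2 / y      # right-hand side factor for (-,-)
    C = y + 1 / y                     # factor for (+,-) and (-,+)
    xp = (A * B / C ** 2) ** 0.25
    yp = (y ** 2 * A / B) ** 0.5
    zp = (z ** 8 * A * B * C ** 2) ** 0.25
    return log(zp), log(yp), log(xp)

def bond(g, h, K, s1, s2):
    # B(s1, s2) = g + (h/2)(s1 + s2) + K s1 s2  (field split over bonds)
    return g + 0.5 * h * (s1 + s2) + K * s1 * s2

g, h, K = 0.1, 0.3, 0.7
gp, hp, Kp = decimate(g, h, K)
for s1, s2 in product([+1, -1], repeat=2):
    lhs = exp(bond(gp, hp, Kp, s1, s2))
    rhs = sum(exp(bond(g, h, K, s1, s) + bond(g, h, K, s, s2)) for s in [+1, -1])
    assert abs(lhs - rhs) < 1e-9
```

The assertion holds exactly (up to floating point), confirming that decimation maps the nearest-neighbor Ising chain onto itself with renormalized parameters.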
Now, if we did things correctly and these are correct equations: had I started in the subspace where h equals zero, which has up-down symmetry, I could have changed all of my sigmas to minus sigma, and for h equals zero, the energy would not have changed. So a check that we did things correctly is that if h equals zero, then h prime has to be zero. So let's see. If h equals zero, or y is equal to one-- well, if y is equal to one, you can see that those two factors in the numerator and denominator are exactly the same. And so y prime stays at one. So this check has been performed. So if I am on the h equals zero subspace that has the symmetry, then I have only one parameter, this K, or x. And so on this space, the recursion relation that I have is that x prime to the fourth is x squared plus x to the minus 2, quantity squared. I think I made a mistake here. No, it's correct. It's y plus y inverse, but y is one, so this is divided by 2 squared, which is 4. But let me double check so that I don't go wrong-- yeah, this is correct. So I can write it in terms of x being e to the K. So what I have here is e to the 4K prime equals e to the 2K plus e to the minus 2K over 2-- which is hyperbolic cosine of 2K-- squared. So what I will get is K prime is 1/2 log hyperbolic cosine of 2K. Now, K is a parameter that we are changing. As we make K stronger, presumably things are more coupled to each other and should go more towards order. As K goes towards zero, basically the spins are independent. They can do whatever they like. So it kind of makes sense that there should be a fixed point at zero, corresponding to zero correlation length. And let's check that. So if K is going to zero-- it's very small-- then K prime is 1/2 log of hyperbolic cosine of a small number, which is roughly 1 plus that small number squared over 2. There's a series like that. Log of 1 plus a small number is that small number. So this becomes 4K squared over 2, over 2. It's K squared.
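The small-K behavior and the flow to the trivial fixed point can be seen by iterating the recursion on a computer. A minimal sketch (the function name is my own):

```python
from math import log, cosh

def decimate_K(K):
    # one b = 2 decimation step at zero field: K' = (1/2) log cosh(2K)
    return 0.5 * log(cosh(2 * K))

# For small K the recursion is K' ~ K^2, so any weak coupling
# iterates rapidly toward the K = 0 fixed point.
assert abs(decimate_K(0.01) - 0.01 ** 2) < 1e-6

K = 1.0
for _ in range(8):
    K = decimate_K(K)
assert K < 1e-6  # even K = 1 has flowed essentially to zero
```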
So it says that if I, let's say, start with a K that is 1/10, my K prime would be 1/100, and then 1/10,000. So basically, this is a fixed point that attracts everything to it. Essentially, what it says is you may have some weak amount of coupling; as you go and look at the spins that are further and further apart, the effective coupling between them is going to zero. And the spins that are further apart don't care about each other. They do whatever they like. So that all makes physical sense. Well, we are really interested in this other end. What happens at K going to infinity? We can kind of look at this equation-- or actually, look at this equation, it doesn't matter, but here maybe it's clearer. So I have e to the 4K prime. And this is going to be dominated by this: this is e to the 2K; e to the minus 2K is very small. So it's going to be approximately e to the 4K divided by 4. So what I have out here is that if K is very large, K prime is roughly K, but smaller by an amount that is 1/2 of log 2. So we take two spins that are next to each other, coupled with a very strong K-- let's say a million, 10 million, whatever. And then I look at spins that are twice as far apart. They're still very strongly coupled, but slightly less: it's a million minus half of log 2. So it very, very gradually starts to move away from here. But then as it goes further, it kind of speeds up. What it says is that it is true that at infinity, you have a fixed point. But that fixed point is unstable. And even if you have very strong but finite coupling, because things are finite, as you go further and further out, the spins become less and less correlated. This, again, is another indication of the statement that a one dimensional system will not order. So one thing that you can do to make this look slightly better, since we have the interval from zero to infinity, is to change variables. I can ask, what does tanh of K prime do?
So tanh K prime is e to the 2K prime minus 1, divided by e to the 2K prime plus 1. You write the e to the 2K's in terms of this variable. At the end of the day, with a little bit of algebra, you can convince yourself that the recursion relation for tanh K has a very simple form: the new tanh K is simply the square of the old tanh K. If I plot things as a function of tanh K, which runs between zero and one, that fixed point that was at infinity gets mapped into one, and the flow is always towards here. There is no other fixed point in between. Previously, it wasn't clear from the way that I had written it whether potentially there are other fixed points along the K-axis. If you write it as t prime equals t squared, where t stands for tanh K, clearly the only fixed points are at zero and one. But now this also allows us to ask what happens if you also introduce the field direction, h, in this space. Now, one thing to check is that if you start with K equals zero-- that is, independent spins-- you look at the equation here. K equals 0 corresponds to x equals one. This factor drops out. You see y prime is equal to y. So if you have no coupling, there is no reason why the magnetic field should change. So essentially, this entire line corresponds to fixed points. Every point here is a fixed point. And we can show that, essentially, if you start with field zero, you go here. If you start with different values of the field, you basically go to some place along this line. And all of these flows originate, if you go and plot them, from back here at this fixed point that we identified before at h equals zero. So in order to find out what is happening along that direction, all we need to do is to go and look at x going to infinity. With x going to infinity, you can see from the equation that we have for y that y prime squared is y squared times this fraction.
Clearly, the terms that are proportional to x squared will overwhelm those proportional to x to the minus 2. So I will get a y from the numerator and a y inverse from the denominator. And so this goes like y to the fourth. Which means that in the vicinity of this point, what you have is that h prime is 2h. So this is a relevant direction. And here, you are flowing in this direction. And the combination of these two really justifies the kind of flows that I had shown you before. So essentially, in the one dimensional model, you can start with any microscopic set of K's and h's that you want. As you rescale, you essentially go to independent spins with an effective magnetic field. So let's say we start very close to this fixed point. So we have a very large value of K. I expect that if I have a large value of K, neighboring spins are really close to each other. I have to go very far before, if I have a patch of pluses, I can go over to a patch of minuses. So there's a large correlation length in the vicinity of this point. And that correlation length is a function of our parameters K and h. Now, the point is that the recursion relation that I have for K does not look like the type of recursions that I had before. This one is fine, because I can think of this as my usual h prime is b to the y h times h, where here I'm clearly dealing with b equals two, and so I can read off my y h to be one. But the recursion relation that I have for K is not of that form. But who said that I should choose K as the variable that I put over here? I have been doing all of these manipulations, going from K to x, et cetera. One thing that behaves nicely, you can see, is the variable e to the minus K: e to the minus K prime is square root of 2 times e to the minus K. At K equal to infinity, this variable is zero on both sides of the equation. But if K is large but not infinite, it says that on rescaling, it changes by a factor of root 2.
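Both closed forms used in this discussion can be checked numerically: the change of variables to tanh K is exact for every K, while e to the minus K prime approaching root 2 times e to the minus K is the large-K asymptotics of the same recursion. A quick sketch (the function name is my own):

```python
from math import tanh, log, cosh, exp, sqrt

def step(K):
    # zero-field decimation: K' = (1/2) log cosh(2K)
    return 0.5 * log(cosh(2 * K))

# The change of variables is exact: tanh(K') = tanh(K)^2 for every K,
# since (cosh 2K - 1)/(cosh 2K + 1) = tanh^2 K.
for K in [0.1, 0.5, 1.0, 3.0]:
    assert abs(tanh(step(K)) - tanh(K) ** 2) < 1e-12

# Near the strong-coupling fixed point, e^{-K'} -> sqrt(2) e^{-K}.
K = 6.0
assert abs(exp(-step(K)) / (sqrt(2) * exp(-K)) - 1) < 1e-4
```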
So if I say, well, what is xi as a function of the magnetic field-- rather than writing k, I will write it as e to the minus k. It's just another way of writing things. Well, I know that under one step of RG, all my length scales have shrunk by a factor of two. So this is twice the xi that I should write down with 2h and root 2 e to the minus k. So I just related the correlation length before and after one step of RG, starting from very close to this point. And for these two factors, 2 and root 2, I used the results that I have over there. And this I can keep doing. I can keep iterating this. After l steps of RG, this becomes 2 to the l, times xi of 2 to the l h, and 2 to the l over 2 e to the minus k. I iterate this sufficiently that I have moved away from this fixed point, where everything is very strongly correlated. And so that means I iterate, let's say, until this last argument is something that is of the order of one. Could be 2. Could be 1/2. I really don't care. But the number of iterations, the number of rescalings by a factor of two that I have to do in order to achieve this, is reached when 2 to the l is of the order of e to the 2k. And if I do that, I find that xi should be e to the 2k times some function of h e to the 2k. So let's say I am at h equal to zero. And I make the strength of the interaction between neighbors stronger and stronger. If I ask you, how does the correlation length-- the size of the patches that are all plus or all minus-- change as k becomes stronger and stronger, well, RG tells you that it goes in this fashion. In the problem set that you have, you will solve the problem exactly by a different method and get exactly this form. What else? Well, we said that one of the characteristics of a system is that the free energy has a singular part, as a function of the variables that we have, that is related to the correlation length to the power minus d. In this case, we have d equal to one.
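The prediction xi of order e to the 2k can be checked against the standard exact result for the chain: at h = 0 the pair correlation is (tanh k) to the power r, so xi = -1/ln(tanh k), which for large k approaches e to the 2k divided by 2. A quick numerical comparison:

```python
import math

# Exact zero-field correlation length of the 1D Ising chain versus the
# large-k asymptotic form suggested by the RG argument above.
for k in [1.0, 2.0, 3.0]:
    xi_exact = -1.0 / math.log(math.tanh(k))  # from <s_0 s_r> = (tanh k)^|r|
    xi_rg = 0.5 * math.exp(2 * k)             # large-k asymptote
    print(k, xi_exact, xi_rg)
```

Already at k = 3 the two agree to a fraction of a percent, consistent with the exact solution mentioned for the problem set.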
So the statement is that the singular part of the free energy, as you approach infinite coupling strength, behaves as e to the minus 2k times some other function of h e to the 2k. Once you have a function such as this, you can take two derivatives with respect to the field to get the susceptibility. So the susceptibility goes like two derivatives of the free energy with respect to the field. You can see that each derivative of the argument brings forth a factor of e to the 2k. So two derivatives bring a factor of e to the 4k which, together with the e to the minus 2k out front, leaves a net factor of e to the 2k, if I evaluate it at h equal to zero. So the statement is that if I'm at zero field, and I put on a small infinitesimal magnetic field, it tends to overturn all of the spins in the direction of the field. And the closer you are to k equal to infinity, the larger the response that you see. The susceptibility-- the response-- diverges as k goes to infinity. So in some sense, this model does not have a phase transition, but it demonstrates some signatures of a phase transition, such as a diverging correlation length and a diverging susceptibility as you go to very, very strong nearest neighbor coupling. There is one other relationship that we had. That is, the susceptibility is an integral in d dimensions of the correlation function-- well, actually, let's write it in d dimensions: the integral of d d x of e to the minus x over xi, divided by x to the power d minus 2 plus eta, where we introduced the exponent eta to describe the decay of correlations. So this would generally behave like xi to the 2 minus eta. Now, in this case, we see that both the correlation length and the susceptibility diverge with the same behavior, e to the 2k. They're proportional to each other. So that immediately tells me that for the one dimensional system that we are looking at, eta is equal to one. And if I substitute back here-- eta is one, the dimension is one-- the d minus 2 plus eta is zero; the two cancels. Essentially, it says that the correlations in one dimension have a pure exponential decay.
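The proportionality of chi and xi can also be checked directly: summing the exact pair correlations (tanh k) to the power |r| over all separations gives chi = (1 + tanh k)/(1 - tanh k), which is exactly e to the 2k. A numerical sketch (non-singular prefactors such as beta are set to one here):

```python
import math

def chi_sum(k, rmax=500):
    """Zero-field susceptibility per spin as the sum over pair
    correlations <s_0 s_r> = (tanh k)^|r| of the 1D Ising chain."""
    t = math.tanh(k)
    return 1.0 + 2.0 * sum(t ** r for r in range(1, rmax))

for k in [0.5, 1.0, 2.0]:
    print(k, chi_sum(k), math.exp(2 * k))  # the two columns agree
```

Since xi and chi both grow like e to the 2k, the relation chi of order xi to the 2 minus eta indeed forces eta equal to one in one dimension.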
The correlations don't have the subleading power law that you would have in higher dimensions. So when you do things exactly, you will also be able to verify that. So all of the predictions of this position space RG method, which we can carry out in this one dimensional example very easily, you can also calculate and obtain through the method that is called the transfer matrix, which is the subject of the problem set that was posted yesterday. Also, you can see that this approach will work for any system in one dimension. All I really need to ensure is that the B that I write down is sufficiently general to include all possible interactions that I can write between two spins. Because if I have some subset of those interactions, and then I do this procedure that I have over here and then take the log, there's no reason why that would not generate all terms that are consistent with symmetry. So you really have to put in all possible terms, and then you will get all possible terms here, and there will be a recursion relation that relates them. You can do this very easily, for example, for the Potts model. For continuous spin systems, it becomes more difficult. Now let me say briefly why we can solve things exactly, let's say, for the one dimensional Ising model by this procedure, but cannot do this in higher dimensions. So let's, for example, think that we have a square lattice. Just generalize what we had to two dimensions. And let's say that we want to eliminate this spin and-- well, let's see, what's the best way? Yeah, we want to eliminate a checkerboard of spins. So we want to eliminate half of the spins-- let's say the white squares on a checkerboard. And if I were to eliminate these spins, like I did over here, I should be able to generate interactions among the spins that are left over. So you say, fine. Let's pick these four spins-- sigma 1, sigma 2, sigma 3, sigma 4-- that are connected to this spin, s, that is sitting in the middle, that I have to eliminate.
So let's also stay in the space where h equals zero, just for simplicity. So the result of eliminating that spin is that I have to do a sum over s of e to the-- let's say the original interaction is k, and s is coupled by these bonds to sigma 1, sigma 2, sigma 3, sigma 4. Now, summing over the two possible values of s is very simple. It just gives me e to the k times the sum, plus e to the minus k times that sum. So it's the same thing as 2 hyperbolic cosine of k times sigma 1 plus sigma 2 plus sigma 3 plus sigma 4. We say, good. I had something like that before, and I took a log so that I got the k prime. So maybe I'll do something like a recasting of this: maybe a recasting of this will give me some constant, plus some kind of a k prime times sigma 1 sigma 2 plus sigma 2 sigma 3 plus sigma 3 sigma 4 plus sigma 4 sigma 1. But you immediately see that that cannot be the entire story. Because there is really no ordering among sigma 1, sigma 2, sigma 3, sigma 4. So clearly, because of the symmetries of this, you will also generate sigma 1 sigma 3 plus sigma 2 sigma 4. That is, eliminating this spin will generate for you interactions among these, but also interactions that go like this. And in fact, if you do it carefully, you'll find that you will also generate an interaction that is the product of all four spins. You will generate something that involves all of the four spins. So basically, because of the geometry, et cetera, that you have in higher dimensions, there is no trick analogous to what we could do in one dimension, where you could eliminate some subset of spins and not generate longer and longer range interactions-- interactions that you did not have. You could say, OK, I will start by including all of these interactions, and then I have a larger parameter space. But then when you do the rescaling, you'll find that you need to include further and further neighbor interactions.
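The generation of these new couplings can be made fully explicit. Since ln of 2 cosh of k times the sum depends only on the sum of the four neighbors, symmetry forces a decomposition into a constant, one common coupling J for all six spin pairs (which is why both the nearest-neighbor combinations and the diagonal ones sigma 1 sigma 3 plus sigma 2 sigma 4 appear), and a four-spin term K4. A sketch with a hypothetical k = 1, solving for the coefficients and verifying them on all sixteen neighbor configurations:

```python
import math
from itertools import product

k = 1.0  # hypothetical original nearest-neighbor coupling

def weight(s):
    """ln of the Boltzmann factor left after tracing out the center spin."""
    return math.log(2.0 * math.cosh(k * sum(s)))

# Ansatz: weight = g + J*(sum over all 6 pair products) + K4*(4-spin product).
# Three distinct neighbor configurations fix the three coefficients.
a = weight((1, 1, 1, 1))    # pair sum = 6,  4-spin product = +1
b = weight((1, 1, 1, -1))   # pair sum = 0,  4-spin product = -1
c = weight((1, 1, -1, -1))  # pair sum = -2, 4-spin product = +1
J = (a - c) / 8.0
K4 = (a + 3.0 * c - 4.0 * b) / 8.0
g = b + K4

# Verify the ansatz on every configuration of the four neighbors.
for s in product([1, -1], repeat=4):
    pairs = sum(s[i] * s[j] for i in range(4) for j in range(i + 1, 4))
    four = s[0] * s[1] * s[2] * s[3]
    assert abs(weight(s) - (g + J * pairs + K4 * four)) < 1e-9

print(J, K4)  # K4 is nonzero: a genuine four-spin coupling appears
```

The nonzero K4 is precisely the four-spin interaction mentioned in the lecture; no choice of just pair couplings can absorb it.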
So unless you do some kind of truncation or approximation, which we will do next time, there is no way to do this exactly in higher dimensions. AUDIENCE: Question. PROFESSOR: Yes. AUDIENCE: I mean, are you putting any sort of weight on the fact that, for example, sigma 1 and sigma 3 are farther apart than sigma 1 and sigma 2, or are we using taxi cab distances on this lattice? PROFESSOR: Well, we are thinking of an original model that we would like to solve, in which I have specified that things are coupled only to nearest neighbors. So the ones that correspond to sigma 1, sigma 3 are next nearest neighbors. They're certainly farther apart on the lattice. You could say, well, there's some justification, if these are ions and they have spins, to have some weaker interaction that goes across here. There has to be some notion of space. I don't want to couple everybody to everybody else equivalently. But if I include this, then I have a further, more complicated Hamiltonian. And when I do RG, I will generate an even more complicated Hamiltonian. Yes. AUDIENCE: [INAUDIBLE]. PROFESSOR: The question is: suppose I have a square lattice-- let's go here. And the suggestion is, why don't I eliminate all of the spins over here, and maybe all of the spins over here? So the problem is that I will generate interactions that not only go like this, but interactions that go like this. So the idea of what happens is: imagine there are these spins that you're eliminating. As long as there are paths that connect the spins that you're eliminating to any other spins, you will generate that kind of coupling. Again, the reason that the one dimensional model works is also related to its exact solvability by this transfer matrix method. So I will briefly mention that in the last five minutes. So for one dimensional models, the partition function is a sum over whatever degrees of freedom you have. Could be Ising, Potts, xy-- it doesn't matter.
But the point is that the interaction is a sum over bonds that connect one site to the next site. I can write this as a product of e to the B of s i and s i plus 1. Now, I can regard this entity-- let's say I have the Potts model with q values. This index takes q possible values. This one takes q possible values. So there are q squared possible values of the interaction, and there are q squared possible values of this Boltzmann weight. I can regard that as a matrix T, and write it in this fashion. And so what you have over there, you can see, is effectively s1 T s2, s2 T s3, and so forth. And if I use periodic boundary conditions, like the one that I indicated there, so that the last one is connected to the first one, and then I do a sum over all of these s's, this is just a product of matrices. So this is going to become, when I sum over s2, s1 T squared s3, and so forth. You can see that the end result is the trace of the matrix T raised to the power of N. Now, the trace you can calculate in any representation of the matrix. If you manage to find the representation where T is diagonal, then the trace would be a sum over alpha of lambda alpha to the N, where these lambdas are the eigenvalues of this matrix. And if N is very large-- the thermodynamic limit that we are interested in-- it will be dominated by the largest eigenvalue. Now, if I write this for something like the Potts model or any of the spin models that I had indicated over there, you can see that all elements of this matrix, being these Boltzmann weights, are positive. Now, there's a theorem called Frobenius's theorem, which states that if you have a matrix all of whose elements are positive, then the largest eigenvalue is non-degenerate. So what that means is that if this matrix is characterized by a set of parameters, like our k's, et cetera, and I change that parameter k-- well, the eigenvalue is obtained by looking at a single matrix. It doesn't know anything about N.
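The identity Z equals trace of T to the N is easy to verify numerically for a small Ising ring. A sketch with hypothetical couplings (K = 0.7, h = 0.3, N = 8 spins with periodic boundary conditions), comparing the transfer-matrix trace against the brute-force sum over all 2 to the N configurations:

```python
import math
from itertools import product

K, h, N = 0.7, 0.3, 8  # hypothetical couplings and ring size

# Transfer matrix: T[s][s'] = exp(K s s' + (h/2)(s + s')), with s, s' = +1, -1
spins = (1, -1)
T = [[math.exp(K * s * sp + 0.5 * h * (s + sp)) for sp in spins] for s in spins]

def matmul(A, B):
    return [[sum(A[i][m] * B[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

Tn = T
for _ in range(N - 1):
    Tn = matmul(Tn, T)
Z_transfer = Tn[0][0] + Tn[1][1]  # Z = Tr T^N

# Brute force: sum exp(sum_i [K s_i s_{i+1} + h s_i]) over all configurations
Z_brute = sum(
    math.exp(sum(K * s[i] * s[(i + 1) % N] + h * s[i] for i in range(N)))
    for s in product(spins, repeat=N))

print(Z_transfer, Z_brute)  # the two agree
```

Splitting the field symmetrically as (h/2)(s + s') makes each bond factor multiply up to exactly the ring's Boltzmann weight, which is why the trace reproduces the brute-force sum.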
The only way that the eigenvalue can become singular is if two eigenvalues cross each other. And since Frobenius's theorem does not allow that, you conclude that this largest eigenvalue has to be a perfectly nice analytic function of all of the parameters that go into constructing this Hamiltonian. And that's a mathematical way of saying that there is no phase transition for one dimensional models, because you cannot have a crossing of eigenvalues, and there is no singularity that can take place. Now, an interesting question, or caveat, to that comes from the very question that was asked over here. What if I have, let's say, a two dimensional model, and I regard it essentially as a complicated one dimensional model, in which I have a complicated multi-valued variable along one side, and then I go through this exact same procedure over here also? Then I would have to diagonalize this huge matrix. If the width is l, it would be a 2 to the l by 2 to the l matrix. And you may naively think that, again, according to Frobenius's theorem, there should be no phase transition. Now, this is exactly what Lars Onsager did in order to solve the two dimensional Ising model. He constructed this matrix and was clever enough to diagonalize it, and to show that in the limit of l going to infinity, the conclusion of Frobenius's theorem can and will be violated. And that's something that we will also discuss in some of our future lectures. Yes. AUDIENCE: But it won't be violated in the one dimensional case, even if N goes to infinity? PROFESSOR: Yeah, because N only appears over here. Lambda max is a perfectly analytic function.
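For the 2 by 2 transfer matrix of the Ising chain the eigenvalues are known in closed form, lambda plus or minus equals e to the K cosh h, plus or minus the square root of e to the 2K sinh squared h plus e to the minus 2K, so one can watch the Frobenius gap stay open at any finite coupling (at h = 0 the eigenvalues are 2 cosh K and 2 sinh K, with gap 2 e to the minus K). A small sketch:

```python
import math

def eig_pair(K, h):
    """Eigenvalues of T = [[e^{K+h}, e^{-K}], [e^{-K}, e^{K-h}]]."""
    root = math.sqrt(math.exp(2 * K) * math.sinh(h) ** 2 + math.exp(-2 * K))
    mean = math.exp(K) * math.cosh(h)
    return mean + root, mean - root

for K in [0.5, 2.0, 8.0]:
    lam_max, lam_min = eig_pair(K, 0.0)
    print(K, lam_max - lam_min)  # gap = 2 e^{-K}: positive for any finite K
```

Only in the limit K going to infinity (zero temperature) does the gap close, which is how the chain's singularity at infinite coupling evades the theorem while every finite-temperature point stays analytic.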
MIT 8.334 Statistical Mechanics II, Spring 2014. Lecture 6: The Scaling Hypothesis, Part 1. The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. Let's start. So we have been thinking about critical points. And these arise in many phase diagrams, such as the one we have for the liquid gas system, where there's a coexistence line, let's say, between the gas and the liquid that terminates, or, as we looked at in the case of a magnet, where as a function of [INAUDIBLE] temperature there was in some sense coexistence between magnetizations in different directions, terminating at the critical point. So why is it interesting, out of the whole phase diagram that we have over here-- for example, for this system, we can also have a solid, et cetera-- AUDIENCE: So isn't this [INAUDIBLE] [INAUDIBLE] PROFESSOR: [INAUDIBLE]. Yes. Thank you. --to focus on just this one point, on the vicinity of this one point? And the reason for that was this idea of universality. There are many things that are happening in the vicinity of this point, as far as singularities, correlations, et cetera, are concerned, that are independent of whatever the constituents of the system are. And these singularities we try to capture through some scaling laws. And I've been kind of constructing a table of these singularities. Let's do it one more time here. So we could look at a system such as the liquid gas-- so let's have here, system-- and for that, we can look at a variety of exponents: alpha, beta, gamma, delta, nu, eta. And for the liquid gas, I write you some numbers. The heat capacity diverges with an exponent that is 0.11-- slightly more accurate than I had given you before.
The value for beta is 0.33. For gamma-- OK, I will give you a few more digits, just to indicate the accuracy of experiments-- this is 1.238 plus or minus 0.012. So these exponents are obtained by looking at the fluid system with light scattering-- doing this critical opalescence that we were talking about-- in more detail and accurately. Delta is 4.8. Nu is, again from light scattering, 0.629 plus or minus 0.003. Eta is 0.032 plus or minus 0.013. And essentially these three are from light scattering. Another case that I mentioned is that of the superfluid. And in this general construction of the Landau-Ginzburg theories that we had, the liquid gas would be n equal to one. The superfluid would be n equal to two. And I just want to mention that actually the most experimentally accurate exponent that has been determined is the heat capacity for the superfluid helium transition. I had said that it kind of looks like a logarithmic divergence. You look at it very closely, and it is in fact a cusp, and does not diverge all the way to infinity. So it corresponds to a slightly negative value of alpha, which is minus 0.0127 plus or minus 0.0003. And the way that this has been determined is that they took superfluid helium to the space shuttle, and these experiments were done away from the gravity of the earth, in order not to have to worry about the density difference that we would have across the system. Other exponents that you have for this system-- let me write them down-- beta is around 0.35. Gamma is 1.32. Delta is 4.79. Nu is 0.67. Eta is 0.04. And we don't need more than this for these systems. Now, the point about these exponents is that, within the accuracy of the measurements, these numbers come out the same for all of the systems in each class, and the question of why these numbers are the same across all of these systems is therefore profound.
These are dimensionless numbers. So in some sense, it is a little bit of mathematics. It's not like you calculate the charge of the electron and you get a number. These don't depend on a specific material. Therefore, what is important about them is that they must somehow be capturing some aspect of the collective behavior of all of these degrees of freedom, in which the details of what the degrees of freedom are is not that important. Maybe the type of symmetry breaking is important. So unless we understand and derive these numbers, there is something important about the collective behavior of many degrees of freedom that we have not understood. And it is somehow a different question from the ones you usually ask about phase transitions. So let's say you're thinking about superconductors. There's a lot of interest in making high temperature superconductors, pushing Tc further and further up. So that's certainly a materials problem. We are asking a different question. Why is it that, whether you have a high temperature superconductor or any other type of system, the collective behavior is captured by the same set of exponents? So in an attempt to try to answer that, we did this Landau-Ginzburg theory and tried to calculate its singular behavior using the saddle point approximation. And the numbers that we got: alpha was 0, meaning that there was a discontinuity; beta was 1/2; gamma was 1; delta was 3; nu was 1/2; eta was 0-- which don't quite match these numbers that we have up there. So the question is, what should we do? We've made an attempt, and that attempt was not successful. So we are going to completely forget about that for a while, and try to approach the problem from a different perspective, and see how far we can go, whether we can gain any new insights. So that new approach goes under the name of the scaling hypothesis. And the reason for that name will become apparent shortly.
So what we have in common in both of these examples is that there is a line, across which there are discontinuities in some thermodynamic function, that terminates at a particular point. And in the case of the magnetic system, we can look at the singularities approaching that point either along the direction that corresponds to changing the temperature-- and parametrize that through t-- or we can change the magnetic field and approach the problem from the other direction. And we saw that there were analogs of doing so in the liquid gas system also. And in particular, let's say we calculated the magnetization: we found that there was one form of singularity coming this way, and another form of singularity coming that way. But look at the picture for the liquid gas system that I have up there: it's not necessarily clear which direction would correspond to this nice symmetry breaking or non-symmetry breaking that you have for the magnetic system. So you may well ask: suppose I approach the critical point along some other direction. Maybe I come in along a path such as this. I still go to the critical point. We can imagine doing that for the liquid gas system. And what's the structure of the singularities? I know that there are different singularities in the t and h directions. What happens if I approach the critical point along a different direction, which we may well do for a liquid gas system? Well, we can actually answer that if we go back to our saddle point approximation. In the saddle point approximation, we said that, ultimately, the singularities in terms of these two parameters t and h are obtained by minimizing the function that was appearing in the exponent of our expansion. There was a t over 2 m squared. There was a u m to the fourth. And there was an h m. So we had to minimize this with respect to m. And clearly, what that gives us is m.
If I really solve the equation that corresponds to this minimization, I get m as a function of t and h. And in particular, approaching along the two directions indicated: if I'm along the direction where h equals 0, I essentially balance these two terms. Let's just write this as a proportionality; I don't really care about the numbers. Along the direction where h equals 0, I have to balance m to the fourth and t m squared. So m squared will scale like t; m will scale like the square root of t. And more precisely, we calculated this formula for t negative and h equal to 0. If I, on the other hand, come along the direction that corresponds to t equal to 0, along that direction I don't have the first term. I have to balance u m to the fourth and h m. So we immediately see that m will scale like h over u-- in fact, more correctly, h over 4u to the power of one third. You substitute this in the free energy, and you find that the singular part of the free energy as a function of t and h, in this saddle point approximation, has the following forms, up to proportionality. If I substitute this in the formula for t negative, I will get something like minus t squared over u-- forget about the numbers. If I go along the t equal to 0 direction and substitute that over there, I will get m to the fourth; I will get h to the four thirds divided by u to the one third. Even the u dependence I'm not interested in. I'm really interested in the behavior close to the critical point as a function of t and h; u is basically some non-universal number that doesn't go to zero. I can in fact capture these two expressions by a form that is t squared times some function-- let's call it g sub f, which is a function of-- let's see how I define it-- h over t to the delta. So my claim is that this captures the behavior coming along these two different special directions.
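These two limiting behaviors of the saddle point magnetization are easy to confirm numerically. The sketch below, with a hypothetical quartic coupling u = 1, minimizes psi of m equals t over 2 m squared plus u m to the fourth minus h m by a simple ternary search (psi is unimodal on the search interval for these parameters):

```python
u = 1.0  # hypothetical quartic coupling

def m_min(t, h, lo=0.0, hi=5.0, iters=200):
    """Minimize psi(m) = t/2 m^2 + u m^4 - h m over m >= 0 by ternary
    search; psi is unimodal on [lo, hi] for the parameters used here."""
    psi = lambda m: 0.5 * t * m * m + u * m ** 4 - h * m
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if psi(m1) < psi(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

t = -1e-4
print(m_min(t, 0.0), (-t / (4 * u)) ** 0.5)     # h = 0: m = sqrt(-t/4u)
h = 1e-4
print(m_min(0.0, h), (h / (4 * u)) ** (1 / 3))  # t = 0: m = (h/4u)^(1/3)
```

In each case the numerical minimizer lands on the quoted closed form: the square root behavior along the coexistence line, and the one-third power along the critical isotherm.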
In general, anywhere else, where t and h are both nonzero, the answer for m will be some solution of a cubic equation, but we can arrange it to be only a function of h over t to the delta, and to have this form. Now, rather than explicitly showing you how that arises-- which is not difficult; you can do it-- I'll show it in the following manner, since it's something that we need to do later on anyway. I have not specified what this function g sub f is. But I know its behavior along h equals 0 here. And so if I put h equal to 0, the argument of the function goes to 0. So if I say that in that limit the function goes to a constant-- the constant, let's say, is minus 1 over u on one side, and 0 on the other side-- then everything is fine. So what I have is that the limit of g f, as its argument goes to 0, should be some constant. Well, what about the other direction? How can I reproduce from a form such as this the behavior when t equals 0? Because I see that when t equals 0, the answer of course cannot depend on t itself, but is a power law as a function of h. Is that consistent with this form? Well, as t goes to 0 in this form, the denominator of the argument goes to 0, so the argument of the function goes to infinity; I need to know something about the behavior of the function at infinity. So let's say that the limiting behavior, as the argument goes to infinity, of g f of x is proportional to the argument raised to some power p. And I don't yet know what that power is. Then if I look at the whole function in this limit where t goes to 0, it will behave as follows. There's a t squared out front that goes to 0; the argument of the function goes to infinity; so the function will go like its argument to the power p. So I go like h over t to the delta, raised to the power p. So what do I know? I know that the answer should really be proportional to h to the four thirds. So I immediately know that my p should be four thirds. But what about this delta? I never told you what delta was.
Now I can figure out what delta is, because the answer should not depend on t. t has gone to 0. And so what power of t do I have? I have t to the 2 minus delta p, and that exponent should be 0. So my delta should be 2 over p-- 2 divided by four thirds-- so it should be three halves. Why is this exponent relevant to the question that I had before? You can see that the function that describes the free energy as a function of these two coordinates, when h and t are both nonzero, depends very much on this combination h divided by t to the delta, and that delta is three halves. So, for example, if I were to draw here curves where h goes like t to the three halves-- times some coefficient; I don't know what that coefficient is-- then essentially, everything that is on the side that hugs the vertical axis behaves like the h singularity. Everything that is over here behaves like the t singularity. So consider a path that, for example, comes in along a straight line. If I, let's say, call the distance to the critical point s, then t is something like s cosine theta, and h is something like s sine theta. You can see, however, that the combination h over t to the delta will diverge as s goes to 0, because the s to the three halves downstairs will overcome the linear power of s upstairs. So for any linear path that goes through the critical point, eventually, for small s, I will see the type of singularity that is characteristic of the magnetic field-- if the exponents are according to the saddle point. We have made this assumption, of course. But if I knew the correct delta for all of those systems, I would also be able to answer, let's say for the liquid gas, whether, if I take a linear path that goes through the critical point, I would see one set of singularities or the other. So this delta, which is called the gap exponent, gives you the answer to that. But of course I don't know the other exponents.
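Both pieces of this argument can be checked numerically from the saddle point free energy: the collapse of psi min over t squared onto a function of h over t to the three halves alone, and the growth of that scaling variable like s to the minus one half along any straight path. A sketch, with hypothetical choices u = 1 and path angle theta = 0.3:

```python
import math

def psi_min(t, h, u=1.0):
    """Minimum over m of psi = t/2 m^2 + u m^4 - h m, on a fine grid."""
    return min(0.5 * t * (i * 1e-5) ** 2 + u * (i * 1e-5) ** 4 - h * (i * 1e-5)
               for i in range(-100000, 100001))

# 1) Collapse: psi_min / t^2 depends only on the combination h / t^(3/2).
x = 1.0  # fixed value of the scaling variable
collapsed = [psi_min(t, x * t ** 1.5) / t ** 2 for t in (1e-2, 4e-2)]
print(collapsed)  # the two entries agree

# 2) Along a straight path t = s cos(theta), h = s sin(theta), the scaling
#    variable h / t^(3/2) grows like s^(-1/2) as s -> 0.
theta = 0.3
ratios = [s * math.sin(theta) / (s * math.cos(theta)) ** 1.5
          for s in (1e-2, 1e-4, 1e-6)]
print(ratios)
```

The collapsed value at fixed h over t to the three halves comes out the same for both temperatures, and each factor of 100 decrease in s multiplies the scaling variable by 10, so close enough to the critical point any straight path falls on the h-singularity side.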
There is no reason for me to trust the gap exponent that I obtained in this fashion. So what I say is: let's assume that for any critical point, the singular part of the free energy on approaching the critical point, which depends on this pair of coordinates, has a singular behavior that is similar to what we had over here, except that I don't know the exponents. So rather than putting t squared, I write t to the 2 minus alpha-- for a reason that will become apparent shortly-- times some function of h over t to the delta, for some alpha and delta. So this is certainly already an assumption. This mathematically corresponds to having homogeneous functions. Because if I have a function of x and y, I can certainly write lots of functions-- such as x squared plus y squared plus a constant plus x cubed y cubed-- that I cannot rearrange into this form. But there are certain functions of x and y that I can rearrange so that I can pull out some factor of, let's say, x squared out front, and everything that is then left in the series is a function of, let's say, y over x cubed-- something like that. So there's some class of functions of two arguments that have this homogeneity. We are going to assume that the singular behavior close to the critical point is described by such a function. That's an assumption. But having made that assumption, let's follow its consequences, and let's see if we learn something about that table of exponents. Now, the first thing to note is that I clearly chose this alpha over here so that when I take two derivatives with respect to t, I would get something like a heat capacity, for which I know what the divergence is-- the divergence characterized by alpha. But one thing that I have to show you is that when I take a derivative of one of these homogeneous functions with respect to one of its arguments, I will generate another homogeneous function.
If I take one derivative with respect to t, that derivative can either act on the t to the 2 minus alpha, leaving the function unchanged, or it can act on the argument of the function: that gives me t to the 2 minus alpha, times minus h over t to the delta plus 1, times a factor of delta, times the derivative of the function evaluated at h over t to the delta. So I just took derivatives. I can certainly pull out a factor of t to the 1 minus alpha. Then the first term is just 2 minus alpha times the original function. The second term is minus delta times h over t to the delta, times the derivative function-- because I pulled out the t to the 1 minus alpha, that t takes care of the remaining power. So this is a completely different function. It's not the derivative of the original function. But whatever it is, it is still only a function of the combination h over t to the delta. So the derivative of a homogeneous function is some other homogeneous function-- let's call it g1 of h over t to the delta. And the same will happen if I take a second derivative. So I know that if I take two derivatives, I will get t to the minus alpha-- I basically drop two factors of t-- times some other function of h over t to the delta. Clearly, again, if I say that I'm looking at the line where h equals 0 for a magnet, then the argument of the function goes to 0. If I say that the function, as its argument goes to 0, is a constant, like we had over here, then I will have the singularity t to the minus alpha. So whatever the value of alpha is in this table, I can clearly put it over here, and I have the right singularity for the heat capacity. Essentially I've put it there by hand. Let me comment on one other thing, which is that when we are looking at just the temperature-- let's say we are looking at something like a superfluid-- the only parameter that we have at our disposal is the temperature, in terms of t, the distance to Tc.
Let's say we plot the heat capacity, and then we see a divergence of the heat capacity on the two sides. Who said that I should have the same exponent on this side and on this side? So in general, in principle, there is no problem with having two different exponents. If there is a function that has one behavior here and another behavior there, who says that the two exponents have to be the same? But I have said something more. I have said that in all of the cases that I'm looking at, I know that there is some other axis. And for example, if I am in the liquid gas system, I can start from down here and go all the way around to back here without encountering a singularity. I can go from the liquid all the way to the gas without encountering a singularity. So that says that the system is different from a system that, let's say, has a line of singularities. So if I now take the functions that in principle have two different singularities-- t to the minus alpha minus and t to the minus alpha plus on the h equals 0 axis-- and try to extend them into the entire space by putting these homogeneous functions in front of them, there is one and only one way in which the two functions can match exactly on this t equals 0 line, and that's if the two exponents are the same and you are dealing with the same function. So there we put in a bit of physics. In principle, mathematically, if you don't have the h axis and you look at the one line and there's a singularity, there's no reason why the two singularities should be the same. But we know that we are looking at a class of physical systems where there is the possibility to go analytically from one side to the other side. And that immediately imposes the constraint that alpha plus should be alpha minus, and one alpha is in fact sufficient. And I gave you the physical reason for why that is. If you want to see the precise mathematical details step by step, that's in the notes. So fine.
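The earlier claim-- that the t-derivative of a homogeneous function is again homogeneous-- can also be checked numerically with any smooth test function g. In the sketch below, g of x equals 1 over 1 plus x squared and the exponent values are arbitrary hypothetical choices; the combination t to the alpha minus 1 times df/dt should depend on t and h only through the ratio h over t to the delta:

```python
# For f(t, h) = t^(2-alpha) * g(h / t^Delta) with a smooth test function g,
# the combination t^(alpha-1) * df/dt is a function of x = h / t^Delta alone.
alpha, Delta = 0.11, 1.56  # hypothetical exponent values

def f(t, h):
    x = h / t ** Delta
    return t ** (2 - alpha) / (1.0 + x * x)  # g(x) = 1/(1+x^2), arbitrary

def scaled_dfdt(t, h, eps=1e-7):
    dfdt = (f(t + eps, h) - f(t - eps, h)) / (2 * eps)  # central difference
    return t ** (alpha - 1) * dfdt

x = 0.7  # fixed scaling variable
print(scaled_dfdt(0.1, x * 0.1 ** Delta), scaled_dfdt(0.3, x * 0.3 ** Delta))
```

The two printed values agree, matching the analytic form (2 minus alpha) g of x minus delta x g prime of x, which is indeed a function of x only.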
So far we haven't learned much. We've justified why the two alphas should be the same above and below, but we put the alpha, the one alpha, in by hand. And then we have this unknown gap exponent delta also. But let's proceed. Let's see what other consequences emerge, because now we have a function of two variables. I took derivatives with respect to t. I can take derivatives with respect to h. And in particular, the magnetization m as a function of t and h is obtained from a derivative of the free energy with respect to h. There could be some prefactor of beta or whatever; it's not important. The singular part will come from this. And so taking a derivative of this function, I will get t to the 2 minus alpha. The derivative with respect to h-- but h comes in the combination h over t to the delta-- will bring down a factor of t to the minus delta up front. Then the derivative of the function-- let's call it gf1, for example. So now I can look at this function in the limit where h goes to 0-- that is, along the coexistence line, h goes to 0. The argument of the function has gone to 0. It makes sense that the function should be a constant when its argument goes to 0. So the answer is going to be proportional to t to the 2 minus alpha minus delta. But that's how beta was defined. So if I know my beta and alpha, then I can calculate my gap exponent delta from this exponent identity. Again, so far we haven't done much. We have traded two unknown exponents for this singular form with this gap exponent that we don't know. I can also look at the other limit where t goes to 0, that is, calculating the magnetization along the critical isotherm. So then the argument of the function has gone to infinity. And whatever the answer is should not depend on t, because I have said t goes to 0. So I apply the same trick that I did over here. I say that when the argument goes to infinity, the function goes like some power of its argument.
And clearly I have to choose that power such that the t dependence-- since t is going to 0-- I have to get rid of it. The only way that I can do that is if p is 2 minus alpha minus delta divided by delta. So having done that, the whole thing will then be a function of h to the p. But the shape of the magnetization along the critical isotherm, which was also the shape of the isotherm of the liquid gas system, we were characterizing by an exponent that we were calling 1 over delta. So we now have a formula that says my delta should in fact be the inverse of p. It should be delta over 2 minus alpha minus delta. Yes? AUDIENCE: Why isn't the exponent p minus 1 after you've differentiated [INAUDIBLE]? Because g originally was defined as [INAUDIBLE]. PROFESSOR: Let's call it p prime. Because actually, you're right. If this is the same g, then this has a particular singularity [INAUDIBLE]. But at the end of the day, it doesn't matter. So now I have gained something that I didn't have before. That is, if I know alpha and beta, my two exponents, I'm able to figure out what delta is. And actually I can also figure out what gamma is, because gamma describes the divergence of the susceptibility. The susceptibility, which is the derivative of magnetization with respect to field-- I have to take another derivative of this function. Taking another derivative with respect to h will bring down another factor of t to the minus delta. So this becomes t to the 2 minus alpha minus 2 delta, times some other double-derivative function of h over t to the delta. And susceptibilities-- we are typically interested in the limit where the field goes to 0. And we define them to diverge with exponent gamma. So we have identified gamma to be 2 delta plus alpha minus 2. So we have learned something. Let's summarize it. So the consequences-- one is we established the same critical exponents above and below.
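The manipulations just described can be sanity-checked numerically. This is only a sketch with a made-up scaling function and made-up exponent values (nothing here is from the lecture's table): differentiating the homogeneous ansatz f(t, h) = t^(2-alpha) g(h/t^Delta) with respect to h and setting h to 0 should give a magnetization scaling as t^(2-alpha-Delta).

```python
# Hedged sketch: toy check that m = -df/dh at h -> 0 goes as t^(2-alpha-Delta)
# for a homogeneous free energy. The scaling function g and the exponent
# values are arbitrary illustrations, not the lecture's numbers.
alpha, Delta = 0.1, 1.6

def g(x):                        # toy regular scaling function, g'(0) = 1
    return 1.0 + x + 0.5 * x**2

def f_sing(t, h):                # homogeneous ansatz f = t^(2-alpha) g(h/t^Delta)
    return t**(2 - alpha) * g(h / t**Delta)

def m(t, eps=1e-8):              # magnetization by symmetric finite difference
    return -(f_sing(t, eps) - f_sing(t, -eps)) / (2 * eps)

# The ratio m(t) / t^(2-alpha-Delta) should be constant (equal to -g'(0) = -1).
ratios = [m(t) / t**(2 - alpha - Delta) for t in (0.1, 0.01, 0.001)]
print(ratios)
```

Running this gives three ratios all very close to -1, illustrating that the h-derivative of the homogeneous form has the single power law t^(2-alpha-Delta) along the coexistence line.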
Now since various quantities of interest are obtained by taking derivatives of our homogeneous function, and they turn into homogeneous functions, we conclude that all quantities are homogeneous functions of the same combination h over t to the delta. The same delta governs it. And thirdly, once we make this ansatz our assumption for the free energy, we can calculate the other exponents on the table. So all, or almost all, other exponents are related to two, in this case alpha and delta. Which means that if you have a number of different exponents that all depend on two, there should be some identities, exponent identities. These numbers in the table, we predict, if all of this is valid, should have some relationships among them. So let's show a couple of these relationships. So let's look at the combination alpha plus 2 beta plus gamma. Measurement of heat capacity, magnetization, susceptibility. Three different things. So alpha is alpha. My beta up there is 2 minus alpha minus delta. My gamma is 2 delta plus alpha minus 2. We do the algebra. There's one alpha, minus 2 alpha, plus alpha-- the alphas cancel. Minus 2 deltas plus 2 deltas-- that does cancel. I have 2 times 2 minus 2, so that's 2. So the prediction is that you take some line on the table, add alpha plus 2 beta plus gamma, and they should add up to 2. So let's pick something. Let's pick a first-- actually, let's pick the last line that has a negative alpha. So let's do n equals 3. For n equals 3 I have alpha, which is minus 0.12. I have twice beta-- beta is 0.37, so that becomes 0.74. And then I have gamma, which is 1.39. Adding them up: minus 0.12 plus 0.74 plus 1.39 is 2.01. Not bad. Now this goes by the name of the Rushbrooke identity. Rushbrooke made a simple manipulation based on thermodynamics and obtained a relationship like this. Let's do another one.
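The arithmetic above takes two lines to check. The exponent values are the n = 3 numbers quoted in the lecture; the identity being tested is alpha + 2 beta + gamma = 2.

```python
# Check of the Rushbrooke identity alpha + 2*beta + gamma = 2, using the
# n = 3 exponent values quoted in the lecture.
alpha, beta, gamma = -0.12, 0.37, 1.39
total = alpha + 2 * beta + gamma
print(total)   # ~2.01, close to the predicted 2
```

The measured exponents come from three entirely different experiments (heat capacity, magnetization, susceptibility), so their summing to 2 within a percent is real evidence for the homogeneity assumption.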
Let's take delta and subtract 1 from it. What is my delta? I have delta over 2 plus alpha minus delta. This is small delta versus big delta. And then I have minus 1. Taking that into the numerator with the common denominator of 2 plus alpha minus delta, this minus delta becomes plus delta, so this becomes 2 delta minus alpha minus 2. AUDIENCE: Should that be a minus alpha in the denominator? PROFESSOR: It better be. Yes. So the numerator is 2 delta plus alpha minus 2, over 2 minus alpha minus delta. Then we can read off the gamma and the beta. So this is gamma over beta. And let's check this, let's say for n equals 2. No, let's check it for n equals 1, for the following reason, that for n equals 1, what we have for delta is 4.8, minus 1, which would be 3.8. And on the other side, we have gamma over beta. Gamma is 1.24, roughly, divided by beta, 0.33, which is roughly one third. So I multiply this by 3. And that becomes 3.72. This one is named after another famous physicist, Ben Widom, as the Widom identity. So that's nice. We can start learning that although we don't know anything about this table, these are not independent numbers. There are relationships between them. And they're named after famous physicists. Yes? AUDIENCE: Can we briefly go over again what extra assumption we had put in to get these in and these out? Is it just that we have this homogeneous function [INAUDIBLE]? PROFESSOR: That's right. So you assume that the singularity in the vicinity of the critical point, as a function of deviations from that critical point, can be expressed as a homogeneous function. The homogeneous function, you can rearrange any way you like. One nice way to rearrange it is in this fashion. It will depend, the homogeneous function, on two exponents. I chose to write it as 2 minus alpha so that one of the exponents would immediately be alpha. The other one I couldn't immediately write in terms of beta or gamma. I had to do these manipulations to find out what the relationship is [INAUDIBLE]. But the physics of it is simple.
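The Widom identity check done on the board can be repeated the same way. These are the n = 1 (Ising-like) values quoted in the lecture; the identity is delta - 1 = gamma/beta.

```python
# Check of the Widom identity delta - 1 = gamma/beta, using the n = 1
# exponent values quoted in the lecture (delta = 4.8, gamma = 1.24, beta = 0.33).
delta, gamma, beta = 4.8, 1.24, 0.33
print(delta - 1, gamma / beta)   # ~3.8 versus ~3.76
```

The two sides agree to within the experimental uncertainty of the quoted exponents.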
That is, once you know the singularity of a free energy, various other quantities you obtain by taking derivatives of the free energy. That's [INAUDIBLE] And so then you would have the singular behavior of [INAUDIBLE]. So I started by saying that all other exponents-- but then I realized we have nothing so far that tells us anything about mu and eta. Because mu and eta relate to correlations. They are microscopic quantities. Alpha, beta, gamma depend on macroscopic thermodynamic quantities-- magnetization, susceptibility. So there's no way that I will be able to get information-- almost. No easy way or no direct way to get information about mu and eta. So I will go to assumption 2.0. Go to the next version of the homogeneity assumption, which is to emphasize that we certainly know, again from physics and the relationship between susceptibility and correlations, that the reason for the divergence of the susceptibility is that the correlations become large. So we'll emphasize that. So let's write our ansatz not about the free energy, but about the correlation length. So let's replace that ansatz with homogeneity of the correlation length. So once more, we have a structure where there is a line that terminates when two parameters, t and h, go to 0. And we know that on approaching this point, the system will become cloudy. There's a correlation length that diverges on approaching that point, a function of these two arguments. I'm going to make the same homogeneity assumption for the correlation length. And again, this is an assumption. I say that this is t to the minus mu. The exponent mu governs the divergence of the correlation length. Times some other function-- it's not that first g that we wrote; let's call it g sub xi of h over t to the delta. So we never discussed it, but this function immediately also tells me, if you approach the critical point along the critical isotherm, how does the correlation length diverge-- through the various tricks that we have discussed.
But this is going to be telling me something more if, from here, I can reproduce my scaling assumption 1.0. So there is one other step that I can make. Assume the divergence of xi is responsible-- let's call it even solely responsible-- for the singular behavior. And you say, what does all of this mean? So let's say that I have a system-- could be my magnet, could be my liquid gas-- that has size L on each side. And I calculate the partition function, log Z. Log Z will certainly have the part that is regular-- let's say the contribution of phonons, all kinds of other regular things that don't have anything to do with the singularity of the system. Those things will give you some regular function. But one thing that I know for sure is that the answer is going to be extensive. If I have any nice thermodynamic system and I am in d dimensions, then it will be proportional to the volume of the system that I have. Now the way that I have written it is not entirely nice, because log Z is-- a log is a dimensionless quantity. Maybe I measured my length in meters or centimeters or whatever, so I have dimensions here. So it makes sense to pick some length scale to nondimensionalize it before multiplying it by some kind of a regular function of whatever I have, t and h, for example. But what about the singular part? For the singular part, the statement was that somehow it was a collective behavior. It involved many, many degrees of freedom. We saw for the heat capacity of the solid at low temperatures, it came from long wavelength degrees of freedom. So no lattice parameter is going to be important. So one thing that I could do, maintaining extensivity, is to write the singular part as L over xi to the d, times something. So that's the only thing that I did to ensure that extensivity is maintained: here I have a kind of benign length scale, but in addition a length scale that is divergent.
Now you can see that immediately that says that log Z singular, as a function of t and h, will be proportional to xi to the minus d. And using that formula, it will be proportional to t to the d mu, times some other scaling function-- let's go back and call it g sub f of h over t to the delta. Physically, what it's saying is that when I am very close, but not quite at the critical point, I have a long correlation length, much larger than the microscopic length scale of my system. So what I can say is that within a correlation length, my degrees of freedom-- for magnetization or whatever it is-- are very much coupled to each other. So maybe what I can do is I can regard each such region as an independent block. And how many independent blocks do I have? It is L over xi to the d. So the statement, roughly, as part of the assumption, is that this correlation length is getting bigger and bigger. Because things are correlated, the number of independent degrees of freedom that you have gets smaller and smaller. And that changing number of degrees of freedom is responsible for the singular behavior of the free energy. If I make this assumption about how this correlation length diverges, then I will get this form. So now my ansatz 2.0 matches my ansatz 1.0 provided d mu is 2 minus alpha. So I have d mu equals 2 minus alpha, which is named after Brian Josephson-- so this is the Josephson relation. And it is different from the other exponent identities that we have because it explicitly depends on the dimensionality of space; d appears in the problem. It's called hyperscaling for that reason. Yes? AUDIENCE: So for the assumption that the divergence in xi is solely responsible for the singular behavior, what are we excluding when we assume that? What else could happen that would make that not true? PROFESSOR: Well, what is appearing here maybe would have some singular function of t and h.
AUDIENCE: So is this similar to what we were assuming before, when we said that our free energy could have some regular part that depends on [INAUDIBLE] the part that [INAUDIBLE]. PROFESSOR: Yes, exactly. But once again, the test is really whether or not this matches up with experiments. So let's, for example, pick anything in that table-- d equals 3. Let's pick n equals 2, which we haven't done so far. And so what the formula would say is 3 times mu-- mu for the superfluid is 0.67-- is 2 minus alpha-- well, alpha is almost 0 but slightly negative; it is minus 0.01, so the right-hand side is 2.01. And what do we have? 3 times 0.67 is 2.01. So it matches. Actually, you say, well, why do you emphasize that it's a function of dimension? Well, a little bit later on in the course, we will do an exact solution of the so-called 2D Ising model. So this is a system that corresponds to d equals 2, n equals 1. And it was an important thing that people could actually solve an interacting problem, not in three dimensions but in two. And the exponents for that: alpha is 0, but it really is a logarithmic divergence. Beta is 1/8. Gamma is 7/4, delta is 15, mu is 1, and eta is 1/4. And we can check now, for this d equals 2, n equals 1, that we have 2 times our mu, which is exactly known to be 1-- so 2-- equal to 2 minus alpha, with the logarithmic divergence corresponding to alpha equals 0. So again, there's something that works. One thing that you may want to go and look at is that the ansatz that we made first also works for the result of the saddle point-- not surprisingly, because again in the saddle point we start with a singular free energy and go through all this. But it does not work for this type of scaling, because 2 minus alpha would be 2, which is not equal to d times one half, except in the case of four dimensions. So somehow, this ansatz and this picture break down within the saddle point approximation. If you remember what we did when we calculated fluctuation corrections to the saddle point, we got actually an exponent alpha that was 2 minus d over 2.
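The two hyperscaling checks just done on the board can be collected in one place. The values are the ones quoted in the lecture (note the lecture's "mu" is the correlation-length exponent, usually written nu); the identity is d times nu equals 2 minus alpha.

```python
# Check of the Josephson (hyperscaling) relation d * nu = 2 - alpha for the
# two cases quoted in the lecture: the 3D superfluid (n = 2) and the exactly
# solved 2D Ising model (alpha = 0 there, a logarithmic divergence).
cases = [
    ("3D superfluid", 3, 0.67, -0.01),
    ("2D Ising",      2, 1.00,  0.00),
]
for name, d, nu, alpha in cases:
    print(name, d * nu, 2 - alpha)   # the two columns should agree
```

Unlike Rushbrooke and Widom, the dimensionality d enters explicitly, which is why this relation singles out four dimensions for the saddle point exponents (where 2 - alpha = 2 equals d/2).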
So the fluctuating part that we get around the saddle point does satisfy this. But on top of that, there's another part that is due to the saddle point value itself that violates this hyperscaling relation. Yes? AUDIENCE: Empirically, how well can we probe the dependence on dimensionality that we're finding in these expressions? PROFESSOR: Experimentally, we can do d equals 2, d equals 3. In computer simulations we can also do d equals 2, d equals 3. Very soon, we will do analytical expansions where we will be in 3.99 dimensions. So we will be coming down perturbatively from around 4. So mathematically, we can play tricks such as that. But certainly empirically, in the sense of experiments, we are at a disadvantage in that respect. OK? So we are making progress. We have made our way across this table. We also have an identity that involves mu. But so far I haven't said anything about eta. I can say something about eta reasonably simply, and then you can try to build something profound based on that. So let's look exactly at Tc, at the critical point. So let's say you are sitting at t and h equals 0. You have to prepare your system at that point. There's nothing physically that says you can't. At that point, you can look at correlations. And the exponent eta, for example, is a characteristic of those correlations. And one of the things that we have is that m of x, m of 0-- the connected part-- well, actually, at the critical point we don't even have to put the connected part, because the average of m is going to be 0. But this is a quantity that behaves as 1 over some power of the separation of the two points, x minus y. When we did the case of the fluctuations at the critical point within the saddle point method, we found that the behavior was like the Coulomb law: it was falling off as 1 over x to the d minus 2. But we said that experiment indicated that there is a small correction to this that we indicate with exponent eta.
So that was how the exponent eta was defined. So can we have an identity that involves the exponent eta? We have actually seen how to do this already, because we know that in general, the susceptibilities are related to integrals of the correlation functions. Now if I put this power law over here, you can see that the answer is like integrating x d x all the way out to infinity. So it will be divergent, and that's no problem: at the critical point we know that the susceptibility is divergent. But you say, OK, if I'm away from the critical point, then I will use this formula, but only up to the correlation length. And I say that beyond the correlation length, the correlations will decay exponentially. That's too rapid a falloff, and essentially the only part that's contributing is what was happening at the critical point. Once I do that, I have to integrate d d x over x to the d minus 2 plus eta, up to the correlation length. The answer will be proportional to the correlation length to the power of 2 minus eta. And this will be proportional, since xi goes as t to the minus mu, to t to the minus mu times 2 minus eta. But we know that the susceptibilities diverge as t to the minus gamma. So we have established an exponent identity that tells us that gamma is 2 minus eta times mu. And this is known as the Fisher identity, after Michael Fisher. Again, you can see that in all of the cases in three dimensions that we are dealing with, the exponent eta is roughly 0-- it's 0.04-- and all of our gammas are roughly twice what our mus are in that table. So that's how we can check that part of the table. The one case that I have on that table where eta is not 0 is when I'm looking at d equals 2, where eta is 1/4. So I take 2 minus 1/4, multiply it by the mu that is 1 in two dimensions, and the answer is the 7/4, which we have for the exponent gamma over there. So we now have an identity that is applicable to the last exponent. So all of this works.
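The 2D Ising check of the Fisher identity is exact, since all the exponents quoted for that model are known exactly. The identity is gamma = (2 - eta) * nu (the lecture's "mu").

```python
# Check of the Fisher identity gamma = (2 - eta) * nu, using the exact
# 2D Ising exponents quoted in the lecture: eta = 1/4, nu = 1, gamma = 7/4.
eta, nu, gamma = 0.25, 1.0, 7 / 4
print((2 - eta) * nu, gamma)   # both sides equal 1.75
```

For the three-dimensional systems in the table, eta is only about 0.04, which is why gamma is roughly, but not exactly, twice nu there.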
Let's now take the conceptual leap that then allows us to do what we will do later on to get the exponents. Basically, you can see that what we have imposed here conceptually is the following. When I'm away from the critical point, I look at the correlations of this important statistical field. And I find that they fall off with separation according to some power. And the reason is that at the critical point, the correlation length has gone to infinity. That's not a length scale that you have to play with. You could divide x minus y by xi, which is what we do away from the critical point, but xi has gone to infinity. The other length scales that we are worried about are things that go into the microscopics, but we are assuming that the microscopics is irrelevant. It has been washed out. So if we don't have a large length scale, if we don't have a short length scale, some function of distance-- how can it decay? The only way it can decay is [INAUDIBLE]. So this statement is that when we are at a critical point, I look at some correlation. And this was the magnetization correlation, but I can look at the correlation of anything else as a function of separation. And this will only fall off as some power of separation. Another way of writing it is that if I were to multiply this by some scale factor-- so rather than looking at things that are some distance apart, at twice that distance apart or a hundred times that distance apart-- I will reproduce the correlation that I have, up to some overall scale factor. So the scale factor here we can read off: it has to be lambda to the minus d minus 2 plus eta. But essentially, this is a statement again about homogeneity of correlation functions when you are at the critical point. So there is a symmetry here. It says you take your statistical correlations and you look at them at a larger scale or at a shorter scale. And up to some overall scale factor, you reproduce what you had before. So this is something to do with invariance under change of scale.
This scale invariance is a property that was popular a while ago as being associated with the kind of geometrical objects that you would call fractals. So the statement is that if I go across my system and there is some pattern of magnetization fluctuations-- let's say I look at it, I'm going along this direction x, and I plot, for some particular configuration that is dominant and is contributing to my free energy, the magnetization-- then it has a shape that has this characteristic self-similarity, kind of maybe looking like a mountain landscape. And the statement is that if I were to take a part of that landscape and then blow it up, I will generate a pattern that is of course not the same as the first one. It is not exactly scale invariant. But it has the same kind of statistics as the one that I had originally, after I multiply this axis by some factor lambda. Yes? AUDIENCE: At what length scales are those self-similarity properties evident, and how do they compare to the length scale over which you're doing your coarse graining for this field? PROFESSOR: OK, so basically we expect this to be applicable presumably at length scales that are less than the size of your system, because once I get to the size of the system I can't blow it up further or whatever. It has to certainly be larger than whatever the coarse-graining length is, or the length scale at which I have confidence that I have washed out the microscopic details. Now that depends on the system in question, so I can't really give you an answer for that. The answer will depend on the system. But the point is that I'm looking in the vicinity of a point where mathematically I'm assured that there's a correlation length that goes to infinity. So maybe there is some system number 1 that averages out very easily, and after a distance of 10 I can start applying this.
But maybe there's some other system where the microscopic degrees of freedom are very problematic and I have to go further and further out before they average out. But in principle, since my xi has gone to infinity, I can just pick a bigger and bigger piece of my system until that has happened. So I can't tell you what the short distance length scale is, in the same sense that when [INAUDIBLE] says that the coast of Britain is fractal, well, I can't tell you whether the short distance is the size of a sand particle, or is it the size of, I don't know, a tree or something like that. I don't know. So we started thinking about our original problem. And we constructed this Landau-Ginzburg [INAUDIBLE] that we worked with on the basis of symmetries, such as invariance under rotation, et cetera. Somehow we've discovered that the point that we are interested in has an additional symmetry that maybe we didn't anticipate, which is this self-similarity and scale invariance. So you say, OK, that's the solution to the problem. Let's go back to our construction of the Landau-Ginzburg theory and add to the list of symmetries that have to be obeyed this additional self-similarity under scaling. And that will put us at t equals 0, h equals 0. And, for example, we should be able to calculate this correlation. Let me expand a little bit on that, because we will need one other correlation. Because we've said that essentially all of the properties of the system I can get from two independent exponents. So suppose I constructed this scale invariant theory and I calculated this. That would be one exponent. I need another one. Well, we had here a statement about alpha. We made the statement that the heat capacity diverges. Now, in the same sense that the susceptibility is a response-- it came from two derivatives of the free energy with respect to the field; the derivative of magnetization with respect to field, magnetization itself being one derivative [INAUDIBLE].
The heat capacity is also two derivatives of the free energy, with respect to some other variable. So in the same sense that there is a relationship between the susceptibility and an integrated correlation function, there is a relationship that says that the heat capacity is related to an integrated correlation function. So C as a function of, say, t and h-- let's say the singular part-- is going to be related to an integral of something. And again, we've already seen this. Essentially, you take one derivative of the free energy, let's say with respect to beta or temperature, and you get the energy. And you take another derivative of the energy and you will get the heat capacity. And that derivative, if we write it in terms of derivatives of the partition function, becomes converted to the variance in energy. So in the same way that the susceptibility was the variance of the net magnetization, the heat capacity is related to the variance of the net energy of the system at a given temperature. The net energy of the system we can write as an integral of an energy density, just as we wrote the magnetization as an integral of magnetization density. And then the heat capacity will be related to the correlation functions of the energy density. Now once more, you say that I'm at the critical point. At the critical point there is no length scale. So any correlation function, not only that of the magnetization, should fall off as some power of separation. And you can call that exponent whatever you like; there is no standard definition for it in the literature. Let me write it in the same way as for the magnetization, as d minus 2 plus eta prime. So then when I go and say, let's terminate it at the correlation length, the answer is going to be proportional to xi to the 2 minus eta prime, which would be t to the minus mu times 2 minus eta prime. So then I would have alpha being mu times 2 minus eta prime.
So all I need to do in principle is to construct a theory which, in addition to rotational invariance or whatever is appropriate to the system in question, has this statistical scale invariance. Within that theory, calculate the correlation functions of two quantities, such as magnetization and energy. Extract two exponents. Once we have the two exponents, then we know that by our manipulations we will be able to calculate all the exponents. So why doesn't this solve the problem? The answer is that whereas I can immediately write for you a term such as m squared that is rotationally invariant, I don't know how to write down a theory that is scale invariant. The one case where people have succeeded in doing that is actually two dimensions. So in two dimensions, one can show that this kind of scale invariance is related to conformal invariance, and that one can explicitly write down conformally invariant theories, extract exponents, et cetera, out of those. But, say, in three dimensions, we don't know how to do that. So we will still, with that concept in the back of our mind, approach it slightly differently, by looking at the effects of the scale transformation on the system. And that's the beginning of this concept of renormalization.
MIT_8334_Statistical_Mechanics_II_Spring_2014 | 26_Continuous_Spins_at_Low_Temperatures_Part_6.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu MEHRAN KARDAR: OK, let's start. So in this class, we focused mostly on having some slab of material and having some configuration of some kind of a field inside. And we said that, basically, we are going to be interested close to, let's say, a phase transition, in some quantity that changes at the phase transition. We are interested in figuring out the singularities associated with that. And we can coarse grain. Once we have coarse grained, we have the field m, potentially a vector, that is characterized throughout this material. So it's a field that's a function of x. And by integrating out over a lot of degrees of freedom, we can focus on the probability of finding different configurations of this field. And this probability we constructed on the basis of a number of simple assumptions, such as locality, which implied that we would write this probability as a product of contributions of different parts, which in the exponent becomes an integral. And then we would put within this all kinds of things that are consistent with the symmetries of the problem. So if, for example, this is a field that is invariant under rotations, we would be having terms such as m squared, m to the fourth, and so forth. But the interesting thing was that, of course, there is some interaction with the neighborhoods. Those neighborhood interactions we can, in the continuum limit, implement by putting terms that are proportional to gradient of m and so forth. So there are a lot of things that you could put consistent with symmetry and presumably be as general as possible.
You could have this, and the coefficients of this expansion would be the phenomenological parameters characterizing this probability, functions of all kinds of microscopic degrees of freedom, as well as macroscopic constraints such as temperature, pressure, et cetera. Now the question that we have is: if I start with a system, let's say a configuration where everybody's pointing up, or some other configuration that is not the equilibrium configuration, how does the probability evolve to become something like this? Now we are interested therefore in m that is a function of position and time. And since I want to use t for time, this coefficient of m squared that we were previously calling t, I will indicate by r. There are various types of dynamics that you can look at for this problem. I will look at the class that is dissipative. And its inspiration is the Brownian motion that we discussed last time, where we saw that when you put a particle in a fluid, to a very good approximation when the fluid is viscous, it is the velocity that is proportional to the force-- you can ignore inertial effects such as mass times acceleration-- and you write an equation that is first order in time, velocity being the first derivative of the position, which is the variable that is of interest to you. So here the variable that is of interest to us is this magnetization that is changing as a function of time. And the equation that we write down is the derivative of this field with respect to time. And again, for the Brownian motion, the velocity was proportional to the force. The constant of proportionality was some kind of a mobility that you would have in the fluid. So continuing with that inspiration, let's put some kind of a mobility here. I should really put a vector field here, but just for convenience, let's just focus on one component and see what happens. Now what is the force? Presumably, each location individually feels some kind of a force.
And typically when we had the Brownian particle, the force we were getting from the derivative of the potential with respect to the position, the variable that we are interested in. So the analog of our potential is the energy that we have over here, and we want this effective potential, if you like, to govern the equilibrium behavior. And again, recall that for the case of the Brownian particle, eventually the probability was related to the potential by e to the minus beta v. This is the analog of beta v, now for the entire field. So the analog of the force is a derivative of this beta H, so this would be our beta H, with respect to the variable that I'm trying to change. And since I don't have just one variable, but a field, the analog of the derivative becomes this functional derivative. And in the same sense that the Brownian motion is trying to pull you towards the minimum of the potential, this is an equation that, if I stop over here, tries to push the field towards the minimum of this beta H. Now the reason that the Brownian particle didn't go and stick at one position, which was the minimum, but fluctuated, was, of course, that we added the random force. So there is some kind of an analog of a random force that we add, and we put it over here. Now for the case of the Brownian particle, the assumption was that if I evaluate eta at some time t1, and eta at another time t2, this was related to 2D times a delta function of t1 minus t2. Now of course here, at each location, I have a noise term. So this noise carries an index, which indicates the position, which is, of course, a vector in d dimensions, in principle.
And there is no reason to imagine that the noise here, which comes from all kinds of microscopic degrees of freedom that we have integrated out, should have correlation with the noise at some other point. So the simplest assumption is to also put a delta function in the positions, OK? So if I take this beta H that I have over there and take the functional derivative, what do I get? I will get that dm of x, t by dt is--and I should not forget the minus sign here, since the force is minus the derivative of the potential, and this is basically going down the gradient. So I have to take a derivative of this. First of all, I can take a derivative with respect to m itself. I will get r m; let's put the minus sign out front. And then I will get the derivative of m to the fourth, which is 4 u m cubed, all kinds of terms like this. Then the terms that come from the gradients: taking the derivative with respect to gradient of m will give me K times the gradient, but then in the functional derivative, I have to do another integration by parts, converting this to minus K Laplacian of m. And then I would have L times the fourth derivative from the next term, and so forth. And on top of everything else, I will have the noise whose statistics I have indicated above. So this equation came from the Landau-Ginzburg model, and it is called the time-dependent Landau-Ginzburg equation. It is the analog of the Brownian type of equation, now for an entire field. Now we're going to have a lot of difficulty, in the short amount of time that is left to us, to deal with this nonlinear equation, so we are going to do the same thing that we did in order to capture the properties of the Landau-Ginzburg model qualitatively, which is to ignore nonlinearities, giving a kind of Gaussian version of the model. So what we did was we linearized. AUDIENCE: Question. MEHRAN KARDAR: Yes?
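For reference, the equation assembled in this passage can be written out compactly. This is a reconstruction from the spoken derivation, using the symbols defined in the lecture (m, mu, r, u, K, L, D, eta):

```latex
\frac{\partial m(\mathbf{x},t)}{\partial t}
  = -\mu\,\frac{\delta \beta\mathcal{H}}{\delta m(\mathbf{x},t)} + \eta(\mathbf{x},t)
  = -\mu\left(r\,m + 4u\,m^{3} - K\,\nabla^{2}m + L\,\nabla^{4}m + \cdots\right)
    + \eta(\mathbf{x},t),
\qquad
\langle \eta(\mathbf{x}_1,t_1)\,\eta(\mathbf{x}_2,t_2)\rangle
  = 2D\,\delta^{d}(\mathbf{x}_1-\mathbf{x}_2)\,\delta(t_1-t_2).
```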
AUDIENCE: Why does the sign in front of the K term change relative to the others? MEHRAN KARDAR: OK, so when you have a functional of a field, its gradient, et cetera, you can show that for the first term the functional derivative is the ordinary type of derivative. Then, if you consider the variation m plus delta m, the gradient term carries a gradient of delta m, and to pull the delta m outside you need to do an integration by parts, which changes the sign. For the term with the next higher derivatives you need two integrations by parts, so the sign changes back, and so forth; it alternates. So by explicitly calculating the difference of the two functionals evaluated at m and at m plus delta m, and pulling out everything that is proportional to delta m, you can prove these expressions. So we linearize this equation, which I've already done: I cross out the m cubed term and any other nonlinear terms, so I only keep the linear terms. And then I do a Fourier transform. So basically I switch from the position representation to the Fourier transform, which I call m tilde of q; x is replaced with q. And then the linearized equation for the field separates out into independent equations for the components that are characterized by q. So the Fourier transform of the left-hand side is just the time derivative. The Fourier transform of the right-hand side will give me minus mu times r plus K q squared, the Laplacian giving a q squared, plus L q to the fourth from the next term, et cetera. And I've only kept the linear term, so this multiplies m tilde of q and t. And then I have the Fourier transform of the noise eta of x and t; call it eta tilde of q and t. So you can see that each mode satisfies a separate linear equation. So this equation is actually very easy to solve for m tilde of q and t.
If I didn't have the noise, I would start with some value at t equals 0, and that value would decay exponentially with a characteristic time that I will call tau of q. And 1 over tau of q is simply this mu times r plus K q squared and so forth. Now once you have noise, essentially each kick of noise acts like an initial condition. And so the full answer is an integral over all of these noises from 0 to the time t of interest: dt prime, the noise that occurs at a time t prime, and the noise that occurs at time t prime relaxes with this e to the minus t minus t prime over tau of q. So that's the solution to this linear noisy equation, basically superposing the responses to the noise. So one of the things that we now see is that the different Fourier components of the field are each independently relaxing, and each one of them has a characteristic relaxation time. As I go towards smaller and smaller values of q, this rate becomes smaller, and the relaxation time becomes larger. So essentially short-wavelength modes that correspond to large q relax first. Longer wavelength modes will relax later on. And you can see that the largest relaxation time, tau max, corresponding to q equals 0, is simply 1 over mu r. So I can plot this tau max as a function of r, and again this theory, the Gaussian theory, as we saw, only makes sense as long as r is positive. So I have to look only on the positive axis. And I find that the relaxation time for the entire system, for the longest wavelength, actually diverges as r goes to 0. And recall that r, in our perspective, is really something that is proportional to T minus Tc. So basically we find that as we are approaching the critical point, the time it takes for the entirety of the system, or the longest wavelength modes, to relax diverges as 1 over T minus Tc. This divergence is the so-called critical slowing down. Yes?
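A small numerical sketch of this mode-by-mode relaxation may help; the parameter values below are illustrative, not from the lecture:

```python
import numpy as np

def relaxation_time(q, mu=1.0, r=0.1, K=1.0):
    """tau(q) = 1 / [mu (r + K q^2)] for the linearized
    time-dependent Landau-Ginzburg equation."""
    return 1.0 / (mu * (r + K * q ** 2))

def decay(m0, q, t, mu=1.0, r=0.1, K=1.0):
    """Noise-free solution: each Fourier mode decays exponentially
    from its initial value m0 with its own characteristic time."""
    return m0 * np.exp(-t / relaxation_time(q, mu, r, K))
```

Short-wavelength (large q) modes relax fastest, and tau max = tau(0) = 1/(mu r) diverges as r, that is T minus Tc, goes to zero: critical slowing down.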
AUDIENCE: In principle, why couldn't you have that r becomes negative, if you restrict your range of q to be outside of some value and not go arbitrarily close to the origin? MEHRAN KARDAR: You could, but what's the physics of that? AUDIENCE: I'm wondering if there is, or is that not needed. MEHRAN KARDAR: No. OK, so the physics of that could be that you have a system that has some finite size L. Then the smallest q that you could have would be of order 1 over L. So in principle, for that you can go slightly negative. You still cannot go too negative, because ultimately this will overcome that. But again, we are interested in singularities, and those we know arise in the limit of infinite size. So we have a time that we see diverging as r goes to 0. Of course, we identified before a correlation length from balancing these two terms, and the correlation length is square root of K over r, which is proportional to T minus Tc to the minus one half, and diverges with a square-root singularity. So we can see that this tau max is actually equal to xi squared over mu K. And our tau of q behaves as follows: if q is large, such that q xi is larger than 1, the characteristic time is going to be 1 over mu K times the inverse of q squared, and the inverse of q is something like a wavelength. Whereas, ultimately, this saturates, for q xi less than 1, at long wavelengths, to xi squared over mu K. So basically you see that at very short range, at length scales that are much less than the correlation length of the system, the characteristic time will depend on the length scale that you are looking at, squared. Now you have seen times scaling as length squared from diffusion, so essentially this is some kind of a manifestation of diffusion. So suppose there's some equilibrium system, and you perturb it, let's say at short distances.
Let's say we do some perturbation to it at some point, and that perturbation will start to expand diffusively until it reaches the size of the correlation length, at which point it stops, because beyond the correlation length the individual blocks don't know about each other, so the influence does not spread further. So quite generally, what you find: we solved the linearized version of the Landau-Ginzburg model, but we know that, say, the critical behavior for the divergence of the correlation length that is predicted here is not correct in three dimensions; things get modified. So these kinds of exponents that come from diffusion also get modified. And quite generally, you find that the relaxation time of a mode of wavenumber q is going to behave like the wavelength, which is 1 over q, raised not to the power 2 but to some exponent z. And then there is some function of the product of the wavelength you are looking at and the correlation length, so that you will cross over from one behavior to another as you look at length scales that are smaller than the correlation length or larger than the correlation length. And to get what this exponent z is, you have to study the nonlinear model, in the same sense that, in order to get the correction to the exponent nu, we had to do the epsilon expansion. You have to do something similar, and you find that z acquires a correction that does not actually start at order epsilon but at order epsilon squared. But this modification of the qualitative behavior, which we can ascribe to the diffusion of independent modes, exists quite generally, and universal exponents different from 2 will emerge from that. Now it turns out that this is not the end of the story, because we have seen that the same probability distribution can describe a lot of different systems. Let's focus on the case of n equals 1.
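The relaxation-time crossover described above can be summarized by a dynamic scaling form; this reconstruction uses the lecture's symbols, with z = 2 at the Gaussian level:

```latex
\tau(q) \simeq \frac{1}{q^{z}}\, g(q\xi), \qquad
\tau(q) \approx
\begin{cases}
\dfrac{1}{\mu K q^{2}}, & q\xi \gg 1,\\[2mm]
\dfrac{\xi^{2}}{\mu K}, & q\xi \ll 1,
\end{cases}
\qquad \xi = \sqrt{K/r}.
```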
So then this Landau-Ginzburg weight that I described for you can describe, let's say, the Ising model, where the magnetization lies along a particular direction. It can also describe liquid-gas phenomena, where the order parameter is the difference in density, if you like, between the liquid and the gas. Yet another example that it describes is the mixing of an alloy. So let's, for example, imagine brass, with a composition x that goes between 0 and 1. On one end, let's say you have entirely copper, and on the other end you have entirely zinc. And so this is how you make brass as an alloy. And my other axis is the temperature. What you find is that there is some kind of phase diagram such that you get a nice mixture of copper and zinc only if you are at high temperatures, whereas if you are at low temperature, you basically will separate into chunks that are rich in copper and chunks that are rich in zinc. And you'll have a critical demixing point, which has exactly the same properties as the Ising model. For example, this curve will be characterized with an exponent beta, which would be the beta of the Ising model. And in particular, if I were to take someplace in the vicinity of this and try to write down a probability distribution, that probability distribution would be exactly what I have over there, where m is, let's say, the difference between the amounts of the two types of atoms compared to each other. So this is related to 2x minus 1 or something like that. So as you go across your piece of material close to here, there will be compositional variations that are described by that. So the question is, I know exactly what the probability distribution is for this system to be in equilibrium, given this choice of m, again with some set of parameters r, u, et cetera. The question is: is the dynamics again described by the same equation? And the answer is no.
The same probability distribution can be obtained with very different dynamics. In particular, what is happening in this system is that, if I integrate this quantity m across the system, I will get the total amount of one species minus the other, which is what is given to you, and it does not change as a function of time. d by dt of this quantity is 0. It cannot change. OK. Whereas with the equation that I have written over here, in principle, locally, by adding the noise or by bringing things from the neighborhood, I can change the integral of m. Here I cannot do that. So the process of relaxation that goes on in the alloy cannot be described by the time-dependent Landau-Ginzburg equation, because you have this conservation law. OK. So what should we do? Well, when things are conserved: say, as gas particles move in a fluid, if I'm interested in the number of particles in some cube in this room, then the change in the number of particles in that cube is related to the gradient of the current that goes into that region. So the appropriate way of writing an equation that describes, let's say, the magnetization changing as a function of time, given that you have a conservation law, is to write it as minus the gradient of some kind of a current. So this j is some kind of a current, and these would be vectors. This is a current of the particles moving in the system. Now, in systems that are dissipative, currents are related to the gradient of some density, through the diffusion constant, et cetera. So it makes sense to imagine that this current is, more precisely, minus the gradient of something that tries to bring the system, more or less, to its equilibrium state. The equilibrium state, as we said, is determined by this beta H, and we want to push it in that direction. So we put our delta beta H by delta m over here, and we put some kind of a mu sub C over here.
Of course, I would have to add some kind of a conserved random current also, which is the analog of the non-conserved noise that I added initially. OK. Now, in the conserved version of the equation, you can see you have two more derivatives with respect to what we had before. So if we do something like this, dm by dt is mu C times the Laplacian of r m plus 4 u m cubed and so forth, and we are going to ignore this kind of nonlinear term. And then there would be higher order terms that would show up: minus K times the fourth derivative, and so forth. And then there is some kind of a conserved noise that I have to put outside. OK. So when I Fourier transform this equation, what do I get? I will get that dm tilde by dt, as a function of q and t, is minus mu C--and because of the Laplacian there's an additional factor of q squared--times r plus K q squared plus L q to the fourth, et cetera. And then I will have the Fourier transformed version of this conserved noise. OK. You can see that the difference between this equation and the previous equation is that all of the relaxation rates have an additional factor of q squared. And so the longest relaxation time will now grow like the size of the system squared, whereas previously it was saturated at the correlation length. Because you have this conservation law, you have to rearrange a lot of particles keeping their number constant, and you have a much harder time relaxing the system. All of the relaxation times, as we see, grow correspondingly and become longer. OK. So indeed, for this class, one can show that z starts with 4, and then there will be corrections that would modify that. Yes? AUDIENCE: How do we define or how do we do a realization of the conserved noise--the conserved current, the conserved noise?
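The extra factor of q squared in the conserved dynamics can be made explicit by comparing the two relaxation spectra; this is a sketch with illustrative parameters, mu_c standing for the conserved mobility:

```python
def tau_nonconserved(q, mu=1.0, r=0.1, K=1.0):
    """Non-conserved order parameter: 1/tau = mu (r + K q^2)."""
    return 1.0 / (mu * (r + K * q ** 2))

def tau_conserved(q, mu_c=1.0, r=0.1, K=1.0):
    """Conserved dynamics: the Laplacian contributes an extra q^2,
    so 1/tau = mu_c q^2 (r + K q^2); small-q relaxation is much slower."""
    return 1.0 / (mu_c * q ** 2 * (r + K * q ** 2))
```

At the smallest wavenumber of a finite system, q of order 1 over L, the conserved relaxation time grows like L squared relative to the non-conserved one.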
MEHRAN KARDAR: OK. AUDIENCE: So it has some kind of self-correlation properties, I suppose, because, if current flows out of some region, doesn't it want to flow back in? MEHRAN KARDAR: If I go back here, I have a good idea of what is happening, because all I need, in order to ensure conservation, is that dm by dt is the gradient of something. AUDIENCE: OK. MEHRAN KARDAR: So I can put whatever I want over here. AUDIENCE: So is it a scalar or an [INAUDIBLE] field? MEHRAN KARDAR: Yes. As long as it is sitting under the gradient-- AUDIENCE: OK. MEHRAN KARDAR: --it will be OK, which means this quantity here that I'm calling the conserved noise has a gradient in it. And if you wait for about five minutes, I'll describe the difference between non-conserved and conserved noise in Fourier space. It's much easier. OK. So actually, as far as what I have discussed so far, which is relaxation, I don't really need the noise, because without the noise I simply have a linear equation that relaxes the variable to 0, and I can immediately read off the relaxation times. What I need the noise for is so that, ultimately, I don't just go to the minimum of the potential, but I reach this probability distribution. So let's see what we have to do in order to achieve that. For simplicity, let's take this equation, although I could take the corresponding one for the conserved case. Because of the presence of this noise, if I run many versions of the system, each will have a different realization of the noise, and the quantity m tilde would be different each time. It would satisfy some kind of a probability distribution. So what I want to do is to calculate averages, such as the average of m tilde of q1 at time t with m tilde of q2 at time t.
And you can see already from this equation that, if I forget the part that comes from the noise, whatever initial condition I have will eventually decay to 0. So the thing that agitates this, and gives it some kind of randomness, really comes from the noise. So let's imagine that we have looked at times that are sufficiently long that the influence of the initial condition has died down. I don't want to write the other term; I could do it, but it's kind of boring to include it. So let's forget that and focus on the integral from 0 to t. Now, if I multiply two of these quantities, I will have two integrals, over t1 prime and t2 prime. Each one of them will decay with the corresponding tau of q--in one case tau of q1, in the other case tau of q2--coming from these factors. And the noise at q1 at time t1 prime, and the noise at q2 at time t2 prime. OK. Now if I average over the noise, then I have to do an average over here. Now, one thing that I forgot to mention right at the beginning is that, of course, the average of the noise we are going to set to 0; it's the variance that is important. Right? So if I do that, clearly, the average of one of these in Fourier space would be 0 also, because the Fourier component is related to the real-space noise just by an integral. So if the average in real space is 0, the average of this is 0. So it turns out that, when you look at the average of two of them--and it's a very simple exercise to just rewrite these things in terms of the averages of eta of x and t--you find that things that are uncorrelated in real space are also uncorrelated in Fourier space. And so the variance of this quantity is 2D times a delta function of t1 prime minus t2 prime. And then you have the analog of the delta function in Fourier space, which always carries a factor of 2 pi to the d, and is a delta function of the sum of the q's, as we've seen many times before. OK. So because of this delta function, the double integral becomes one integral.
So I have the integral from 0 to t. The two t prime integrals I can write as just one t prime integral, and the two exponential factors merge into one, e to the minus t minus t prime over tau of q, except that it gets multiplied by a factor of 2, since I have two of them. And outside of the integral, I will have this factor of 2D, and then 2 pi to the d times the delta function of q1 plus q2. OK. Now, we are really interested--and I already kind of hinted at that--in the limit where time becomes very large. In that limit, essentially, I need to calculate the limit of this integral as time becomes very large, and it is just the integral from 0 to infinity. And the integral is going to give me 2 over tau of q; the contribution from the lower end is exponentially small as t goes to infinity. OK. Yes? AUDIENCE: Are you assuming that tau of q is even in q, to be able to combine the two? MEHRAN KARDAR: Yes. AUDIENCE: OK. MEHRAN KARDAR: I'm thinking of the tau of q's that we've calculated over here. AUDIENCE: OK. MEHRAN KARDAR: OK. Yes? AUDIENCE: Is the D here the same as in the noise equation? MEHRAN KARDAR: At this stage, I am focusing on this expression over here. But I will come to that expression also. So for the time being, this D is just the same constant as we have over here. OK? So you can see that the final answer is going to be D over tau of q, times 2 pi to the d delta function of q1 plus q2. And if I use the value of tau of q that I have, tau of q being mu times r plus K q squared, this becomes D over mu times r plus K q squared and so forth, times 2 pi to the d delta function of q1 plus q2.
So essentially, if I take this linearized time-dependent Landau-Ginzburg equation, run it for a very long time, and look at the correlations of the field, I see that the correlations of the field, in the limit of long times, satisfy this expression. Now, what do I know if I look at the top line that I have for the probability distribution? I can express that probability distribution in Fourier modes. In the linearized version, I immediately get that the probability of m tilde of q is proportional to a product over different q's of e to the minus r plus K q squared, et cetera, times m tilde of q squared over 2. All right. So when I look at the equilibrium linearized Landau-Ginzburg weight, I can see that, if I calculate the average of m of q1 times m of q2--and this is an equilibrium average--what I get is 2 pi to the d delta function of q1 plus q2, because the different q's are decoupled from each other. And for a particular value of q, what I get is 1 over r plus K q squared and so forth. Yes? AUDIENCE: So isn't it tau of q over 2, rather than 2 over tau? MEHRAN KARDAR: Yes. Over 2. Yeah. OK, I had 2D times tau of q over 2, so it's D tau; and writing out the inverse of tau, everything here is correct. So if you make an even number of errors, the answer comes out right. OK. But you can now compare this expression that comes from equilibrium and this expression that comes from the long-time limit of this noisy equation. So we want to choose our noise so that the stochastic dynamics gives the same result as equilibrium, just like we did for the case of the Brownian particle, where we had some kind of an Einstein relation between the strength of the noise and the mobility. And we see that here all I need to do is to ensure that D over mu should be equal to 1. OK.
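As a check of this matching condition, one can integrate a single Fourier mode of the linearized Langevin equation numerically and compare the steady-state variance with D over mu times (r + K q squared). This is a toy sketch, not part of the lecture: one real mode, Euler-Maruyama stepping, and illustrative parameter values:

```python
import numpy as np

def steady_state_variance(mu=1.0, r=1.0, K=1.0, q=0.5, D=1.0,
                          dt=1e-2, n_steps=200_000, seed=0):
    """Euler-Maruyama integration of dm/dt = -mu (r + K q^2) m + eta,
    with <eta(t) eta(t')> = 2 D delta(t - t').  Returns the
    time-averaged variance after discarding the initial transient."""
    rng = np.random.default_rng(seed)
    rate = mu * (r + K * q ** 2)
    m, total, count = 0.0, 0.0, 0
    for step in range(n_steps):
        m += -rate * m * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
        if step > n_steps // 2:       # keep only the steady-state half
            total += m * m
            count += 1
    return total / count
```

With D equal to mu (here both 1), the measured variance should approach 1 over (r + K q squared), the equilibrium Gaussian value.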
Now, the thing is that, if I am doing this, I, in principle, can have a different noise strength for each q, and compensate by a different mobility for each q, and I would get the same answer. So in the non-conserved version of this time-dependent dynamics that we wrote down, the D was a constant and the mu was a constant. Whereas, if you want to get the same equilibrium result out of the conserved dynamics, you can see that, essentially, what we previously had as mu became something that is proportional to q squared: here, this becomes mu C q squared. So clearly, in order to get the same answer, I have to make my noise proportional to q squared also. And we can see that this kind of conserved noise that I put over here achieves that, because, as I said, this conserved noise is the gradient of something, which means that, when I go to Fourier space, it will be proportional to q, and when I take its variance, the variance will be proportional to q squared. Everything precisely cancels. But you see that this also had a physical explanation in terms of a conservation law. In principle, you can cook up all kinds of D of q and mu of q. As long as this equality is satisfied, you will have, for these linear stochastic equations, the guarantee that you would always get the same equilibrium result, if you wait for the dynamics to settle down after long times. Yes? AUDIENCE: I wonder how general is this result for stochastic [INAUDIBLE]? MEHRAN KARDAR: OK. AUDIENCE: But what I-- MEHRAN KARDAR: So what I showed you was for the linearized version, and the only thing that I calculated was the variance. And I showed that the variances were the same. And if I have a Gaussian probability distribution, the variance completely characterizes the distribution. So this is safe. But we are truly interested in the more general non-Gaussian probability distribution.
So the question really is: if I keep the full nonlinearity in this story, would I be able to show that the probability distribution, which is characterized by all kinds of moments, eventually has the same behavior as that? AUDIENCE: Mm-hmm. MEHRAN KARDAR: And the answer is, in fact, yes. There's a procedure that relies on converting this stochastic equation to an equation that governs the evolution of the full probability as a function of time. So basically, I can start with an initial probability and see how this probability evolves as a function of time. And this is sometimes called a master equation, sometimes called a Fokker-Planck equation. We cover this, in fact, next spring in the statistical physics in biology class, where we spend some time talking about these things. So you can come back for the third version of this class. And one can ensure that, with the appropriate choice of the noise, the asymptotic solution for this probability distribution is whatever Landau-Ginzburg or other probability distribution that you need. AUDIENCE: So is this true assuming the Landau-Ginzburg form for the weight? MEHRAN KARDAR: Yes. AUDIENCE: OK. Maybe this is not a very well-stated question, but is there kind of an even more general level? MEHRAN KARDAR: I'll come to that, sure. But currently, the way that I set up the problem was that we know some complicated equilibrium form of the probability. And for these kinds of stochastic linear or nonlinear evolution equations--generally called Langevin equations--one can show that, with the appropriate choice of the noise, we'll be able to asymptotically reproduce the probability distribution that we knew. But now, the question is, what if you don't know the probability distribution? And I'll say a few words about that. AUDIENCE: OK. Thank you. MEHRAN KARDAR: Anything else? OK.
So the lesson of this part is that the field of dynamic critical phenomena is quite rich, much richer than the corresponding equilibrium critical phenomena, because the same equilibrium state can be obtained with various different types of dynamics. And I explained to you just one conservation law, but there could be some combination of conservation of energy, conservation of something else. So there is a whole listing of different universality classes that people have catalogued for the dynamics. But all of this was assuming that you know what the ultimate answer is, because, in all cases, the equations that we're writing depend on some kind of gradient descent, conserved or non-conserved, on something that corresponds to the log of the probability distribution that we eventually want to reach. And maybe you don't know that. So let me give you a particular example, in the context of, let's say, surface and interface fluctuations, starting from things that you know and then building to something that maybe you don't. Let's start with the case of a soap bubble. So we take some kind of a circle or whatever, and we put a soap film on top of it. And in this case, the energy cost of a deformation comes from surface tension: the cost of the deformation is the change in area times some sigma. So I neglect the contribution that comes from the flat surface, and see what happens if I make a deformation. If I make a deformation, I have changed the area of the film, so there is a cost that is proportional to the surface tension times the change in area. The change in area locally involves the square root of 1 plus the gradient of a height profile squared. So what I can do is define at each point on the surface how much it has changed its height from being perfectly flat. So h equals 0 is flat. The change in area is the integral dx dy of square root of 1 plus gradient of h squared, minus 1, where the minus 1 subtracts the flat reference. And so then you expand that.
The first term is going to be sigma over 2 times the integral of gradient of h squared. So this is the analog of what we had over there, with only the first term. So you would say that the equation you would write down for this would be dh by dt equals some constant mu times minus the variation of this, which will give me something like mu sigma times the Laplacian of h. And because of the particles from the air constantly bombarding the surface, there will be some noise that depends on where you are on the surface, and on time. And this is the non-conserved version. And you can from this very quickly get that the expectation value of h tilde of q squared is going to be proportional to something like D over mu sigma q squared, because of this q squared. And if you ask how much fluctuation you have in real space--the typical scale of the fluctuations in real space--it will come from integrating 1 over q squared. And it's going to be our usual result with a logarithmic dependence: something that ultimately grows logarithmically with the size of the system. The constant of proportionality will be proportional to kT over sigma, so you have to choose your D and mu to correspond to this. So basically, a soap film is an example of the kinds of Goldstone mode-like things that we have seen. It's a 2-dimensional entity, and it will have logarithmic fluctuations--not very big, but ultimately, at large enough distances, it will have fluctuations. So that was non-conserved. I can imagine that, rather than this, I have the case of the surface of a pool. So here I have some depth of water, and then there's the surface of the pool of water. And the difference between this case and the previous case--both of them can be described by a height function--is that if I ignore evaporation and condensation, the total mass of water is conserved. So I would need to have d by dt of the integral d squared x of h of x and t equal to 0. So this would go into the conserved variety.
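The logarithmic growth of the real-space fluctuations can be made concrete with the continuum estimate below; the prefactor conventions and the short-distance cutoff a are assumptions of this sketch, not from the lecture:

```python
import numpy as np

def height_variance(L, a=1.0, kT_over_sigma=1.0):
    """Real-space height fluctuations of a tension-dominated 2D film:
    <h^2> ~ (kT/sigma) * integral d^2q / (2 pi)^2 of 1/q^2, taken
    between q_min ~ 1/L and q_max ~ 1/a, which gives
    (kT / 2 pi sigma) ln(L / a)."""
    return kT_over_sigma / (2 * np.pi) * np.log(L / a)
```

Doubling the logarithm (going from L = 10a to L = 100a) doubles the variance: fluctuations grow without bound, but only logarithmically.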
Meanwhile, if I create a ripple on the surface of this, compared to the surface of that, the relaxation time through this dissipative dynamics would be much longer in this case as opposed to that case. Ultimately, if I wait a sufficiently long time, both of them would have exactly the same fluctuations. That is, they would go logarithmically with the length scale over which [INAUDIBLE]. OK, so now let's look at another system that fluctuates, where I don't know what the final answer is. That was the question, maybe, that you asked. The example that I will give is the following-- suppose that you have a surface, and then you have a rain of sticky material that falls down on top of it. So this material will come down. You'll have something like this. And then as time goes on, there will be more material that will come, more material that will come, more material that will come. So there, because the particles are raining down randomly at different points, there will be a stochastic process that is going on. So you can try to characterize the system in terms of a height that changes as a function of t and as a function of position. And there could be all kinds of microscopic things going on, like maybe these are particles that are representing some kind of a deposition process. And then they come, they stick in a particular way. Maybe they can slide on the surface. We can imagine all kinds of microscopic degrees of freedom and things that we can put in. But you say, well, can I change my perspective and try to describe the system the same way that we did in the case of coarse graining, going from the microscopic details to the phenomenological Landau-Ginzburg description? And so you say, OK, there is a height that is growing. And what I will write down is an equation that is very similar to the equations that I had written before.
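The point that conserved and non-conserved dynamics share the same equilibrium but relax at different rates can be seen mode by mode: each Fourier mode is an Ornstein-Uhlenbeck process dh/dt = −γh + noise, and the conserved rule multiplies both the damping γ and the noise strength D by an extra q², leaving the stationary variance D/γ unchanged. A toy check with arbitrary parameters (a sketch, not anything from the lecture; here the "conserved" mode simply has γ and D both scaled by 4):

```python
import math
import random

def ou_variance(gamma, D, dt=0.01, n_steps=400_000, seed=1):
    """Stationary variance of dh/dt = -gamma*h + eta, <eta eta> = 2D delta(t-t'),
    via Euler-Maruyama. The stationary variance should be D/gamma no matter
    how fast or slow the mode relaxes."""
    rng = random.Random(seed)
    h, acc, n = 0.0, 0.0, 0
    for step in range(n_steps):
        h += -gamma * h * dt + math.sqrt(2 * D * dt) * rng.gauss(0, 1)
        if step > n_steps // 10:      # discard the initial transient
            acc += h * h
            n += 1
    return acc / n

# Non-conserved mode: gamma ~ mu*sigma*q^2.  Conserved mode: gamma and D
# both pick up an extra q^2 (here a factor of 4), so relaxation is 4x
# faster/slower but D/gamma -- the equal-time fluctuation -- is the same.
v_fast = ou_variance(gamma=4.0, D=4.0)
v_slow = ou_variance(gamma=1.0, D=1.0)
print(v_fast, v_slow)
```

Both estimates land near D/γ = 1; only the relaxation time distinguishes the two dynamics, exactly as stated for the pool versus the soap film.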
Now I'm going to follow the same kind of reasoning that we did in the construction of this Landau-Ginzburg model, is we said that this weight is going to depend on all kinds of things that relate to this height that I don't quite know. So let's imagine that there is some kind of a function of the height itself. And potentially, just like we did over there, the gradient of the height, five derivatives of the height, et cetera. And then I will start to make an expansion of this in the same spirit that I did for the Landau-Ginzburg Model, except that when I was doing the Landau-Ginzburg Model, I was doing the expansion at the level of looking at the probability distribution and the log of the probability. Here I'm making the expansion at the level of an equation that governs the dynamics. Of course, in this particular system, that's not the end of story, because the change in height is also governed by this random addition of the particles. So there is some function that changes as a function of position and time, depending on whether, at that time, a particle was dropped down. I can always take the average of this to be 0, and put that average into the expansion of this starting from a constant. Basically, if I just have a single point and I randomly drop particles at that single point, there will be an average growth velocity, an average addition to the height, that averages over here. But there will be fluctuations that are going [INAUDIBLE]. OK but the constant is the first term in an expansion such as this. And you can start thinking, OK, what next order? Can I put something like alpha h? Potentially-- depends on your system-- but if the system is invariant whether you started from here or whether you started from there-- something like gravity, for example, is not important-- you say, OK, I cannot have any function of h if my dynamics will proceed exactly the same way if I were to translate this surface to some further up or further down. 
If I see that there's no change in future dynamics on average, then the dynamics cannot depend on this. OK, so we've gotten rid of that, and of any function of h. Can I put something that is proportional to gradient of h? Maybe in some cases I can, but here I cannot, because h is a scalar while gradient of h is a vector. I can't set something that is a scalar equal to a vector, so I can't have this. Yes? AUDIENCE: Couldn't you, in principle, make your constant term in front of the gradient also a vector and [INAUDIBLE]? MEHRAN KARDAR: You could. So there's a whole set of different systems that you can be thinking about. Right now, I want to focus on the simplest system, which is a scalar field, so that my equation can be as simple as possible, but still we will see it has sufficient complication. So you can see that if I don't have them, the next-order term that I can have would be something like a Laplacian. So this kind of diffusion equation, you can see, has to emerge as a low-order expansion of something like this. And this is why the diffusion equation appears all over the place. And then you could have terms that would be of the order of the fourth derivative, and so forth. There's nothing wrong with that. And then, if you think about it, you'll see that there is one interesting possibility that is not allowed for that system, but is allowed for this system, which is something that is a scalar. It's the gradient of h squared. Now I could not have added this term for the case of the soap bubble for the following reason-- that if I reverse the soap bubble so that h becomes minus h, the dynamics would proceed exactly as before. So the soap bubble has a symmetry of h going to minus h, and so that symmetry should be preserved in the equation. This term breaks that symmetry because the left-hand side is odd in h, whereas the right-hand side of this term would be even in h. But for the case of the growing surface-- and you've seen things that are growing.
And typically, if I give you something that has grown, like a tree trunk, for example, and if I take a picture of a part of it, and you don't see where the center is, where the end is, you can immediately tell from the shape of this object that it is growing in some particular direction. So for growth systems, that symmetry does not exist. You are allowed to have this term, and so forth. Now the interesting thing about this term is that there is no beta H that you can write down that is local-- some functional of h such that if you take a functional derivative with respect to h, it will reproduce that term. It just does not exist. So you can see that, as soon as we liberate ourselves from writing equations that come from the functional derivative of something, we can immediately write down new terms that potentially have physical significance. And actually, you can see this even for two particles. Forces derived from a potential v of x1 and x2 are some kind of derivatives of it. But if you write dynamical equations directly, there are dynamical equations-- for example, ones that rotate x1 around x2-- with terms that will never come from taking the derivative of a potential. So fine. So this is a candidate equation that is obtained in this context-- something that is grown. We say we are not interested in its coming from some underlying weight, but presumably, this system still, if I look at it at long times, will have some kind of fluctuations. Are the fluctuations of this growing surface like the fluctuations of the soap bubble-- do they have this logarithmic dependence? You have a question? AUDIENCE: So why doesn't that term-- what if I put in h times the term that we want to appear, and then I vary with respect to h? Won't a term like what we want pop out, together with other terms? MEHRAN KARDAR: Yeah, but those other terms, what do you want to do with them? AUDIENCE: Well, maybe they're not acceptable [INAUDIBLE]?
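Collecting the terms allowed by these symmetry arguments, the growth equation being assembled here can be written out explicitly (this is the equation known in the literature as the Kardar-Parisi-Zhang, or KPZ, equation; I use σ for the stiffness to match the soap-film notation above):

```latex
\frac{\partial h}{\partial t}
  \;=\; \sigma\,\nabla^{2}h \;-\; \sigma_{4}\,\nabla^{4}h \;+\;\cdots
  \;+\;\frac{\lambda}{2}\,(\nabla h)^{2} \;+\;\eta(\mathbf{x},t),
\qquad \langle \eta \rangle = 0,
```

where the point of the discussion is that no local functional $\mathcal{H}[h]$ satisfies $-\delta\mathcal{H}/\delta h = \frac{\lambda}{2}(\nabla h)^2$, so this dynamics cannot be a gradient descent on any weight.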
MEHRAN KARDAR: So you're saying, why not have a term that is h gradient of h squared? The functional derivative of that includes gradient of h squared, but when you expand it, among the terms that you would generate would also be a term that is h Laplacian of h. This term violates the condition that we had over here, and you cannot separate this term from that term. So what you describe, you already see at the level over here: it violates the translational symmetry in h. And you can play around with other functions. You come to the same conclusion. OK, so the question is, well, you added some term here. If I look at this surface that has grown at large time, does it have the same fluctuations as we had before? So a simple way to ascertain that is to do the same kind of dimensional analysis which, for the Landau-Ginzburg, was a prelude to doing renormalization. So there we did things like the epsilon expansion, et cetera. But to see that there was a critical dimension of 4, all we needed to do was to rescale x and m, and we would immediately see that u goes to u times b to the 4 minus d, for example. So we can do the same thing here. We can always move to a frame that is moving with the average velocity, so that we are focusing on the fluctuations. So we can basically ignore this term. I'm going to rescale x by a factor of b. I'm going to rescale time by a factor of b to the z. And this z is kind of indicative of what we've seen before-- that somehow in these dynamical phenomena, the scaling of time and space are related through some exponent. But there's also an exponent that characterizes how the fluctuations in h grow if I look at systems that are larger and larger.
In particular, if I had solved that equation, rather than for a soap bubble in two dimensions, for a line-- for a string that I was pulling so that I had line tension-- the one-dimensional version of that integral of 1 over q squared would be something that grows with the size of the system. So there I would have a chi of 1/2, for example, in one dimension. So this is the general setup. And then I would say that the first term, dh by dt, gets a factor of b to the chi minus z, because h scaled by a factor of b to the chi and t scaled by a factor of b to the z. The term sigma Laplacian of h gets a factor of b to the chi minus 2 from the two derivatives here-- sorry, the z and the 2 look kind of the same. This is a z. This is a 2. And for the term that is proportional to this non-linearity that I wrote down-- actually, first it is very easy, and maybe worthwhile, to show that the fourth-derivative term, sigma 4, goes with a factor of b to the chi minus 4. It is always down by a factor of two scalings in b with respect to the Laplacian-- the same reason that, when we were doing the Landau-Ginzburg model, we could terminate the series at order of gradient squared, because higher-order derivatives were irrelevant. They were scaling to 0. But this nonlinear term grows like b to the 2 chi, because it's h squared, minus 2, because there are two gradients. Now thinking about the scaling of eta takes a little bit of thought, because what we have-- we said that the average of eta is 0. The average of eta at two different locations and two different times-- since these particles are raining down, they're uncorrelated at different times. They're uncorrelated at different positions. There's some kind of variance here, but that's not important to us. If I scale t by a factor of b to the z, the delta function of b to the z t will get a factor of b to the minus z. The spatial delta function will get a factor of b to the minus d. But the noise, eta, scales with half of those exponents, because it goes like the square root of its variance.
So what I will have is b to the minus z plus d over 2 times eta, under the rescalings that I have indicated. I get rid of this term, and then divide the whole equation by b to the chi minus z. So then this becomes dh by dt is sigma b to the z minus 2-- maybe I'll write it in red-- b to the z minus 2, times the Laplacian of h. This becomes sigma 4, b to the z minus 4, times the fourth derivative of h. And then the lambda over 2 term becomes b to the chi plus z minus 2, gradient of h squared. And the final term, the noise, becomes b to the minus chi plus z minus d over 2. AUDIENCE: [INAUDIBLE]? MEHRAN KARDAR: b to the minus chi-- you're right. Minus chi plus z minus d over 2 [INAUDIBLE]. That's fine. So I can make this equation be invariant. So I want to find out what happens to this system if I find some kind of an equation, or some kind of behavior, that is scale invariant. You can see immediately that my choice for the first term has to be z equals 2. So basically, it says that as long as you're governed by something that is diffusive, so that when you go to Fourier space you have q squared, your relaxation times are going to have this diffusive character, where time is distance squared. Actually, you can see immediately from the equation that this diffusion time goes like distance squared. So this is just a statement of that. Now, it is the noise that causes the fluctuations. And if I haven't made some simple error, you will find that the coefficient of the noise term becomes scale invariant, provided that I choose chi to be z minus d over 2. And since my z was 2, I'm forced to have chi be 2 minus d over 2. And let's see if it makes sense to us. So if I have a surface, such as the case of the soap bubble in two dimensions, chi is 0. And 0 is actually the limiting case that would be a logarithm.
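The bookkeeping in this passage can be summarized compactly. Under the rescaling, and after dividing through by $b^{\chi-z}$, the equation reads:

```latex
x \to b\,x,\qquad t \to b^{z}t,\qquad h \to b^{\chi}h
\;\;\Longrightarrow\;\;
\frac{\partial h}{\partial t}
 = \sigma\,b^{\,z-2}\,\nabla^{2}h
 - \sigma_{4}\,b^{\,z-4}\,\nabla^{4}h
 + \frac{\lambda}{2}\,b^{\,\chi+z-2}\,(\nabla h)^{2}
 + b^{\,-\chi+\frac{z-d}{2}}\,\eta .
```

Demanding invariance of the linear term fixes $z = 2$, invariance of the noise then fixes $\chi = (z-d)/2 = (2-d)/2$, and the nonlinearity scales with exponent $y_\lambda = \chi + z - 2 = (2-d)/2$, which is the relevance criterion used in the next paragraphs.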
If I go to the case of d equals 1-- like pulling a line and having the line fluctuate-- then I have 2 minus 1 over 2, which is 1/2, which means that, because of thermal fluctuations, this line will look like a random walk. You go a distance x; the fluctuations in height will go like the square root of that. OK? So all of that is fine. You would have gotten exactly the same answer from a scaling such as this for the case of the Gaussian model without the nonlinearities. But for the Gaussian model with nonlinearities, we could also then estimate whether the nonlinearity u is relevant. So here we see that the coefficient of our nonlinearity, lambda, is governed by an exponent that is chi plus z minus 2. Our z minus 2 is 0, and our chi is 2 minus d over 2. So whether or not this nonlinearity is relevant, we can see, depends on whether you're above or below two dimensions. So when you are below two dimensions, this nonlinearity is relevant. And you will certainly have different types of scaling phenomena than what you predict from the diffusion equation plus noise. Of course, the interesting case is when you are at the marginal dimension of 2. Now, when you do a proper renormalization group with this nonlinearity, you will find that, unlike the nonlinearity of the Landau-Ginzburg model, which is marginally irrelevant in four dimensions-- du by dl was minus u squared-- this lambda is marginally relevant: d lambda by dl is proportional to plus lambda squared. And actually, the epsilon expansion gives you no information about what's happening in the system. So people have then done numerical simulations. And they find that there is a roughness that is characterized by an exponent of something like 0.4. So when you look at a surface that is grown, it is much, much rougher than the surface of a soap bubble or the surface of a pond.
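The d = 1 statement at the start of this passage-- that a thermally fluctuating line under tension has chi = 1/2-- can be checked mode by mode. For a periodic line of length L, equipartition gives each mode a variance kT L/(σ q²), and summing the modes gives ⟨h²⟩ = kT L/(12σ) exactly, i.e. height fluctuations growing as √L. A small sketch (function name and kT = σ = 1 conventions are mine):

```python
import math

def line_mean_sq_height(L, n_modes=100_000, kT=1.0, sigma=1.0):
    """<h(x)^2> for a periodic line of length L with line tension sigma:
    modes q_n = 2*pi*n/L carry <|h_q|^2> = kT*L/(sigma*q_n^2), and
    <h(x)^2> = (1/L^2) * sum over q of <|h_q|^2>, with q = 0 excluded."""
    total = 0.0
    for n in range(1, n_modes + 1):
        q = 2 * math.pi * n / L
        total += 2 * kT * L / (sigma * q * q)   # the +n and -n modes
    return total / L ** 2

# Closed form: sum 1/n^2 = pi^2/6 collapses this to kT*L/(12*sigma),
# so <h^2> is linear in L -- the chi = 1/2 "random walk" profile.
print(line_mean_sq_height(10.0), 10.0 / 12)
```

This is the linear (noisy-diffusion) benchmark against which the grown surface, with its larger measured roughness exponent, is being compared.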
And the key to all of this is that we wrote down equations on the basis of this generalization of the symmetry arguments that we had learned, now applied to this dynamical system, did an expansion, and found the first term. And we found it to be relevant. And it is actually not that often that you find something that is relevant, so when you do, it is a reason to celebrate. Because most of the time, things are irrelevant, and you end up with boring diffusion equations. So find something that is relevant. And that's my last message to you. |
MIT_8334_Statistical_Mechanics_II_Spring_2014 | 9_Perturbative_Renormalization_Group_Part_1.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. Let's start with our standard starting point of the last few lectures. That is, we are looking at some system. We tried to describe it by some kind of a field, statistical field after some averaging. That is, a function of position. And we are interested in calculating something like a partition function by integrating over all configurations of the statistical field. And these configurations have some kind of a weight. This weight we choose to write as exponential of some kind of a-- something like an effective Hamiltonian that depends on the configuration that you're looking at. And of course, the main thing faced with a problem that you haven't seen before is to decide on what the field is that you are looking at to average. And what kind of symmetries and constraints you want to construct in this form of the weight. In particular, we are sort of focusing on this Landau-Ginzburg model that describes phase transitions. And let's say in the absence of magnetic field, we are interested in a system that is rotationally symmetric. But the procedure is reasonably standard. Maybe in some cases you can solve a particular part of this. Let's call that part beta H0. In the context that we are working with, it's the Gaussian part that we have looked at before. So that's the integral over space. We used this idea of locality. We had an expansion in all things that are consistent with this. But for the purposes of the exactly solvable part, we focus on the Gaussian. So there is a term that is proportional to m squared, gradient of m squared and higher-order terms. 
So clearly for the time being, I'm ignoring the magnetic field. So let's say in this formulation the problem that we are interested is how our partition function depends on this coefficient, which where it goes to 0, the Gaussian weight becomes kind of unsustainable. Now, of course, we said that the full problem has, in addition to this beta H0, a part that involves the interaction. So what I have done is I have written the weight as beta H0 and a part that is an interaction. By interaction, I really mean something that is not solvable within the framework of Gaussian. In our case, what was non-solvable is essentially anything-- and there is infinity of terms-- that don't have second-order powers of m. So we wrote terms like m to the fourth, m to the sixth, and so forth. Now, the key to being able to solve this problem was to make a transformation to Fourier modes. So essentially, what we did was to write our m of x as a sum over Fourier modes. You could write it, let's say, in the discrete form as e to the i q dot x. And whether I write e to the i q dot x or minus i q dot x is not as important as long as I'm consistent within one session at least. And the normalization that I used was 1 over V. And the reason I used this normalization was that if I went to the continuum, I could write it nicely as an integral over q divided by the density of states. The V would disappear. e to the i q x m tilde of q. Now in particular if I do that transformation, the Gaussian part simply becomes 1 over V sum over q. Then the Fourier transform of this kernel t plus k q squared and so forth divided by 2 m of q discrete squared. Which if I go to the continuum limit simply becomes an integral over q t plus k q squared and so forth over 2 m of q squared. Now, once I have the Gaussian weight, from the Gaussian weight I can calculate various averages. 
And the averages are best described by noting that essentially after this transformation I can also write my weight as a product over the contributions of the different modes of something that is of this form, e to the minus beta H0. Now written in terms of these q modes, clearly it's a product of independent contributions. And then of course, there will be the u to be added later on. But when I have a product of independent contributions for each q, I can immediately see that if I look at, say, m evaluated for some q, m evaluated for some different q with the Gaussian weight. And when I calculate things with the Gaussian weight, I put this index 0. So that's my 0 to order or exactly solvable theory. And of course, we are dealing here with a vector. So these things have indices alpha and beta associated with them. And if I look at the discrete version, I have a product over Gaussians for each one of them. Clearly, I will get 0 unless I am looking at the same components. And I'm looking at the same q. And in particular, the constraint really is that q plus q prime should add up to 0. And if those constraints are satisfied, then I am looking at the particular term in this Gaussian. And the expectation value of m squared is simply the variance that we can see is V divided by t plus k q squared, q to the fourth, and so forth. And the thing is that most of the time we will actually be looking at things directly in the limit of the continuum where we replace sums q's with integrals over q. And then we have to replace this discrete delta function with a continuum delta function. And the procedure to do that is that this becomes delta alpha beta. This combination gets replaced by 2 pi to the d delta function q plus q prime, where this is now a direct delta function t plus k q squared plus l q to the fourth and so forth. 
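The Fourier conventions just set up-- m(x) = (1/V) Σ_q e^{iqx} m(q)-- can be made concrete in a minimal discrete sketch, with N lattice sites standing in for the volume V (the function names are mine; this only illustrates that the 1/V normalization sits on the inverse transform):

```python
import cmath

def to_modes(m_x):
    """m(q) = sum over x of exp(-i q x) m(x), with q = 2*pi*k/N on an N-site ring."""
    N = len(m_x)
    return [sum(m_x[x] * cmath.exp(-2j * cmath.pi * k * x / N) for x in range(N))
            for k in range(N)]

def to_real_space(m_q):
    """m(x) = (1/N) sum over q of exp(+i q x) m(q) -- the 1/V normalization."""
    N = len(m_q)
    return [sum(m_q[k] * cmath.exp(2j * cmath.pi * k * x / N) for k in range(N)) / N
            for x in range(N)]

m = [0.3, -1.2, 0.7, 2.0, -0.5, 0.1]
back = to_real_space(to_modes(m))
print([round(v.real, 10) for v in back])  # recovers m up to rounding
```

With this pairing of signs and normalizations, the round trip is exact, which is the consistency condition behind the density-of-states factors V/(2π)^d used below.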
And the justification for doing that is simply that the Kronecker delta is defined such that if I sum over, let's say, all q, the delta that is Kronecker, the answer would be 1. Now, if I go to the continuum limit, the sum over q I have to replace with an integral over q with a density of states, which is V divided by 2 pi to the d. So if I want to replace this with a continuum delta function of q, I have to get rid of this 2 pi to the d over V. And that's what I have done. So basically, you replace-- hopefully I didn't make a mistake. Yes. So the discrete delta I replace with 1 over V: the V disappears and the 2 pi to the d appears. OK? Now, the thing that makes some difficulty is that whereas the rest of these things that we have not included as part of the Gaussian, because of locality, I could write reasonably simply in the space x, when I go to the space q, they become complicated. Because each m of x here I have to replace with a sum or an integral. And I have four of those m's. So here, let's say for the first term that involves u, I have in principle to go over an integral associated with conversion of the first m, conversion of the second m, third m. Each one of them will carry a factor of 2 pi to the d. So there will be three of them. And the reason I didn't write the fourth one is because after I do all of that transformation, I will have an integral over x of e to the i q1 dot x plus q2 dot x plus q3 dot x plus q4 dot x. So I have an integral over x of e to the i q dot x, where q is the sum of the four of them. And that gives me a delta function that ensures the sum of the four q's has to be 0. So basically, one of the m's will carry index q1. The other will carry index q2. The third will carry index q3. And the fourth will carry the index minus q1 minus q2 minus q3. Yes? AUDIENCE: Does it matter which indices for squaring it, or whatever? Sorry. Do you need a subscript for alpha and beta? PROFESSOR: OK. That's what I was [INAUDIBLE].
So this m to the fourth is really m squared m squared, where m squared is a vector that is squared. So I have to put the indices-- let's say alpha alpha-- that are summed over all possibility to get the dot product here. And I have to put the indices beta beta to have the dot products here. OK? Now when I go to the next term, clearly I will have a whole bunch more integrals and things like that. So the e u terms do not look as nice and clean. They were local in real space. But when I go to this Fourier space, they become non-local. q's that are all over this [INAUDIBLE] are going to be coupled to each other through this expression. And that's also why it is called interaction. Because in some sense, previously each q was a mode by itself and these terms give interactions between modes that have different q's. Yes? AUDIENCE: Is there a way to understand that physically of why you get coupling in Fourier space? [INAUDIBLE] higher than [INAUDIBLE]. PROFESSOR: OK. So essentially, we have a system that has translational symmetry. So when you have translational symmetry, this Fourier vector q is a good conserved quantity. It's like a momentum. So one thing that we have is, in some sense, a particle or an excitation that is going by itself with some particular momentum. But what these terms represent is the possibility that you have, let's say, two of these momenta coming and interacting with each other and getting two that are going out. Why is it [INAUDIBLE]? It's partly because of the symmetries that we built into the problem. If I had written something that was m cubed, I had the possibility of 2 going to, 1 or 1 going to 2, et cetera. All right. I forgot to say one more thing, which is that for the Gaussian theory, I can calculate essentially all expectation values most [INAUDIBLE] in this context of the Fourier representation. So this was an example of something that had two factors of m. 
But very soon, we will see that we would need terms that, let's say, involve l factors of m multiplied by each other. So I have a product of m alpha i of q i-- something like that. And again, 0 for the Gaussian expectation value. And if I have written things the way that I have-- that is, I have no magnetic field, so I have m to minus m symmetry-- clearly the answer is going to be 0 if l is odd. If l is even, we have this nice property of Gaussians that we described in 8.333, which is that this will be the sum over all pairings of products of pair averages. So for something like m1, m2, m3, m4, you can have the m1 m2 average multiplying the m3 m4 average, the m1 m3 average multiplying the m2 m4 average, the m1 m4 average multiplied by the m2 m3 average. And this is what's called Wick's theorem, which is an important property of the Gaussian that we will use. So we know how to calculate averages of things of interest, which are essentially products of these factors of m, in the Gaussian theory. Now let's calculate these averages in perturbation theory. So quite generally, suppose I want to calculate the average of some O in a theory that involves averaging over some set of degrees of freedom-- so this could be some trace, some completely unspecified integration-- with a weight that is like, say, e to the minus beta H0 minus u: a part that I can do and a part that I want to treat as a small change to what I can do. The procedure for calculating the average is to multiply the probability by the quantity that I want to average. And of course, the whole thing has to be properly normalized. And this is the normalization, which is the partition function that we had previously. Now, the whole idea of perturbation is to assume that this quantity u is small. So we start to expand e to the minus u. I can certainly do that very easily, let's say, in the denominator. I have e to the minus beta H0, and then I have 1 minus u plus u squared over 2 minus u cubed over 6-- basically, the usual expansion of the exponential.
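Wick's theorem as just stated-- the four-point average equals the sum over the three pairings of two-point averages-- can be verified by direct Monte Carlo on any zero-mean Gaussian vector. A sketch with an arbitrary covariance of my choosing, built as C = A Aᵀ so that x = A z (z i.i.d. standard normal) has exactly that covariance:

```python
import random

# Covariance built from a lower-triangular A: C = A A^T, and x = A z
A = [[1.0, 0.0, 0.0, 0.0],
     [0.5, 1.0, 0.0, 0.0],
     [0.3, 0.2, 1.0, 0.0],
     [0.1, 0.4, 0.2, 1.0]]
C = [[sum(A[i][k] * A[j][k] for k in range(4)) for j in range(4)] for i in range(4)]

rng = random.Random(0)
n_samples = 200_000
acc = 0.0
for _ in range(n_samples):
    z = [rng.gauss(0, 1) for _ in range(4)]
    x = [sum(A[i][k] * z[k] for k in range(4)) for i in range(4)]
    acc += x[0] * x[1] * x[2] * x[3]
estimate = acc / n_samples

# Wick: <x0 x1 x2 x3> = C01*C23 + C02*C13 + C03*C12
wick = C[0][1] * C[2][3] + C[0][2] * C[1][3] + C[0][3] * C[1][2]
print(estimate, wick)
```

The sampled fourth moment matches the pairing formula to within Monte Carlo noise; for the field theory above, the role of C is played by the propagator V δ_{αβ}/(t + K q²).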
In the numerator, I have the same thing, except that there is an O that is multiplying this expansion-- the operator, or the object, that I want to calculate the average of. Now, the first term that I have in the denominator here, 1 multiplied by all integrals of e to the minus beta H0, is clearly what I would call the partition function, or the normalization that I would have for the Gaussian weight. So that's the first term. If I factor that out, then the next term is u integrated against the Gaussian weight and then properly normalized. So the next term will be the average of u with the Gaussian, or whatever other 0th-order weight it is with which I can calculate things, and I have indicated that by 0. And then I would have 1/2 average of u squared 0, and so forth. And the series in the numerator, once I factor out the Z0, is pretty much the same, except that every term will have an O. The Z0's naturally I can cancel out. So what I have from the numerator is o minus ou plus 1/2 ou squared. What I will do with the denominator is to bring it into the numerator, regarding all of these as small quantities. So if I were to write this expression raised to the minus 1 power, I can make a series expansion of all of these terms. If I just had 1 over 1 minus u in the denominator, it would give 1 plus u plus u squared and u cubed, et cetera. But I've only kept things to order of u squared. So when I then correct with this thing in the denominator, the 1/2 becomes minus 1/2 u squared 0. And then, there will be order of u cubed terms. So the answer is the product of two brackets. And I can reorganize that product, again, in powers of u. The lowest-order term is the 0th-order, or unperturbed, average. And the first correction comes from the ou average, minus the average of o times the average of u. You can see that something like a variance or connected correlation or cumulant appears, because I have to subtract out the averages.
And then the next order term, what will I have? I will write it as 1/2. I start with ou squared 0. Then I can multiply this with this, so I will get minus 2 o u0 u0. And then I can multiply o0 with those two terms. So I will have minus o0 u squared 0 plus 2 o0-- this is over here. u0 squared and higher-order terms. So basically, we can see that the coefficients, as I have written, are going to be essentially the coefficients that I would have if I were to expand the exponential. So things like minus 1 to the n over n factorial. And the leading term in all cases is o u raised to the n-th power 0, out of which are subtracted various things. And the effect of those subtractions, let's say we define a quantity which is likely cumulants that we were using in 8.333, which describe the subtractions that you would have to define such an average. So that's the general structure. What this really means-- and I sometimes call it cumulant or connected-- will become apparent very shortly. This is the general result. We have a particular case above, which is this Landau-Ginzburg theory perturbed around the Gaussian. So let's calculate the simplest one of our averages, this m alpha of q m beta of q prime, not at the Gaussian level, but as a perturbation. And actually, for practical reasons, I will just calculate the effect of the first term, which is u m to the fourth. So I will expand in powers of u. But once you see that, you would know how to do it for m to the sixth and all the higher powers. So according to what we have here, the first term is m alpha of q m beta of q prime evaluated with the Gaussian theory. The next term, this one, involves the average of this entity and the u. So our u I have written up there. So I have minus from the first term. The terms that are proportional to u, I will group together coming from here. u itself involved this integration over q1 q2 q3. And u involves this m i of q1 m i of q2 mj of q3 mj of minus q1 minus q2 minus q3. 
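The structure just derived-- ⟨O⟩ ≈ ⟨O⟩₀ − (⟨OU⟩₀ − ⟨O⟩₀⟨U⟩₀) + …-- can be sanity-checked on a zero-dimensional toy of the same form: a single variable with weight exp(−x²/2 − u x⁴), taking O = x². This toy is my own illustration, not something from the lecture; with t = 1, the Gaussian moments ⟨x²⟩₀ = 1, ⟨x⁴⟩₀ = 3, ⟨x⁶⟩₀ = 15 give the first-order prediction 1 − u(15 − 3) = 1 − 12u.

```python
import math

def average_x2(u, x_max=8.0, n=40_000):
    """<x^2> under the weight exp(-x^2/2 - u*x^4), by midpoint quadrature."""
    dx = 2 * x_max / n
    num = den = 0.0
    for k in range(n):
        x = -x_max + (k + 0.5) * dx
        w = math.exp(-x * x / 2 - u * x ** 4)
        num += x * x * w * dx
        den += w * dx
    return num / den

u = 0.001
exact = average_x2(u)
# First-order cumulant formula: <x^2> ~ <x^2>_0 - u*(<x^2 x^4>_0 - <x^2>_0 <x^4>_0)
#                                     = 1 - u*(15 - 3) = 1 - 12u
first_order = 1 - 12 * u
print(exact, first_order)
```

The residual discrepancy is of order u², matching the 1/2 ⟨O U²⟩-type terms written above; the subtraction of ⟨O⟩₀⟨U⟩₀ is what turns the naive ⟨x⁶⟩₀ = 15 into the connected coefficient 12.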
And I have to multiply it. So this is u. I have to multiply it by o. So I have m alpha of q m beta of q prime. So this is my o. This is my u. And I have to take the average of this term. But really, the average operates on the m's, so it will go over here. So that's the average of ou. I have to subtract from that the average of o average of u. So let me, again, write the next term. u will be the same bunch of integrations. I have to do average of o and then average of u. This completes my first correction coming from u, and then there will be corrections to first order coming from V. There will be second-order corrections coming from u squared, all kinds of things that will come into play. But the important thing, again, to realize is the structure. u is this thing that involves four factors of m. The averages are over the m, so I can take them within the integral. And so I have one case which is an expectation value of six m's. Another case, a product of two and a product of four. So that's why I said I would need to know how to calculate in the Gaussian theory product of various factors of m because my interaction term involves various powers of m that will be added to whatever expectation value I'm calculating perturbation theory. So how do I calculate an expectation that involves six factors-- certainly, it's even-- of m? I have to group-- make all possible pairings. So this, for example, can be paired to this. This can be paired to this. This can be paired to this. That's a perfectly well-defined average. But you can see that if I do this, if I pair this one, this one, this one, I will get something that will cancel out against this one. So basically, you can see that any way that I do averaging that involves only things that are coming from o and separately the things that come from u will cancel out with the corresponding-- oops. With the corresponding averages that I do over here. That c stands for connected. 
So the only things that survive are pairings or contractions that pick something that is from o and connect it to something that is from u. And the purpose of all of these other terms at all higher orders is precisely to remove pieces where you don't have full connections among all of the o's and the u's that you are dealing with. So let's see what this is. So I will show you that using connections that involve both o and u, I will have two types of contractions joining o and u. The first type is something like this. I will, again, draw all-- or write down all of the fours. So I have m alpha of q m beta of q prime m i of q1 m i of q2 mj of q3 mj of minus q1 minus q2 minus q3. I have to take that average. And I do that the average according to Wick's theorem as a product of contractions. So let's pick this m alpha. It has to ultimately be paired with somebody. I can't pair it with m beta because that's a self-contraction and will get subtracted out. So I can pick any one of these fours. As far as I'm concerned, all four are the same, so I have a choice of four as to which one of these four operators from u I connect to. So that 4 is one of the numerical factors that ultimately we have to take care of. Then, the two types comes because the next m that I pick from o I have two possibilities. I can connect it either with the partner of the first one that also carries index i or I can connect to one of the things that carries the opposite index j. So let's call type 1 where I make the choice that I connect to the partner. And once I do that, then I am forced to connect these two together. Now, each one of these pairings connects one of these averages. So I can write down what that is. So the first one connected an alpha to an i as far as indices were concerned. It connected q to q1, so I have 2 pi to the d a delta function q plus q1. And the variance associated with that, which is t plus k q squared, et cetera. The second pairing connects a beta to an i. 
So that's a delta beta i. And it connects q prime to q2. And so the variance associated with that is t plus k q prime squared and so forth. And finally, the third pairing connects j to itself j. So I will get a delta jj. And then I have 2 pi to the d. q3 to minus q1 minus q2 minus q3. So I will get minus q1 minus q2, and then I have t plus, say, k q3 squared and so forth. Now, what I am supposed to do is at the next stage, I have to sum over indices i and j and integrate over q1 q2 q3. So when I do that, what do I get? There is an overall factor of minus u. Let's do the indices. When I sum over i, delta alpha i delta beta i becomes-- actually, let me put the factor of 4 before I forget it. There is a factor of 4 numerically. Delta alpha i delta beta i will give me a delta alpha beta. When I integrate over q1, q1 is set to minus q. So this after the integration becomes q. When I integrate over q2, the delta function q2 forces minus q2 to be q prime. And through the process, two of these factors of 2 pi to the d disappear. So what I'm left with is 2 pi to the d. This delta function now involves q plus q prime. And then in the denominator, I have this factor of t plus k q squared. I have t plus k q prime squared. Although, q prime squared and q squared are the same. I could have collapsed these things together. I have one integration left over q3. And these two factors went outside the integral [INAUDIBLE] independent q3. The only thing that depends on q3 is t plus k q3 squared and so forth. So that was easy. AUDIENCE: I have a question. PROFESSOR: Yes. AUDIENCE: If you're summing over j, won't you get an n? PROFESSOR: Thank you very much. I forgot the delta jj. Summing over j, I will get a factor of n. So what I had written here as 4 should be 4n. Yes. AUDIENCE: This may be a question too far back. But when you write a correlation between two different m's, why do you write delta function of q plus q prime instead of q minus q prime? PROFESSOR: OK. 
Again, go back all the way to here when we were doing the Gaussian integral. I will have for the first one, q1. For the second m, I will write q2. So when I Fourier transform this term, I will have e to the i q1 plus q2 dot x. And then when I integrate over x, I will get a delta function q1 plus q2. So that's why I write all of these as absolute values squared, because I could have written this as m of q m of minus q, but I realized that m of minus q is the complex conjugate of m of q. So all of these are absolute values squared. Now, the second class of contraction is-- again, write the same thing, m alpha of q m beta of q prime m i of q1 m i of q2, mj of q3 mj of minus q1 minus q2 minus q3. The first step is the same. I pick m alpha of q and I have no choice but to pick one of the four possibilities that I have for the operators that appear in u. But for the second one, previously I chose to connect it to something that was carrying the same index. Now I choose to connect it to something that carries the other index, j in this case. And there are two things that carry index j, so I have two choices there. And then the remaining two have to be connected. Yes? AUDIENCE: Just going back a little bit. Are you assuming that your integral over q3 converges because you're only integrating over the Brillouin zone? PROFESSOR: Yes. AUDIENCE: OK. PROFESSOR: That's right. Any time I see a divergent integral, I have a reason to go back to my physics and see why physics will avoid infinities. And in this case, it's because all of my theories have an underlying length scale associated with them, and there is an associated maximum value that I can go to in Fourier space. The only possible singularities that I want to get are coming from q goes to 0. And again, if I really want to physically cut that off, I would put in the size of the system. But I'm interested in systems that become infinite in size. So the first term for this way of contracting things is as follows.
There are eight such terms. I should have really put the four here. There are eight such types of contractions. Then I have a delta alpha i 2 pi to the d delta function q plus q1 divided by t plus k q squared and so forth. The first contraction is exactly the same as before. The next contraction I connect i to j and q prime to q3. So I have a delta beta j 2 pi to the d delta function q prime going to q3 divided by t plus k q prime squared and so forth. And the last contraction connects an i to a j. Delta ij. I have 2 pi to the d. Connecting q2 to minus q1 minus q2 minus q3 will give me a delta function which is minus q1 minus q3. And then I have t plus k-- I guess in this case-- q2 squared and so forth. So once more sum ij. Integrate q1 q2 q3 and let's see what happens. So again, it's a term that is proportional to minus u. The numerical coefficient that it carries is 8. And there is no n here because when I sum over i, you can see that j is set to be the same as alpha. Then when I sum over j, I set alpha to be the same as beta. So there is just a delta alpha beta. When I integrate over q1, q1 is set to minus q. q3 is set to minus q prime. So this factor becomes the same as q plus q prime. And the two variances, which are in fact the same, I can continue to write as separate entities but they're really the same thing. And then the one integral that is left-- I did q1 and Q3-- it's the integral over q2, 2 pi to the d 1 over t plus K q2 squared and so forth. It is, in fact, exactly the same integral as before, except that the name of the dummy integration variable has changed from q2 to q3, or q3 to q2. So we have calculated m alpha of q m beta of q prime to the lowest order in perturbation theory. To the first order, what I had was a delta alpha beta 2 pi to the d delta function q plus q prime divided by t plus k q squared. Now, note that all of these factors are present in the two terms that I had calculated as corrections. 
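The combinatorial factors quoted above (4n from the type-1 contractions and 8 from type 2, for a total of 4 times n plus 2) can be checked by brute force. The following sketch, which is my own illustration rather than anything from the lecture, enumerates all pairings of the six m's, drops the disconnected pairing in which the two external operators contract with each other, and sums the Kronecker delta index factors for an O(n) model.

```python
def pairings(items):
    """Recursively generate all perfect matchings of an even-length list."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for k, partner in enumerate(rest):
        for tail in pairings(rest[:k] + rest[k + 1:]):
            yield [(first, partner)] + tail

def wick_coefficient(n, alpha, beta):
    """Sum of index delta-factors over all *connected* Wick pairings of
    <m_alpha m_beta (m_i m_i)(m_j m_j)>: slots 0 and 1 are the external
    operators; slots 2, 3 carry the summed index i; slots 4, 5 carry j."""
    total = 0
    for i in range(n):
        for j in range(n):
            idx = {0: alpha, 1: beta, 2: i, 3: i, 4: j, 5: j}
            for match in pairings([0, 1, 2, 3, 4, 5]):
                # the disconnected piece <O><U> pairs the external legs together
                if (0, 1) in match:
                    continue
                term = 1
                for a, b in match:
                    if idx[a] != idx[b]:  # each contraction carries a Kronecker delta
                        term = 0
                        break
                total += term
    return total

n = 3
print(wick_coefficient(n, 0, 0))  # -> 20, i.e. 4*n + 8 for alpha == beta
print(wick_coefficient(n, 0, 1))  # -> 0 for alpha != beta
```

The type-1 pairings contribute the factor of n (the free j sum) and the mixed pairings contribute the 8, matching the counting done at the board.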
So I can factor this out and write the correction as 1 minus something proportional to u. The coefficient is 4n plus 8, which I will write as 4 times n plus 2. I took out one factor of t plus k q squared. There is one factor that will be remaining; therefore, I divide by t plus k q squared. And then I have one integration over some variable. Let's call it k. It doesn't matter what I call the integration variable. 1 over t plus k, k squared, and so forth. And presumably, there will be higher-order terms. Now again, I did the calculation specifically for the Landau-Ginzburg, but the procedure you would have been able to do for any field theory. You could have started with a part that you can solve exactly and then look at perturbations and corrections. Now, there is, in fact, a reason why this correction that we calculated had exactly the same structure of delta functions as the original one. And why I anticipate that higher-order terms, if I were to calculate, will preserve that structure. And the reason has to do with symmetries. Because quite generally, I can write for anything-- the average of m alpha of q m beta of q prime without doing perturbation theory. Again, let's remember m alphas of q are going to be related to m of x by inverse Fourier transformation. So m alpha of q I can write as an integral d dx e to the-- I guess by that convention, it has to be minus i q dot x. m alpha of x. And again, m beta of q prime I can write as minus i q prime dot x prime. And I integrate also over an x prime of m alpha of x m beta of x prime. Now, these are evaluated in real space as opposed to Fourier space. And the average goes over here. At this stage, I don't say anything about perturbation theory, Gaussian, et cetera. What I expect is that this is a function that, in a system that has translational symmetry, only depends on x minus x prime.
Furthermore, in a system that has rotational symmetry that is not spontaneously broken, that is approaching from the high temperature side, then just rotational symmetry forces you that-- the only tensor that you have has to be proportional to delta alpha beta. So I can pick some particular component-- let's say m1-- and I can write it in this fashion. So the rotational symmetry explains the delta alpha beta. Now, knowing that the function that I'm integrating over two variables actually only depends on the relative position means that I can re-express this in terms of the relative and center of mass coordinates. So I can express that expression as e to the minus i q minus q prime x minus x prime over 2. And then I will write it as minus i q plus q prime x plus x prime over 2. If you expand those, you will see that all of the cross terms will vanish and I will get q dot x and q prime dot x prime. Yes. AUDIENCE: [INAUDIBLE]? PROFESSOR: Yes. Thank you. So now I can change integration variables to the relative coordinate and the center of mass coordinate rather than integrating over x and x prime. The integration over the center of mass, the x plus x prime variable, couples to q plus q prime. So it will immediately tell me that the answer has to be proportional to q plus q prime. I had already established that there is a delta alpha beta. So the only thing that is left is the integration over the relative coordinate of e to the minus i some q-- either one of them. q dot r. Since q prime is minus q, I can replace it with e to the i q dot the relative coordinate. m1 of r m1 of 0. So that's why for a system that has translational symmetry and rotational symmetry, this structure of the delta functions is really imposed for this type of expectation value. Naturally, perturbation theory has to obey that. But then, this is a quantity that we had encountered before. 
If you recall when we were scattering light out of the system, the amplitude of something that was scattered was proportional to the Fourier transform of the correlation function. And furthermore, in the limit where S is evaluated for q equal to 0, what we are doing is we're essentially integrating the correlation function. We've seen that the integrals of correlation function correspond to the susceptibilities. So you may have thought that what I was calculating was a two-point correlation function in perturbation theory. But what I was actually leading up to is to know what the result is for scattering from this theory. And in some limit of it, I've also calculated what the susceptibility is, how the susceptibility is corrected. And again, if you recall the typical structure that people see for S of q is that S of q is something like 1 over something like this. This is the Lorentzian line shapes that we had in scattering. And clearly, the Lorentzian line shape is obtained by Fourier transformation and expectation values of these expansions that we make. So it kind of makes sense that rather than looking at this quantity, I should look at its inverse. So I have calculated s of q, which is the formula that I have up there. So this whole thing here is S of q. If I calculate its inverse, what do I get? First of all, I have to invert this. I have t plus k q squared, which is what would have given me the Lorentzian if I were to invert it. And now we have found the correction to the Lorentzian if you like, which is this object raised to the power of minus 1. But recall that I've only calculated things to lowest order in u. So whenever I see something and I'm inverting it just like I did over here, I better be consistent to order of u. So to order of u, I can take this thing from the numerator, bring it to the num-- from denominator to numerator at the expense of just changing the sign. Order of u squared that we haven't really bothered to calculate. 
So now it's nice because you can see that when I expand this, this factor will cancel that factor. So the inverse has the structure that we would like. It is t plus something that is a constant, doesn't depend on q, 4 n plus 2 u. Well, actually, no. Yeah. Because this denominator gets canceled. I will get 4 n plus 2 u integral over k 2 pi to the d 1 over t plus k k squared and so forth. And then I have my k q squared. And presumably, I will have higher-order terms both in u and higher powers of q, et cetera. And in particular, the inverse of the susceptibility is simply the first part. Forget about the k q squared. So the inverse of susceptibility is t plus 4 n plus 2 u integral d dk 2 pi to the d 1 over t plus k k squared and so forth. Plus order of things that we haven't computed. So why is it interesting to look at susceptibility? Because susceptibility is one of the quantities-- it always has to be positive-- that we were associating previously with singular behavior. And in the absence of the perturbative correction from the Gaussian, the susceptibility we calculated many times. It was simply 1 over t. If I had added a field, the field h would have changed the free energy by an amount that would be h squared over 2t, as we saw. Take two derivatives, I will get 1 over t for the susceptibility. So the 0 order susceptibility that I will indicate by chi sub 0 was something that was diverging at t equals to 0. And we were identifying the critical exponent of the divergence as gamma equals to 1. So here, I would have added gamma 0 equals to 1. Because of the linear divergence-- and the linear divergence can be traced back to the linearity of the vanishing of the inverse susceptibility with temperature. Now, let's see whether we have calculated a correction to gamma. Well, the first thing that you notice is that if I evaluate the new chi inverse at 0, all I need to do is to put 0 in this formula. I will get 4 n plus 2 u.
This integral d dk 2 pi to the d 1 over k k squared. I set t equals to 0 here. Now this is, indeed, an integral that if I integrate all the way to infinity would diverge on me. I have to put an upper cutoff. It's a simple integral. I can write it as integral 0 to lambda dk k to the d minus 1 with Sd, the surface area of a d-dimensional sphere. I have this 2 pi to the d out front and I have a k squared here. I can put the K out here. So you can see that this is an integral that's just a power. I can simply do that. The answer ultimately will be 4 n plus 2 u Sd 2 pi to the d. There is a factor of 1 over K that comes into play. The integral of this will give me the upper cutoff to the d minus 2 divided by d minus 2. So what we find is that the corrected susceptibility to lowest order does not diverge at t equals to 0. Its inverse is a finite value. So actually, you can see that I've added something positive to the denominator so the value of susceptibility is always reduced. So what does that mean? Does that mean that the susceptibility does not have a singularity anymore? The answer is no. It's just that the location of the singularity has changed. The presence of u m to the fourth gives some additional stiffness that you have to overcome. t equals to 0 is not sufficient for you. You have to go to some other point tc. So I expect that this thing will actually diverge at a new point tc that is negative. And if it diverges, then its inverse will be 0. So I have to solve the equation tc plus 4 n plus 2 u integral d dk 2 pi to the d of 1 over tc plus k k squared and so forth, equal to 0. So this seems like an implicit equation in tc because I have to evaluate the integral that depends on tc, and then have to set that function to 0. But again, we have calculated things only correctly to order of u. And you can see already that tc is proportional to u. So this answer here is something that is order of u presumably.
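As a numerical sanity check (my own sketch, with arbitrary parameter values rather than anything from the lecture), the cutoff integral above can be evaluated by a simple midpoint rule and compared against the closed form Sd Lambda^(d-2) / ((2 pi)^d K (d-2)):

```python
import math

def chi_inv_correction(d, K, Lam, steps=200_000):
    """Midpoint-rule estimate of int_0^Lam dk  S_d k^(d-1) / ((2 pi)^d K k^2).
    The integrand ~ k^(d-3) is integrable at k = 0 for d > 2."""
    S_d = 2 * math.pi ** (d / 2) / math.gamma(d / 2)  # surface of unit sphere in d dims
    h = Lam / steps
    total = 0.0
    for i in range(steps):
        k = (i + 0.5) * h
        total += S_d * k ** (d - 1) / ((2 * math.pi) ** d * K * k ** 2) * h
    return total

def chi_inv_closed_form(d, K, Lam):
    """S_d Lam^(d-2) / ((2 pi)^d K (d-2)), as derived at the board."""
    S_d = 2 * math.pi ** (d / 2) / math.gamma(d / 2)
    return S_d * Lam ** (d - 2) / ((2 * math.pi) ** d * K * (d - 2))

d, K, Lam = 3, 1.0, 2.0
print(chi_inv_correction(d, K, Lam), chi_inv_closed_form(d, K, Lam))  # the two agree
```

The factors of 4(n+2)u multiplying this integral are omitted, since they are common to both expressions.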
And a u compared to the u out front will give me a correction that is order of u squared. So I can ignore this thing that is over here. So to order of u, I know that tc is actually minus what I had calculated before. So I get that tc is minus this 4 n plus 2 u Sd lambda to the d minus 2, divided by K 2 pi to the d d minus 2. It doesn't matter what it is, it's some non-universal value. Point is that, again, the location of the phase transition certainly will depend on the parameters that you put in your theory. We readjusted our theory by putting in m to the fourth. And certainly, that will change the location of the transition. So this is what we found. The location of the transition is not universal. However, we kind of hope and expect that the singularity, the divergence of susceptibility, has a form that is universal. There is an exponent that is characteristic of that. So asking the question of how this corrected chi diverges at this tc is the same as asking the question of how its inverse vanishes at tc. So what I am interested in is to find out what the behavior of chi inverse is in the vicinity of the point that it goes to 0. So basically, what this singularity is. This is, of course, 0. By definition, chi inverse of tc is 0. So I am asking how the inverse of chi vanishes when I approach tc. So I have the formula for chi inverse once I substitute t, once I substitute tc, and I subtract them. To lowest order I have t minus tc. To next order, I have this 4 u n plus 2 integral over k 2 pi to the d. I have for chi inverse of t 1 over t plus k k squared minus 1 over tc plus k k squared. And terms that I have not calculated are certainly order of u squared. Now, you can see that if I combine these two into the same denominator that is the product, in the numerator I will get a factor of t minus tc. The k k squared terms cancel. So I can factor out this t minus tc between the two terms. Both terms vanish at t equals to tc.
And then I can look at what the correction is to one, just like I did before. The correction is going to-- actually, this will give me tc minus t, so I will have a minus 4 u n plus 2 integral d dk 2 pi to the d. The product of two of these factors, t plus k-- tc plus k k squared t plus k k squared, and then order of u squared. OK? Happy with that? Now again, this tc we've calculated is order of u. And consistently, to calculate things to order of u, I can drop that. And again, consistently to doing things to order of u, I can add a tc here. And that's also a correction that is order of u. And this answer would not change. The justification of why I choose to do that will become apparent shortly, but it's consistent that this is left. So what I find at this stage is that I need to evaluate an integral of this form. And again, with all of these integrals, we better take a look as to what the most significant contribution to the integral is. And clearly, if I look at k goes to 0, there are various factors out there that as long as t minus tc is positive, I will have no worries because this k squared will be killed off by factors of k to the d minus 1 in dimensions above 2. But if I go to large k-values, I find that large k-values, the singularity is governed by k to the power of d minus 4. So as long as I'm dealing with things that have some upper cutoff, I don't have to worry about it even in dimensions greater than 4. In dimensions greater than 4, what happens is that the integral is going to be dominated by the largest values of k. But those largest values of k will be cutoff by lambda. The answer ultimately will be proportional to 1 over k squared, and then k to the power of d minus 4 replaced by lambda to the d minus 4-- various overall coefficient of d minus 4 or whatever. It doesn't matter. 
On the other hand, if you go to dimensions less than 4-- again, larger than 2, but I won't write that for the time being-- then the behavior at large k is perfectly convergent. So you are integrating a function that goes up, comes down. You can extend the integration all the way to infinity, end up with a definite integral. We can rescale all of our factors of k to find out what that definite integral is dependent on. And essentially, what it does is it replaces this lambda with the characteristic value of k that corresponds roughly to the maximum. And that's going to occur at something like t minus tc over k to the power of 1/2. So I will get d minus 4 over 2. There is some overall definite integral that I have to do, which will give me some numerical coefficient. But at this time, let's forget about the numerical coefficient. Let's see what the structure is. So the structure then is that chi inverse of t, the singularity that it has is t minus tc to the 0 order. Same thing as you would have predicted for the Gaussian. And then we have a correction, which is this minus something that goes after all of these things with some coefficient. I don't care what that coefficient is. u n plus 2 divided by k squared. And then multiplied by lambda to the power of d minus 4, or t minus tc over k to the power of d minus 4 over 2. And then presumably, higher-order terms. And whether you have the top or the bottom will depend on d greater than 4 or d less than 4. So you see the problem. If I'm above four dimensions, this term is governed by the upper cutoff. But the upper cutoff is just some constant. So all that happens is that the dependence remains as being proportional to t minus tc. The overall amplitude is corrected by something that depends on u. You are not worried. You say that the leading singularity is the same thing as I had before. Gamma will stay to be 1. 
I try to do that in less than four dimensions and I find that as I approach tc, the correction that I had actually itself becomes divergent. So now I have to throw out my entire perturbation theory because I thought I was making an expansion in quantity that I can make sufficiently small. So in usual perturbation theory, you say choose epsilon less than 10 to the minus 100, or whatever, and then things will be small correction to what you had at 0 order. Here, I can choose my u to be as small as I like. Once I approach tc, the correction will blow up. So this is called a divergent perturbation theory. Yes. AUDIENCE: So could we have known a priori that we couldn't get a correction to gamma from the perturbation theory because the only way for gamma to change is for the correction to have a divergence? PROFESSOR: You are presuming that that's what happens. So indeed, if you knew that there is a divergence with an exponent that is larger than gamma, you probably could have guessed that you wouldn't get it this way. Let's say that we are choosing to proceed mathematically without prior knowledge of what the experimentalists have told us, then we can discover it this way. AUDIENCE: I was thinking if you're looking for how gamma changes due to the higher-order things, if we found that our perturbation diverged with a lower exponent than gamma, then the leading one would still be there, original gamma would be gone. PROFESSOR: Yes. AUDIENCE: And then if it's higher, then we have the same problem. PROFESSOR: That's right. So the problem that we have is actually to somehow make sense of this type of perturbation theory. And as you say, it's correct. We could have actually guessed. And I'll give you another reason why the perturbation theory would not have worked. 
But the only thing that we can really do is perturbation theory, so we have to be clever and figure out a way of making sense of this perturbation theory, which we will do by combining it with the renormalization group. But a better way or another way to have seen maybe why this does not work is good old-fashioned dimensional analysis. I have within the exponent of the weight that I wrote down terms that are of this form: t m squared, k gradient of m squared, u m to the fourth, and so forth. Since whatever is in the exponent should be dimensionless-- I usually write beta H, for example-- we know that this t has some dimension. The square of the dimension of m multiplied by length to the d should be dimensionless. Similarly, k m squared again. Because of the gradient, l to the d minus 2, that combination should be dimensionless. And my u m to the fourth l to the d should be dimensionless. So we can get rid of the dimensions of m by dividing, let's say, u m to the fourth by the square of k m squared. So we can immediately see that with u divided by k squared, I get rid of the dimensions of m. l to the minus d times l to the 2d minus 4 gives me l to the d minus 4, so u over k squared times l to the 4 minus d is dimensionless. So any perturbation theory that I write down ultimately where I have some quantity x, which is at 0 order 1, and then I want to make a correction where u appears, I should have something, u over k squared, and then some power of length to make the dimensions work out. So what lengths do I have available to me? One length that I have is my microscopic length a. So I could have put here a to the power of 4 minus d. But there is also an emergent length in the problem, which is the correlation length. And there is no reason why the dimensionless form that involves the correlation length should not appear. And indeed, what we have over here to 0 order, our correlation length had the exponent 1/2 divergence. So this is really the 0 order correlation length that is raised to the power of 4 minus d.
So even before doing the calculation, we could have guessed on dimensional grounds that it is quite possible that we think we are expanding in u, but at the end of the day, we are expanding in u times xi to the power of 4 minus d. And there is no way that that's a small quantity on approaching the phase transition. And that hits us in the face, and is also the reason why I replaced this t over here with t minus tc, because the only place where I expect singularities to emerge in any of these expansions is at tc. I arranged things so they would appear at the right place. So should we throw out perturbation theory completely, since the only thing that we can do is really perturbation theory? Well, we have to be clever about it. And that's what we will do in the next lectures.
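The divergence of the effective expansion parameter can be seen numerically. In d = 3, the relative first-order correction involves the integral of 1 over (t minus tc plus K k squared) squared, which should scale as (t minus tc) to the power (d minus 4)/2, that is, as an inverse square root. The sketch below (my own, with illustrative parameter values) estimates the integral at two reduced temperatures and extracts the log-log slope:

```python
import math

def correction_integral(dt, K=1.0, Lam=10.0, steps=400_000):
    """Midpoint-rule estimate of the d = 3 correction integral
    int_0^Lam dk  4 pi k^2 / ((2 pi)^3 (dt + K k^2)^2),
    expected to scale as dt**(-1/2) as dt -> 0 with Lam fixed."""
    h = Lam / steps
    total = 0.0
    for i in range(steps):
        k = (i + 0.5) * h
        total += 4 * math.pi * k ** 2 / ((2 * math.pi) ** 3 * (dt + K * k ** 2) ** 2) * h
    return total

# effective log-log slope between two reduced temperatures t - tc
i1, i2 = correction_integral(1e-4), correction_integral(1e-6)
slope = math.log(i2 / i1) / math.log(1e-6 / 1e-4)
print(slope)  # close to (d - 4)/2 = -0.5
```

However small u is, multiplying it by a quantity that blows up as (t minus tc) to the minus 1/2 ruins the naive expansion, which is exactly the problem stated above.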
MIT 8.334 Statistical Mechanics II, Spring 2014. Lecture 24: Dissipative Dynamics. The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, let's start. So we were talking about melting in two dimensions, and the picture that you had was something like a triangular lattice, which at zero temperature has particles sitting at precise sites-- let's say, on this triangular lattice-- but then at finite temperature, the particles will start to deform. And the deformations were indicated by a vector u. And the idea was that this is like an elastic material, as long as we're thinking about these long wavelength deformations. And the energy cost can be written for an isotropic material in two dimensions in terms of two invariants. And traditionally, it is written in terms of the so-called Lamé coefficients, mu and lambda. Where this uij, which is the strain, is obtained by taking derivatives of the deformation, d i u j, and symmetrizing it. This symmetrization essentially eliminates any energy cost for rotations. And then because of this simple quadratic, translationally invariant form, we could also express this in terms of Fourier modes. And I'm going to write the Fourier description slightly differently than last time. Basically, this whole form can be written as 2 mu plus lambda over 2 times q dot u tilde of q squared. And the other term-- whereas previously I had written things in terms of q dot u and q squared u squared-- we write it in terms of q crossed with u tilde of q squared. Essentially, you can see that this clarifies that you're going to have modes that are in the direction of q, the longitudinal modes, whose cost is 2 mu plus lambda, and those that are transverse or orthogonal to the direction of q, whose cost is just mu.
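The decomposition into longitudinal and transverse modes can be verified for a single Fourier mode. Writing the strain of a plane wave as u_ij going to (i/2)(q_i u_j + q_j u_i), the energy density mu u_ij u_ij plus (lambda/2) u_kk squared should equal (2 mu + lambda)/2 times |q dot u| squared plus mu/2 times |q cross u| squared. A quick check with random numbers (my own sketch, using the conventions just stated):

```python
import random

def strain_energy(q, u, mu, lam):
    """mu u_ij u_ij + (lam/2) u_kk^2 for one Fourier mode in 2D:
    u_ij -> (i/2)(q_i u_j + q_j u_i), so |u_ij|^2 = |q_i u_j + q_j u_i|^2 / 4."""
    e = 0.0
    for i in range(2):
        for j in range(2):
            e += mu * abs(q[i] * u[j] + q[j] * u[i]) ** 2 / 4
    qdotu = q[0] * u[0] + q[1] * u[1]
    return e + (lam / 2) * abs(qdotu) ** 2

def mode_energy(q, u, mu, lam):
    """Longitudinal/transverse form: (2 mu + lam)/2 |q.u|^2 + mu/2 |q x u|^2."""
    qdotu = q[0] * u[0] + q[1] * u[1]
    qcrossu = q[0] * u[1] - q[1] * u[0]  # z-component of q x u in 2D
    return (2 * mu + lam) / 2 * abs(qdotu) ** 2 + mu / 2 * abs(qcrossu) ** 2

random.seed(0)
q = [random.gauss(0, 1), random.gauss(0, 1)]                             # real wavevector
u = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]  # complex amplitude
print(abs(strain_energy(q, u, 2.0, 0.7) - mode_energy(q, u, 2.0, 0.7)))  # ~ 0
```

The identity holds for any complex amplitude, since q squared |u| squared splits exactly into |q dot u| squared plus |q cross u| squared in two dimensions.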
And clearly if I were to go into real space, this is kind of related to a divergence of u. And the divergence of u corresponds to essentially squeezing or expanding this deformation. So what this measures is essentially the cost of changing the density. And this combination is related to the bulk modulus. You have that even for a liquid. So if you have a liquid, you try to squeeze it. There will be a bulk energy cost. And this term, which in real space is kind of related to the curl of u, you would say corresponds to making the rotations. So if you try to rotate this material locally, then the corresponding shear deformation has a cost that is indicated by mu, the shear modulus. And basically what really makes a solid is this term. Because as I said, a liquid also has the bulk modulus, but lacks the resistance when you try to shear it, which is captured by this term that is unique and characteristic of a solid. So this is the energy cost. The other part of this whole story is that this structure has order. And we can characterize that order, which makes it distinct from a liquid or gas, in a number of ways. One was to do an x-ray scattering, and then you would see the Bragg peaks. And really that type of order is translational. And you characterize that by an order parameter. It's kind of like a spin that you have in the case of a magnet being up or down. In this case, this object was e to the i g dot u-- the deformation that you have at location r. And then these g's are chosen to be the reciprocal lattice vectors. It doesn't really matter whether I write here u of r or the actual position. Because the actual positions at zero temperature have values r0 such that their dot product with that g is a multiple of 2 pi. And so essentially, that's what captures this. Clearly, if I start with a zero temperature picture and just move this around, the phase of this order parameter over here will change, but it will be the same across the system.
And so this is long range correlation that is present at zero temperature, and you can ask what happens to it at finite temperature. So we can look at the rho g at some position, rho g star at some other position. And so that was related to exponential of minus g squared over 2 times something like the average of u of x minus u of 0 squared. And what we saw was that this thing had a characteristic that it was falling off with distance according to some kind of power law. The exponent of this power law, when calculated, clearly is related to this g squared. Because this is the quantity that goes logarithmically. And so the answer was g squared over 4 pi times a combination that depended on these two modes being present. So you have mu and 2 mu plus lambda-- the combination 3 mu plus lambda, divided by mu times 2 mu plus lambda. You had a form such as this. Now this result was obtained as long as we were treating this field, u, as just the continuum field that satisfies this. And this result is really different, also, from the expectation that at very high temperature a particle in a liquid should not know anything about a particle further out in the liquid, as long as they're separated beyond some small correlation length. So we expect this to actually decay exponentially at high temperatures. And we found that we could account for that by the addition of these dislocations, which can cause a transition to a high temperature phase in which rho g, rho g star, between x and 0, decays exponentially. As opposed to this algebraic behavior, indicating that these dislocations-- once you go to sufficiently high temperature, such that the entropy of creating and rearranging these dislocations overcomes the large cost of creating them in the first place, then you'll have this absence of translational order, and some kind of exponential decay of this order parameter. So at this stage, you may feel comfortable enough to say that addition of these dislocations causes our solid to melt and become a liquid. Now, I indicated, however, that the solid also has orientational order.
What I could do is-- at each location in the solid, I can ask how much the angle has been deformed, and look at the bond angle. So maybe this particle moved here, and this particle moved here. Somewhere else, the particles may have moved in a different fashion. And the angle that was originally, say, along the x direction has rotated somewhere else. And clearly, again, at zero temperature, I can look at the correlations of this angular order, and they would be the same across the system. I can ask what happens when I include these deformations and then the dislocations. So in the same way that we defined the translational order parameter, I can define an orientational order parameter. Let's call it psi at some location, r, which is e to the 6 i theta at that location r. Except that when I look at the triangular lattice, it may be that the triangles have actually rotated by 60 degrees or 120 degrees, and I can't really tell whether I rotated once, zero times, twice, et cetera. So because of this symmetry of the original lattice under theta going to theta plus 2 pi over 6, I have to use something like this that will not be modified if I make this transformation, even at zero temperature. If I miscount some angle by 60 degrees, this will remain fine. Now I want to calculate the correlations of this theta from one part of the system to another part of the system. So for that, what I need to do is to look at the relationship between theta and the distortion field, u, that I told you before. Now you can see that right on the top right corner I took the distortion field, and I took its derivative, and then symmetrized the result. And that symmetrization actually removes any rotation that I would have. So in order to bring back the rotation, I just have to put a minus sign.
And indeed, one can show that the distortion or displacement u of r across my system-- let's call it u of x-- leads to a corresponding angular distortion, theta, at x, which is minus one half z hat dotted with curl of u. So if, rather than doing the d i u j plus d j u i, I put a minus sign, you can see that I have the structure of a curl. In two dimensions, actually, the curl would be something that would be pointing only along the z direction. And so I just make a scalar out of it by taking the component along the z direction. And so you can do some distortion, and convince yourself that for each distortion you will get an angle that is this. AUDIENCE: Do we need some kind of normalization to fix the dimensions of this? Because curl of u has dimensions of-- PROFESSOR: I'm only talking about two dimensions. And in any case, you can see that u is a distortion-- is a displacement-- and the gradient divides by a length, so this thing is dimensionless as long as you have these dimensions. AUDIENCE: Sorry. PROFESSOR: Yes? AUDIENCE: That's a 2, right? Not a c? PROFESSOR: That's a 2. It's the same 2 that I have for the definition of the strain. Rather than a plus, you put a minus. AUDIENCE: So can we think of these as two sets of Goldstone modes, or is that not a way to interpret it? Is it like two order parameters? I mean, you have a thing that has u dependence, but-- PROFESSOR: OK, so let's look at this picture over here. You do have two sets of Goldstone modes corresponding to longitudinal and transverse. You can see that this curl is the thing that I call the angle. So if you like, you can put the angle over here. But the difference between putting an angle here, and this term, is that in terms of the angle, there is no q dependence. So it is not a Goldstone mode, because the cost of making a distortion of wave number q does not vanish as q goes to 0 the way q squared does. All right, so then I can look at the correlation between, say, psi of x, psi star of 0.
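Before moving on to the correlations, the bond-angle formula theta = -(1/2) z hat dot curl u can be sanity-checked numerically. This is my own finite-difference sketch, not from the lecture: for a small rigid rotation by phi, the displacement is u = (-phi*y, phi*x), the z component of curl u is 2*phi everywhere, and with the lecture's minus-sign convention the formula returns theta = -phi uniformly.

```python
import numpy as np

# Check theta = -(1/2) zhat . (curl u) on a rigid rotation by angle phi.
phi = 0.01
xs = np.linspace(-1.0, 1.0, 101)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")   # axis 0 is x, axis 1 is y

u_x = -phi * Y                              # rigid rotation displacement
u_y = phi * X

duy_dx = np.gradient(u_y, dx, axis=0)
dux_dy = np.gradient(u_x, dx, axis=1)
theta = -0.5 * (duy_dx - dux_dy)            # -(1/2) zhat . curl u
```

Since the displacement is linear in position, the finite differences are exact and theta is constant across the grid, as a pure rotation should be.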
And what I will be calculating is the expectation value of e to the 6 i-- and then I will have this factor of-- so let me write it in this fashion-- theta of x minus theta of 0. Since u is Gaussian distributed, theta is Gaussian distributed. So for any Gaussian distributed entity, we can write the average of the exponential of something as the exponential of minus 1/2 the average of the square of whatever is in the exponent. So I will get 36 divided by 2. I do have the expectation value of delta theta squared. But delta theta is related, up to this factor of 1/4, to some expectation value of curl u. So I would need to calculate curl u at x minus curl u at 0, the whole thing squared, with the Gaussian average. Now, this entity-- clearly what I can do is to go back and look at these things in terms of Fourier space, rather than position space. So this becomes an integral d 2 q 2 pi to the d. I will get e to the i q dot x minus 1. And then I have something like q cross u tilde of q. And I have to do that twice. When I do that twice, I find that the different q's are uncorrelated. So I will get, rather than two of these integrals, one of these integrals. And because the q and q prime are set to be the same, the product of those two factors will be the integral of 2 minus 2 cosine of q dot x that we are used to. And so that's where the x dependence appears. And then I need the average of q cross u of q. And that I can read off the beta H, the energy over here. You can see that there is a Gaussian cost of mu for q cross u of q squared, which means that 1 over mu is the variance. So basically, this term will give you a factor of 1 over mu. Now the difference between all of the calculations that we were doing previously, as was asked regarding Goldstone modes-- if I was just looking at u squared, which is what I was doing up here, I would need to put another factor of 1 over q squared. And then I would have the Coulomb integral that would grow logarithmically.
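The Gaussian identity invoked here, the average of e to the 6 i delta theta equals exp of minus (36/2) times the average of delta theta squared, can be spot-checked by Monte Carlo. This is my own sketch with a made-up standard deviation, not part of the lecture.

```python
import numpy as np

# Monte Carlo check of <exp(6i * dtheta)> = exp(-(6**2 / 2) <dtheta**2>)
# for a zero-mean Gaussian dtheta.  sigma = 0.1 is an arbitrary choice.
rng = np.random.default_rng(0)
sigma = 0.1
dtheta = rng.normal(0.0, sigma, size=1_000_000)

lhs = np.mean(np.exp(6j * dtheta)).real     # direct sample average
rhs = np.exp(-0.5 * 36 * sigma**2)          # the Gaussian identity
```

The imaginary part of the sample average vanishes by symmetry, and the real part matches the closed form to within sampling error.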
But here you can see that the whole thing-- the cosine integrated against a constant-- will average out to 0. So I think you have 2 over mu times this integral, which is a constant. So the whole thing, at the end of the day, is exponential of-- that becomes a 9 divided by 2. There's a factor of 1 over mu, and then I have twice the integral of d 2 q over 2 pi squared, which is-- you can convince yourself-- simply the density of the system, the number of points per area. So as opposed to the translational order, which was decaying as a power law once we included the phonon modes, when we include these phonon modes here, we find that the orientational order decays much more weakly. So that was falling off as I went further and further. This, as I go further and further, eventually reaches a constant that is less than 1, but it is something. And this is inversely proportional to temperature-- so as I go to 0 temperature, this goes to 1. And basically, because this order parameter-- well, this measure of distortion-- with respect to that measure of distortion has an additional factor of gradient, I will get an additional factor of q squared, and then everything changes accordingly. So orientational order is much more robust. This phase that we were calling the analogue of a two dimensional solid had only quasi long range order-- the translational order was decaying as a power law. Yes? AUDIENCE: Is n dependent on the position, or-- PROFESSOR: No. So basically, if you remember, the number of points should be the same as the number of allowed Fourier modes. And this goes to an integral-- d 2 q over 2 pi squared-- when I put in the area in two dimensions. So the integral over whatever Brillouin zone you have over the Fourier modes is the same thing as the number of points that you have in the original lattice, divided by area, or 1 over the size of one of those triangles squared. Yes? AUDIENCE: Where is the x dependence in that expression?
PROFESSOR: OK, the x dependence basically disappears because you integrate over the cosine of q x. And if x is sufficiently large, those fluctuations disappear. AUDIENCE: Oh, so we're really looking at the [INAUDIBLE]. PROFESSOR: Yes, that's right. So at short distances, there are going to be some oscillations or whatever, but we are interested in the long distance behavior. At very short distances, I can't even use the continuum description for things that are three or four lattice spacings apart. So maybe I should explicitly say that this is usually called quasi long range order, versus this dependence, which is true long range order. So given that this is more robust than these phonon-like fluctuations, the next question is, well, does it completely disappear when I include these dislocations? So again, this calculation, based on Gaussians, relies on just the Fourier modes of that line that I have up there. It does not include the dislocations, which, in order to account for properly, you saw that we need to look at a collection of these dislocations appearing at different positions on the lattice. And they had this vectorial nature of the Coulomb-like interactions among them. So presumably, we should go into the phase where these dislocations unbind-- and by unbinding-- as I said, in the low temperature picture of the dislocations, they should appear very close to each other, because it is costly to separate them by an amount that grows logarithmically in the separation. In the unbound phase, you have essentially a gas of dislocations that can be anywhere. So the picture here now is that indeed this is a phase where, if I just focus on the dislocations, there is a whole bunch of them. In a triangular lattice, they could be pointing in any one of three directions, plus or minus. And then there is certainly an additional contribution to the angle that comes from the presence of these dislocations.
So you calculate-- if you have a dislocation that has Burgers vector b, let's say at the origin, what kind of angular distortion does it cause? And you find that it goes like b dot x, divided by the absolute value of x. This is for one dislocation. This is the theta that you would get for that dislocation at location x. Essentially, you can see that if I were to replace the u that I have here with the u that was caused by a dislocation, you would get something like this formula, because remember the u that was caused by a dislocation was something like the gradient of the log potential. It's kind of hard to do, but maybe I'll make an attempt to write it. So let's take a gradient of theta. Gradient of theta-- if I use that formula, you would say, OK, I have minus 1/2 z hat dot curl of something, and if I take a gradient of the curl, the answer should be 0. But that's as long as this u is a well defined object. And our point was that this u, when you have these dislocations, is not a well defined object, in the sense that it is no longer true that taking the curl and then the gradient gives you 0. So essentially, I will transport the gradient all the way over here, and the part of u that will survive is the one that is characterized by this dislocation field. Now, you can see that this object kind of looks like a Laplacian of this distortion. It's two derivatives of this distortion field that had this logarithm in it. And when you take two derivatives of a logarithm, you get a delta function. So if you do things correctly, you will find that this answer here becomes a sum over i, b i, delta function of x minus x i. So basically, each dislocation at location x i-- again, depending on its b being in each direction-- gives a contribution to the gradient of theta. And if I were to take the gradient of the expression that I have over here-- the gradient of this object, which is like the field that you have for the logarithmic potential-- it will give you the delta function.
So that's where the similarity comes from. So the full answer comes out to be-- if you have a sum over the dislocations, the sum over the distortion fields that each one of them is causing-- and you will have a form such as this. Yes? AUDIENCE: Should the denominator be squared? PROFESSOR: Yes, that's right. The potential goes logarithmically. The field, which is the gradient of the potential, falls off as 1 over separation. So since I put the separation out there, I have to put the separation squared. So you can see that the singular part, the part that arises from dislocations-- if I have a soup of dislocations, I can figure out what theta is. Now what I did look for-- actually, I was kind of hinting at that-- if I take the gradient of theta-- and I forgot to put the factor of 1 over 2 pi here-- just as for vortices that had charge 2 pi, where I had the potential that was 1 over r, so for dislocations it becomes b over 2 pi. If I take the gradient, then the gradient translates to a sum over i, b i, delta function of x minus x i-- the expression that I have written over there. And if I do the Fourier transform-- you see, what I did over here was essentially to look at theta in Fourier space. So let's do something similar here. So when I do the Fourier transform of this, I will get i q times theta tilde of q-- the Fourier transform of this angular field. And on the right hand side, what I would get is essentially the Fourier transform of the field of dislocations. So I have defined my b of q to be the sum over i, e to the i q dot position of the i-th dislocation, times the vector that characterizes the dislocation. And it would make sense to also put in the normalization that gives 1 over the square root of area. If you don't do that, then at some other point you have to worry about the normalization.
So if I just multiply both sides by q-- and I think I forgot a minus sign throughout, which is not that important-- then theta tilde of q becomes i q dot b of q-- maybe I should have been calling this b tilde-- divided by q squared. So this is important. Essentially, you take the collection of dislocations in this picture and you calculate what the Fourier transform is; call that b tilde of q. Essentially, you divide by one factor of q, and you get the corresponding angle field. Now what I needed to evaluate over here was the average of theta tilde of q squared. And you can see that if I write this explicitly, from one factor I will get, let's say, q alpha b tilde alpha, and from the other q beta b tilde star of beta, and then I would have a q to the fourth on the denominator. And the average over here becomes the average over all configurations of these dislocations that I can put across my system. Now, explicitly I'm interested in the limit where q goes to 0. So these things depend on q. What I'm interested in is the limit as q goes to 0, especially what happens to this average-- multiplying two of these things together. Actually, in the limit where q goes to 0, what I have is the sum over all of the b's. So in the limit where q goes to 0, this becomes an integral or sum-- it doesn't matter which one of them I write. q has gone to 0, so I basically need to look at the average of b alpha, b beta, summed over the system, divided by area. So what is there in the numerator? We can see that in the numerator, as q goes to 0, what I'm looking at is the sum of all of these dislocations that I have in the system. Now the average of the sum is 0, because in all of our calculations, we've been restricting to configurations that are neutral-- because if I go beyond neutrality, it's going to cost too much. But what I'm looking at is not the average of b, which is 0, but the average of b squared, which is the variance. So essentially I have a system that has a large area, A.
It is on average neutral. And the question is, what is the variance of the net charge? And my claim is that the variance of the net charge is, by the central limit theorem, proportional to the area-- actually, it is proportional to the number of units that are independent from each other. So roughly I would expect that in this high temperature phase, I have a correlation length, xi, and each portion of size xi will be neutral. But when I look at things that are more than xi apart, there's no reason to maintain neutrality. So overall I have something like throwing coins, where for each one of them the average is 0, with equal probability of being up or down. But when I look at the variance for the entire thing, the variance will be proportional to the area in units of these blocks that are independent of each other. The area cancels against the normalization factor of 1 over area. And this, really, I should write as a proportionality, because I don't know precisely the relationship between these independent blocks and the correlation length, but they have to be roughly proportional. So what do you compute? You compute that the limit as q goes to 0 of the average of my theta tilde of q squared has a structure such as this. I forgot to put one more thing here. I don't expect there to be any correlations between the x component and the y component of this answer-- the variance, the covariance of the dislocations in one direction and the other direction-- so I put the delta function there. If I put this over here, I would get q squared divided by q to the fourth. So I will get a 1 over q squared, and I have the xi squared, with some unknown coefficient. So it's interesting, because we started without thinking about dislocations, just in terms of the distortion field. And we said that this object is related to the angle. And indeed, we had this distortion-- the energy cost of distortions is proportional to angle squared.
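The central-limit argument just made can be demonstrated with a toy model. This is my own illustrative sketch, not the lecture's actual ensemble: divide the area A into A over xi squared independent neutral patches, let each contribute a Burgers charge of zero mean and variance b squared, and check that the variance of the net charge grows linearly with the area.

```python
import numpy as np

# Toy model of net-charge fluctuations: A/xi**2 independent patches, each
# contributing +b or -b with equal probability, so <B> = 0 but
# <B**2> ~ b**2 * A / xi**2 by the central limit theorem.
rng = np.random.default_rng(1)

def net_charge_variance(area, xi=1.0, b=1.0, samples=4000):
    n_patches = int(area / xi**2)
    contributions = rng.choice([-b, b], size=(samples, n_patches))
    return contributions.sum(axis=1).var()

v_small = net_charge_variance(area=1000.0)
v_large = net_charge_variance(area=4000.0)
# Quadrupling the area should roughly quadruple the variance.
```

The ratio of the two sample variances comes out close to 4, the ratio of the areas, which is the statement that the variance is extensive.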
And that angle, therefore, is not a Goldstone mode, because the cost doesn't go like q squared. Now we go to this other phase, with dislocations all over the place, and we calculate the expectation value of theta squared. And it looks like it came from a theory that was like Goldstone modes. So you would say that once I am in this phase, where the dislocations are unbound, there is an effective energy cost for these changes in angle that is proportional to the gradient of the angle squared. So that means in Fourier space, this would go to K A over 2, integral d 2 q over 2 pi squared, q squared, theta tilde of q squared. So if you had this theory, you would definitely say that the expectation value of theta tilde of q squared is 1 over K A q squared-- the variance is the inverse of K A q squared. You compare those two things and you find that once the dislocations have unbound, and there is a correlation length that essentially tells you how far the dislocations are talking to each other and maintaining neutrality, then there is exactly an effective stiffness, like for a Goldstone mode, for angular distortions, that is proportional to xi squared. And hence, if I were to look at the orientational order correlations, I would essentially have something like the expectation value of theta tilde of q squared, which is 1 over q squared. If I Fourier transform that, I get the log. And so I will get something that falls off with distance with some other exponent. If I have a true liquid-- in a liquid, again, maybe in a neighborhood of seven or eight particles, neighbors, et cetera, they talk to each other and the orientations are correlated. But when I go from one part of the liquid to another part of the liquid, there is no correlation between bond angles. I expect these things to decay exponentially. So what we've established is that neither the phonons nor the dislocations are sufficient to give the exponential decay that you expect for the bond angles.
So this object has quasi long range order, versus what I expect to happen in the liquid, which is exponential of minus x over xi. So the unbinding of dislocations gives rise to a new phase of matter that has this quasi long range order in the orientations. It has no positional order. It's a kind of liquid crystal that is called a hexatic. Yes? AUDIENCE: So your correlation where you got 1 over K q squared-- doesn't that assume that you're allowing the angle to vary in minus [INAUDIBLE] when you do your averaging? What about the restriction-- PROFESSOR: OK, so what is the variance of the angle here? There's a variance of the angle that is controlled by this 1 over K A. So if I go back and calculate these in real space, I will find that if I look at theta at location x minus theta at location 0, the answer is going to go like 1 over K A times the logarithm of x. So what it says is that if things are close enough to each other-- and this is in units of the lattice spacing a, up to some factor, let's say a log of 5, et cetera, so I don't go all the way to infinity-- the fluctuations in angle are inversely set by a parameter, K A, that right after the transition is very large. So in the same sense that previously, for the positional correlations, I had the temperature being small and the inverse temperature being large, limiting the size of the translational fluctuations, here the same thing happens for the bond angle fluctuations. Close to the transition, they are actually small. So the question that you asked, you could have certainly also asked over here. That is, when I'm thinking about the distortion field, the distortion field is certainly going to be limited. If it becomes too big, then it doesn't make sense. So given that, what justification do I have in making these Gaussian integrals? And the answer is that while it is true that it is fluctuating, as I go to low temperature, the degree of fluctuations is very small.
So effectively what I have is that I have to integrate, over some finite interval, a function that kind of looks like this. And the fact that I replace that with an integration from minus infinity to infinity, rather than from minus a to a, just doesn't matter. So we know that ultimately we should get this, but so far we've only got this, so what should we do? Well, we say, OK, we encountered this difficulty before in something that looked like an angle, in the xy model-- that low temperature had power-law decay, whereas we knew that at high temperatures we would have to have exponential decay. And what we said was that we need these topological defects in the angle. So what you need is topological defects-- and in our case, theta is a bond angle. And these topological defects in the bond angle have a name: they're called disclinations. And very roughly they correspond to something like this. Suppose this is the center of one of these disclinations. Then maybe next to this, here, locally at a distance r-- if I look at a point, I would see that the bonds that connect it to its neighbors have an orientation such as the one that I have indicated over here. Now what I want to do is, as I go around and make a circuit, this angle theta-- that I have here to be 0-- rotates and comes back up to 60 degrees. So essentially what I do is I take this line and I gradually shift it around, so that by the time I come back, I have rotated by 60 degrees. It's kind of hard for me to draw that, but you can imagine what I have to do. So what I need to do is to have the integral over a circuit that encloses this disclination, such that when I do a d s dotted with the gradient of the bond orientational angle, I come back to pi over 6 times some integer. And again, I expect the elementary disclinations that correspond to n equals minus or plus 1.
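The defining circuit integral can be checked numerically. This is my own sketch, not from the lecture: on a triangular lattice the bond angle is only measurable modulo 2 pi over 6, so we record it modulo 60 degrees along a loop around an elementary (n = 1) disclination, whose field is theta = polar angle over 6, and accumulate suitably unwrapped increments; the total winding comes back to 2 pi over 6.

```python
import numpy as np

# Circuit integral around an n = 1 disclination.  theta is only defined
# modulo 2*pi/6, so increments are unwrapped via angle(exp(6i * dtheta))/6.
phi = np.linspace(0.0, 2 * np.pi, 2001)     # points along a closed circuit
theta_true = phi / 6.0                      # n = 1 disclination field
theta_seen = theta_true % (2 * np.pi / 6)   # what the lattice lets us measure

increments = np.angle(np.exp(6j * np.diff(theta_seen))) / 6.0
winding = increments.sum()                  # should equal 2*pi/6
```

The unwrapping trick (multiply the angle by 6, take the principal value, divide by 6) is the same device as using psi = e to the 6 i theta as the order parameter: it makes the 60-degree ambiguity invisible.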
Then the cost of these is obtained by taking this distortion field, gradient of theta, whose magnitude at a distance r from the center of this object is going to be 1 over 2 pi r, times pi over 6, times whatever this integer n is. And then if I substitute this 1 over r behavior in this expression, which is the effective energy of this entity, I will get the logarithmic cost for making a single disclination. Which means that at low temperature, I have to create disclination pairs. And then there will be an effective interaction between disclination pairs that is logarithmic, in exactly the same way that we calculated for the xy model. So up to just this minor change, that the charge of a defect is reduced by a factor of 6, this theory is identical to the theory of the unbinding of the xy model topological defects. Yes? AUDIENCE: Why is it pi over 6 and not 2 pi over 6? PROFESSOR: You are right. It should be 2 pi over 6. Thank you. Yes? AUDIENCE: So when you were saying that the existence of this hexatic phase would require the dislocations to unbind before the orientational defects-- is there an analogous case where-- I guess you can't have defects in the orientation without the dislocations-- PROFESSOR: So if you try to make these objects in the original case, in the original lattice, you will find that their cost grows actually like l squared log l, as opposed to dislocations, whose cost only grows as log l. So these entities are extremely unlikely to occur in the original system. If you go back and ask what they actually correspond to-- if you have a picture that you have generated on the computer, they're actually reasonably easy to identify, because the centers of these disclinations correspond to having points that have, rather than 6 neighbors, 5 or 7 neighbors.
So you generate the picture, and you find mostly you have neighborhoods with 6 neighbors, and then there's a site with 5 neighbors, and another site with 7 neighbors. 5 and 7 come more or less in pairs, and you can identify these disclination pairs reasonably easily. So at the end of the day, the picture that we have is something like this. We are starting with the triangular lattice that I drew at the beginning, and we're increasing temperature, and asking what happens. So this is 0 temperature. Close to 0 temperature, what we have is an entity that has translational quasi long range order. So this quantity goes like 1 over x to this power eta G. Whereas the orientations go to a constant. Now, this eta G is there because there's a shear modulus. And so throughout this phase, I have a shear modulus. The parameter that I'm calling mu I had scaled inversely with temperature. So I have this shear modulus mu that, once we scale by temperature, diverges as 1 over temperature. But then as I come down, the reduction is more than 1 over temperature, because I will have this effect of dislocations appearing in pairs, and the system becomes softer. And eventually you will find that there's a transition temperature at which the shear modulus drops down to 0. And we said that near this transition, there is this behavior that mu approaches mu c, whatever it is, as-- let's call this temperature T 1-- T 1 minus T to this exponent nu bar, which was found to be 0.36963. Now, once we are beyond this temperature T 1, then our positional correlations decay exponentially with some correlation length, xi. And this xi is something that diverges on approaching this transition. So basically I have a xi that goes up here to infinity. And if we calculate this xi, it diverges according to this strange formula, which was the exponential of 1 over T minus T 1 to this exponent nu bar. A very strange type of divergence.
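The strange divergence just described can be written down as a small sketch. This is my own illustration: the form is xi proportional to exp of b over (T minus T1) to the power nu bar, with nu bar approximately 0.36963; the values T1 = 1 and b = 1 below are arbitrary, not physical.

```python
import math

# Correlation length above the solid-hexatic transition (KTHNY form):
#   xi ~ exp(b / (T - T1)**NUBAR),  NUBAR ≈ 0.36963.
# T1 = 1.0 and b = 1.0 are illustrative choices.
NUBAR = 0.36963

def xi(T, T1=1.0, b=1.0):
    return math.exp(b / (T - T1) ** NUBAR)
```

Unlike an ordinary power-law divergence, xi blows up as a stretched exponential of the inverse distance to the transition: it grows extremely fast as T approaches T1 from above, yet remains finite at every T greater than T1.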
But then, associated with the presence of this xi is the fact that when you look at the orientational correlations, they don't decay as an exponential, but as a power law, with exponent eta 6. And this eta 6 is related to this K A, and falls off as 1 over xi squared. So K A diverges as you approach this transition. Now, as we go further and further, the disclinations will appear-- disclinations which destroy the tendency of the angles to be parallel to each other. And there's another transition, at which this is suddenly going to go down to 0. And close to here, we have that eta 6 reaches the critical value of 1/4, with a square root singularity in T minus-- let's call it T 2. And then finally we have the ordinary liquid phase, where additionally I will find that psi 6 of x, psi star 6 of 0, decays exponentially, with a correlation length-- let's call it xi 6. And this xi 6 is something that will diverge at this transition as an exponential of 1 over the square root of T minus T 2. So this is the current scenario of how melting could occur for a system of particles in two dimensions. If it proceeds through continuous phase transitions, it has to go through these two transitions, with the intermediate exotic phase. Of course, it is also possible-- and typically people were seeing this numerically when they were doing hard spheres, et cetera-- that there is a direct transition from here to here, which is discontinuous, like you have in three dimensions. So the scenario of a discontinuous transition is not ruled out. But if you have continuous transitions, it has to have this intermediate phase in between. Any questions? Yes? AUDIENCE: [INAUDIBLE] so the red one is mu, the yellow one is eta, and the purple one is [INAUDIBLE]. PROFESSOR: The correlation length, then, that I would put here. So they are three different entities. So throughout the course, we have been thinking about systems that are described by some kind of an equilibrium probability distribution.
So what we did not discuss is how the system comes to that equilibrium. So we're going to now very briefly talk about dynamics, and the specific type of dynamics that is common to condensed matter systems at finite temperature, which I will call dissipative dynamics. And the prototype of this is a Brownian particle, which I will briefly review for you. So what you have is a particle that is within some kind of a solvent, and this particle is moving around. So you would say-- let's, for simplicity, actually focus on one direction, x. And you would say that the mass of the particle times its acceleration is equal to the forces that it's experiencing. The forces-- well, if you are moving in a fluid, you are going to be subject to some kind of a dissipative force, which is typically proportional to your velocity. If you, for example, solve for the hydrodynamics of a sphere in a fluid, you find that the mobility mu is related inversely to the viscosity and to the size of the particle, et cetera. But that behavior is generic; we're not going to be thinking about that. Now suppose that additionally I put some kind of an optical trap, or something that tries to localize this particle with a potential. So then there would be an additional force, minus the derivative of the potential with respect to x. And then we are talking about Brownian particles. Brownian particles are constantly jiggling, so there is also a random force that is a function of time. Now we are going to be interested in the dynamics that is very much controlled by the dissipation term, and acceleration we can forget. And if we are in that limit, we can rearrange the equation slightly, so that the eventual velocity, x dot, is going to be proportional to the external force, mu being the coefficient that is the mobility. So mu essentially relates the force to the velocity. Of course, this is the average force.
And there is a fluctuating part-- so essentially, I call mu times this random force eta of t. Now, if I didn't have this external force, the fluctuations of the particle would be diffusive. And you can convince yourself that you get the diffusive result provided that you relate the correlations of this force-- which is fluctuating and has 0 average-- to the diffusion constant d of the particle in the medium, through eta at t, eta at t prime, being 2 d delta of t minus t prime. So if the trap was not there, you solve this equation without the trap and find that the probability distribution for x grows as a Gaussian whose width grows with time as d t; d therefore must be the diffusion constant. Now, in the presence of the potential, this particle will start to fluctuate. Eventually, if you wait long enough, there is a probability that it will be here, a probability to be somewhere else. So at long enough times, there's a probability p of x to find the particle. And you expect that p of x will be proportional to exponential of minus v of x divided by whatever the temperature is. And you can show that in order to have this occur, you need to relate mu and d through the so-called Einstein relation. So this is a brief review of Brownian particles. Yes? AUDIENCE: The average and time correlation of eta can be found by saying the potential is 0, right? PROFESSOR: Mm hmm. AUDIENCE: Those will still be true even if the potential is not 0, right? PROFESSOR: Yes. So I just wanted to have an idea of where this d comes from. But more specifically, this is the important thing: if at very long times you want to have a probability distribution coming from this equation that has the Boltzmann form with k T, the coefficients of mu and the noise have to be related through the so-called Einstein relation. And once you do that, this result is true no matter how complicated this v of x is. So in general, for a complicated v of x, you won't be able to solve this equation analytically.
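For the harmonic trap, where the answer is known, the overdamped Langevin equation can be integrated directly to see the Einstein relation at work. This is my own Euler-Maruyama sketch with illustrative parameters, not from the lecture: with D = mu times kT, the long-time variance of x should be the Boltzmann result kT over k_spring.

```python
import numpy as np

# Overdamped Langevin:  x_dot = -mu * dV/dx + eta(t),
# with <eta(t) eta(t')> = 2*D*delta(t - t') and D = mu*kT (Einstein).
# Harmonic trap V = k_spring * x**2 / 2, so <x**2> -> kT / k_spring.
rng = np.random.default_rng(3)
mu, kT, k_spring = 1.0, 1.0, 2.0
D = mu * kT                                  # Einstein relation
dt, n_steps, n_walkers = 1e-3, 10_000, 2_000

x = np.zeros(n_walkers)
for _ in range(n_steps):
    force = -k_spring * x                    # -dV/dx
    x += mu * force * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_walkers)

variance = x.var()                           # should approach kT / k_spring
```

The relaxation time here is 1 over (mu times k_spring), so the total simulated time of 10 units is ample for the ensemble to equilibrate to the Boltzmann distribution.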
You can only do it numerically. Yet you are guaranteed that this equation, with this noise correlator, will asymptotically have a probability distribution of the Boltzmann form. The problem that we have been looking at all along is something different. Let's say you have a piece of magnet, or some other system, that we characterize by some field m of x. Again, you can do it for a vector, but for simplicity, let's do it for the scalar case. So we know, or we have stated, that subject to the symmetries of the system, the probability for some configuration of this field is governed by a form that has, let's say, the Landau-Ginzburg character. So that has been our starting point: we have said that I have a probability distribution that is of this form. So that statement is kind of like this statement. But the way that I came to the statement over there was to say that there was a degree of freedom x, the position of the particle, that was fluctuating subject to forces and random impulses from the particles of the fluid, as given by this so-called Langevin equation. So I had a time-dependent prescription that eventually went to the Boltzmann weight that I wanted. Here I have started with the final Boltzmann weight, and the question is, can I think of a dynamics for a field that will eventually give this state? There are lots and lots of different dynamics that I can impose. But I want to look at the dynamics that is closest to the Brownian particle that I wrote, and that's where this word dissipative comes from. So among the universe of all possible dynamics, I'm going to look at one that has a linear time derivative for the field m. So this is the analog of the x dot. And I write that it is equal to some coefficient mu that determines the ease with which the field at location x changes as a function of the force that is exerted on it. I assume that mu is the same across my system.
So here I'm already assuming there's no x dependence; the system is uniform. And then over there I had a dV by dx. So V was ultimately the thing that was appearing in the Boltzmann weight, and clearly the analog of the V that I have here is this Landau-Ginzburg. So I will put the functional derivative of this quantity, which I will call beta H, with respect to m of x. Again, over there I had one variable, x. You can imagine that I could have had a system where two particles, x1 and x2, also have an interaction among them. Then the equation that I would have had over there would involve the force that is acting on particle 1, obtained by taking the total potential-- which is the external potential plus the potential that comes from the inter-particle interaction-- and taking the derivative of that net potential energy V with respect to either x1 or x2, to calculate the force on the first one or the second one. So here, for a particular configuration m of the field across the system, if I'm interested in the dynamics at this position x, I have to take the total potential energy and take the derivative with respect to the variable that is sitting at that site. That's why this is a functional derivative with respect to m. And then I will have to put a noise, eta. Again, if I had multiple particles, I would subject each one of them to an independent noise. So at each location, I have an independent noise: the noise is a function of time, but it is also varying across my system. So if I take that form and do the functional derivative-- if I take the derivative with respect to m of x, I have to take the derivative of these objects. So I will have minus the derivative: of t m squared over 2, it is t m; of u m to the fourth, it is 4 u m cubed; and so forth. Once I have dealt with those terms, I would have the terms that depend on the gradient. So I would have minus the derivative of this object with respect to the gradient. So here I would get K gradient of m.
And then the next term would be the derivative with respect to the Laplacian, so I would put L Laplacian of m, and so forth, following the methodology of taking functional derivatives. And then I have the noise. So this leads to an equation which is called the time-dependent Landau-Ginzburg equation, because we started with the Landau-Ginzburg weight. And this equation, as we will see shortly, subject to similar restrictions as we had before, will eventually give us this probability distribution. This is a difficult equation, in the same sense that the original Landau-Ginzburg is difficult when we look at correlations, et cetera. It is a nonlinear equation, which causes various difficulties, and we need approaches to be able to deal with the difficult nonlinearities. So, as we did for the Landau-Ginzburg, let's initially get insight and simplify the system by focusing on the linearized, or Gaussian, version. When it is linearized, what I have on the left-hand side is dm by dt. What I have on the right-hand side is mu times t m-- I got rid of the nonlinear term-- then the next term will be K Laplacian of m, then minus L times the fourth derivative of m, and so forth. And then there will be a noise term. One thing that I can immediately do is to go to Fourier transforms: m of x goes to m tilde of q. If I Fourier transform in space, but not in time, I get that the time derivative of m tilde of q is essentially what I have here. And I forgot the overall minus sign that I have here-- this minus is important: this term becomes negative, this one positive. So when I Fourier transform, what I will get is minus mu times t plus K q squared plus L q to the fourth, and so forth, times m tilde of q, and then the Fourier transform of my noise. The first thing to note is that even in the absence of noise, there is a set of relaxation times-- that is, for eta set to 0.
Or in general, I would have eta tilde of q and t. I can solve this equation quite simply. dm by dt is minus some constant times m-- let's call the constant gamma of q-- which has dimensions of 1 over time, so I can call it 1 over tau of q. If I didn't have noise, whatever original value I started with at time 0 would decay exponentially with this characteristic time. And once I have noise, it is actually easy to convince yourself that the answer is the integral from 0 to t of dt prime, e to the minus gamma of q-- or 1 over tau of q-- times t minus t prime, times eta of q at t prime. So you see that you have a hierarchy of relaxation times, tau of q, which are 1 over mu times t plus K q squared and so forth, and which scale differently in two limits: either the wavelength lambda, which is the inverse of q, is much larger than the correlation length-- and the correlation length of this model you have seen to be the square root of K over t-- or the other limit, where lambda is much less than xi. In the limit where we are looking at modes that are much shorter than the correlation length, the q squared term is dominant, and this becomes 1 over mu K q squared. In the other limit, it goes to a constant, 1 over mu t. So this linear equation has a whole bunch of modes that can be characterized by their wavelength, or their wave number. You find that the short-wavelength modes have a characteristic time that becomes longer and longer as the wavelength increases. So if you make the wavelength twice as large-- if you want to relax a structure that is linearly twice as large-- this says that it will take four times longer, because the answer goes like lambda squared. Whereas eventually you reach the size of the correlation length, and once you are beyond the size of the correlation length, it doesn't matter; it's the same time. But the interesting thing, of course, to us is that there are phase transitions that are continuous, and close to such a phase transition the correlation length goes to infinity.
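The two limits of this relaxation-time hierarchy are easy to check numerically. The snippet below (my own illustration, with arbitrary values of mu, t, and K) evaluates tau(q) = 1/[mu(t + K q^2)] and verifies that at short wavelengths halving q roughly quadruples tau, while for wavelengths beyond the correlation length tau saturates at 1/(mu t).

```python
import numpy as np

# Relaxation time of the Fourier mode q in the linearized theory:
#   tau(q) = 1 / [mu (t + K q^2)],   correlation length xi = sqrt(K / t).
mu, t, K = 1.0, 0.01, 1.0
xi = np.sqrt(K / t)

def tau(q):
    return 1.0 / (mu * (t + K * q**2))

# Short wavelength (q xi >> 1): tau ~ 1/(mu K q^2), so halving q quadruples tau.
q_short = 100.0 / xi
print(tau(q_short / 2) / tau(q_short))     # close to 4

# Long wavelength (q xi << 1): tau saturates at 1/(mu t), independent of q.
q_long = 1e-4 / xi
print(tau(q_long), 1.0 / (mu * t))         # nearly equal
```

Shrinking t toward 0 pushes xi, and with it the saturation value 1/(mu t), to infinity, which is the critical slowing down discussed next.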
Which means that the relaxation time also will go to infinity. So according to this theory, there is a particular divergence, as 1 over t minus t c. But it will be modified; and, as I will discuss next time, this is-- even within the dissipative class-- only one type of dynamics that you can have. There are additional dynamics, and while this criticality corresponds to a single universality class in statics, there are many dynamic universality classes that correspond to the same statics.
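As a closing illustration of the time-dependent Landau-Ginzburg dynamics described above, here is a sketch of my own (parameters purely illustrative) that relaxes the noiseless equation dm/dt = -mu(t m + 4 u m^3 - K m'') in one dimension. For t < 0, a small uniform initial field grows and saturates at a minimum of the Landau-Ginzburg weight, m = sqrt(-t/(4u)), i.e. the system spontaneously orders, as discussed for the low-temperature side.

```python
import numpy as np

# Noiseless time-dependent Landau-Ginzburg relaxation in one dimension:
#   dm/dt = -mu (t m + 4 u m^3 - K m'')
mu, t, u, K = 1.0, -1.0, 0.25, 1.0
L, dx, dt_step = 64, 1.0, 0.05
m = np.full(L, 0.1)                       # small uniform initial condition

for _ in range(2000):
    lap = (np.roll(m, 1) - 2 * m + np.roll(m, -1)) / dx**2   # periodic Laplacian
    m += -mu * (t * m + 4 * u * m**3 - K * lap) * dt_step

m_bar = np.sqrt(-t / (4 * u))
print(m[0], m_bar)    # the field relaxes to the minimum m_bar
```

Starting instead from random initial conditions produces domains of +m_bar and -m_bar separated by walls, and adding the noise term would turn this into the full stochastic dynamics whose stationary distribution is the Boltzmann weight.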
MIT 8.334 Statistical Mechanics II, Spring 2014. Lecture 11: Perturbative Renormalization Group, Part 3.

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, let's start. So today, hopefully, we will finally calculate some exponents. We've been writing, again and again, how to calculate partition functions for systems, such as a magnet, by integrating over configurations of all shapes of a statistical field. And we have given weights to these configurations that are constructed as some kind of a functional of these configurations. And the idea is that presumably, if I could do this, then I could figure out the singularities that are possible at a place where, for example, I go from an unmagnetized to a magnetized case. Now, one of the first things that we noted was that in general, I can't solve the types of Hamiltonians that I would like. And maybe what I should do is to break it into two parts: a part that I can calculate exactly, and a contribution that I can then treat as a perturbation. Now, we saw that there were difficulties if I attempted straightforward perturbative calculations. And what we did was to replace this with some kind of a renormalization group approach. The idea was something like this: these statistical field theories that we write down have been obtained by averaging true microscopic degrees of freedom over some characteristic length scale. So this field m certainly does not have fluctuations that are very short wavelength.
And, for example, if we were to describe things from the perspective of Fourier components, presumably the variables that I would have would have some maximum q that is related to the inverse of that wavelength. So there is some lambda. And if I were to in fact Fourier transform my modes in terms of q, then these modes will be defined within this space. And, for example, my beta H zero, in the language of Fourier modes, would be the part that I can do exactly, which is the part that is quadratic and Gaussian. And the q vectors would be in the interval from 0 to whatever this lambda is. And the kind of things that I can do exactly are things that are quadratic. So I would have m of q squared, and then some expansion in powers of q that has a constant t, plus K q squared, and potentially higher-order terms. So this is the Gaussian theory that I can calculate. The problem with this Gaussian theory is that it is only meaningful for t positive. And in order to go to the space where t is negative, I have to include higher-order terms in the magnetization, and those are non-perturbative. For example, if I go back to the description in real space, I was writing something like u m to the fourth, plus higher-order terms, for this perturbation U. When we attempted to do straightforward perturbative calculations, we encountered some singularities, and the perturbation didn't quite make sense. So we decided to combine that with the idea of the renormalization group. The idea there was, rather than integrate over all modes, to subdivide the modes into two classes: the modes that are long wavelength, which I would like to keep-- I'll call them m tilde-- and the modes that are sitting out here, the sigmas, which I'm not interested in, because they give rise to no singularities, and which I would like to get rid of. So my integration over the set of all configurations is really an integration over both this m tilde and the sigma.
And if I regard m as spanning wave numbers that belong either to m tilde or to sigma, I can basically write this as m tilde plus sigma, and this as m tilde plus sigma also. So this is just a rewriting of the partition function where I have simply changed the names of the modes. Now, the first step in the renormalization group is the coarse graining, which is to average out fluctuations that have scale between a and b a-- what in Fourier space lies between lambda over b and lambda. So getting rid of those modes amounts to changing the scale over which you are averaging by a factor of b. Once I do that, if I can do the integration, what I will be left with, if I integrate over sigma, is just an integral over m tilde. OK? Now, what would be the form of this integration-- the result? Well, first of all, if I take the Gaussian and separate it out between 0 to lambda over b and lambda over b to lambda, and integrate over the modes between lambda over b and lambda just as if I had only the Gaussian, then I would get essentially the contribution of the logarithm of the determinants of all of these Gaussian variances. So there will be a contribution to the free energy that is independent of m tilde; it will depend on the rescaling factor that you are looking at, but it's a constant-- it doesn't depend on the different configurations of the field m tilde. The other part of the Gaussian-- so essentially, I wrote the Gaussian as 0 to lambda over b, and lambda over b to lambda. The part that is 0 to lambda over b will simply remain, so I will have beta H zero, which now depends only on these m tildes. Well, what do I have to do with this term U? It's an integration over sigma that has to be performed, and I did the integration of the quadratic part by treating it as a Gaussian. So effectively, the result of the remaining integration is the average of e to the minus U.
And when I take the log, I will get plus log of the average of e to the minus U, which is a function of m tilde and sigma, where I have integrated out the modes that are out here, the sigmas. So it's only a function of m tilde; the sigmas have been integrated out using a Gaussian weight, such as the one that I have over here. So that's formally exact. But it hasn't given me any insight, because I don't know what that entity is. What I can do with that entity is to make an expansion in powers of U. So I will have minus the average of U, and then the next term would be the variance of U over 2-- the average of U squared, minus the square of the average of U, divided by 2-- and then higher-order terms. So basically, this term can be expanded as a power series, as I have indicated. And again, just to make sure: these averages are performed with this Gaussian weight. And in particular, we've seen that when we have a Gaussian weight, the different components and the different q values are independent of each other. So I get here a delta alpha beta, I get a delta of q plus q prime, and I will get 1 over t plus K q squared, plus potentially higher-order powers of q that appear in this series. Now, we had started developing a diagrammatic perspective on all of this. Something that is m to the fourth, since it was the dot product of two factors of m squared, we represented as a graph such as this. And we also introduced a convention where solid lines would correspond to m tilde, and, let's say, wavy lines would correspond to sigma. Essentially, what I have to do is to rewrite this object accordingly, where each factor of m is replaced by the sum of these two entities, diagrammatically. So that's two to the four, or 16, different possibilities that I could have once I expand this. And what was the answer that we got for the first term in the series? So if I take the average of U, one kind of diagram that I can get is essentially keeping this entity as it is.
So essentially, I will get the original potential that I have: rather than m to the fourth, I will simply have the equivalent m tilde to the fourth. So, diagrammatically, this would correspond to this entity. There was a whole bunch of things that cancelled out to zero: in the diagrams with only one wavy leg, when I took the average, I had a sigma by itself, which made it an odd average and gave me zero. So I didn't have to put any of those. And then I had diagrams where two of the lines were replaced by wavy lines, and so then I would get a contribution to u. There was a factor of 2n plus 4. The 2n came from diagrams in which I took the two legs that were together and made the other two wavy, joining them together; I had the choice of picking this pair of legs or that pair of legs, which gave me a factor of two, and-- something that we will see again and again-- whenever we have a loop that closes on itself, it corresponds to something like a delta alpha alpha, which, when you sum over alpha, gives you a factor of n. The other contribution, the four, came from diagrams in which I had the two wavy lines on different branches. And since they came from different branches, there wasn't a repeated index to sum over to give me a factor of n; I just had factors of two from the choice of leg on each branch, so that was a factor of four. And then, associated with each one of these diagrams, there was an integration over the wave number k that characterized the sigmas that had been integrated over. So I would have an integral from lambda over b to lambda, d d k over 2 pi to the d, of 1 over the variance, which is what I have here: t plus K k squared, and so forth. There are then diagrams with three wavy lines, which again gave me zero, because the average of an odd number of sigmas with a Gaussian weight is zero.
And then there were a bunch of things that would correspond to all legs being wavy: there was something like this, and there was something like this. And basically, I didn't really have to calculate them, so I just wrote the answer to those things as a contribution to the free energy-- an overall constant, such as the constant that I have over here, but now at the next order, first order in u-- independent of the configurations. So this was straightforward perturbation. I forgot something very important here, which is that this entire coefficient is also coupled to these solid lines, whose meaning is that there is an integral over q, over 2 pi to the d, of m tilde of q squared, where the wave numbers that are sitting on these solid lines naturally run from 0 to lambda over b. So we can see that, adding this to what I have above, my Z has now been written as an integral over these modes that I'm keeping of a new weight, which I will call beta H tilde, depending on these m tildes. This beta H tilde contains, first of all, these terms that are proportional to the volume V-- contributions to the free energy coming from the modes that I have integrated out, either at zeroth order or at first order so far. I have the u, exactly the same u as I had before, but now acting on m tilde-- four factors of m tilde. The only thing that happened is that the Gaussian contribution, now running from 0 to lambda over b, that is proportional to m tilde of q squared, is still a series such as the one that I had before, but where the coefficient that was a constant has changed. All the other terms in the series-- the term that is proportional to q squared, q to the fourth, et cetera-- are left exactly as before.
So what happened is that this beta H tilde pretty much looks like the beta H that I started with, with the only difference being that t tilde is t plus essentially what I have over there: 4 u times n plus 2, times the integral from lambda over b to lambda of d d k over 2 pi to the d, of 1 over t plus K k squared, and so forth. But, quite importantly, the parameter that I would associate with the coefficient of q squared is left unchanged. If I had a coefficient of q to the fourth, its coefficient would be unchanged. And I have a coefficient for u; it is unchanged also. So the only thing that happened is that the parameter that corresponded to t got modified. And you actually should recognize this as the inverse susceptibility, if I were to integrate all the way from 0 to lambda. When we did that, this contribution was singular, and that's why straightforward perturbation theory didn't make sense. But now we are not integrating down to 0, which would have given the singularity; we are just integrating over the shell that I have indicated. So this step was the first step of the renormalization group, which we call coarse graining. But RG had two other steps. The next was rescaling: the theory that I have now has a cutoff of lambda over b, so it looks grainier in real space. What I can do in real space is to shrink it; in Fourier space, I have to blow up my momenta. So essentially, whenever I see q, I replace it with b inverse times q prime, so that q prime, which is b q, runs from zero to lambda, restoring the cutoff that I had originally. And the last step was to renormalize, which amounted to replacing the field m tilde with a new field m prime, after multiplying, or rescaling, by a factor z to be determined. Now, this amounts to simple dimensional analysis. So I go back into my equation, and whenever I see q, I replace it with b inverse q prime. So from the integration, I get a factor of b to the minus d multiplying t tilde, and I replace m tilde by z times m prime.
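The coarse-graining shift of t can be evaluated numerically. The sketch below (my own, with illustrative parameter values) computes t_tilde = t + 4u(n+2) times the shell integral in d = 3, where the angular average reduces d^3k/(2 pi)^3 to k^2 dk/(2 pi^2); the trapezoid rule is written out by hand to keep the block self-contained.

```python
import numpy as np

# Coarse-graining shift of t, evaluated in d = 3:
#   t_tilde = t + 4 u (n + 2) * Int_{Lambda/b}^{Lambda} d^3k/(2 pi)^3  1/(t + K k^2)

def shell_integral_3d(t, K, lam, b, n_pts=100_001):
    k = np.linspace(lam / b, lam, n_pts)
    f = k**2 / (t + K * k**2) / (2.0 * np.pi**2)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(k))   # trapezoid rule

t, K, lam, b, u, n = 0.1, 1.0, 1.0, 2.0, 0.05, 3
t_tilde = t + 4 * u * (n + 2) * shell_integral_3d(t, K, lam, b)
print(t_tilde > t)    # True: the eliminated modes shift t upward
```

Because the integral stops at lambda over b rather than going down to 0, it stays finite even as t goes to 0, which is exactly why the shell integration avoids the singularity that plagued the straightforward perturbation theory.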
So that's two factors of z. So what I get is that t prime is z squared b to the minus d times this t tilde that I have indicated above. Now, K prime is also something that appears in the Gaussian term, so it has a z squared-- it came from two factors of m-- but because it had an additional factor of q squared, rather than b to the minus d it is b to the minus d minus 2. And I can do the same analysis for the higher-order terms, going with higher powers of q, in the expansion that appears in the Gaussian. But then we get to the nonlinear terms, and the first nonlinearity that we have kept is this u prime. What we see is that it goes with four factors of m, so there will be a z to the fourth. If I write things in Fourier space, m to the fourth in real space would involve m of q1, m of q2, m of q3, and a fourth m at minus q1 minus q2 minus q3. But there will be three integrations over q, which give me three factors of b to the minus d. So these are pretty much exactly what we had already seen for the Gaussian model-- I forgot the K-- except that we have replaced the t that was appearing for the Gaussian model with the t tilde that I have up here. Now, you have to choose z such that the theory looks as much as possible like the original one. And, as I mentioned, our anchoring point is the Gaussian. For the Gaussian model, we saw that the appropriate choice, so that ultimately we were left with the right number of relevant directions, was to set this combination for K prime equal to one, which means that I have to choose z to be b to the power of 1 plus d over 2. Once I choose that factor of z, everything else becomes determined. This factor for t prime clearly has two net powers of b, so this becomes b squared. For u you have to do a little bit of work: z to the fourth would be b to the 4 plus 2d, and then minus 3d gives me b to the 4 minus d.
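The bookkeeping of powers of b can be checked mechanically. The few lines below (a sanity check of my own, with an arbitrary value of b) confirm that with z = b^(1 + d/2) the combination z^2 b^-d reduces to b^2 and z^4 b^-3d reduces to b^(4 - d).

```python
import math

# With z = b**(1 + d/2):
#   t' ~ z**2 * b**(-d)   * t_tilde  ->  b**2
#   u' ~ z**4 * b**(-3*d) * u        ->  b**(4 - d)
b, d = 1.7, 3.0
z = b ** (1 + d / 2)

y_t = math.log(z**2 * b**(-d)) / math.log(b)       # exponent of b in t'
y_u = math.log(z**4 * b**(-3 * d)) / math.log(b)   # exponent of b in u'
print(y_t, y_u)   # 2 and 4 - d
```

Repeating this with any other b or d gives the same exponents, as it must: the exponents 2 and 4 - d depend only on the choice z = b^(1 + d/2), not on the rescaling factor itself.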
And I can similarly determine what the dimensions would be for additional terms that appear in the Gaussian, as well as for additional nonlinearities that could appear here. To all of them, by this analysis, I can assign some power of b. So this completes the RG, in the sense that, at least at this order in perturbation theory, I started with my original theory, and I see how the parameters of the new theory are obtained if I rescale and renormalize by this factor of b. Now, we did one other thing, which is quite common: rather than choosing factors like b equal to 2 or 3, we make b infinitesimally close to 1. In the picture that I have over there, what I'm doing is putting the modes that I'm getting rid of into a tiny shell around lambda. So I have chosen b to be slightly larger than 1 by an amount delta l. And I expect that all of the parameters will also change very slightly, such that this t prime, evaluated at scale b, is what I had originally, plus something that vanishes as delta l goes to zero and is presumably linear in delta l: delta l times dt by dl. Similarly, I can do the same thing for u and all the other parameters of the theory. Once I do that, these jumps from one parameter to another can be translated into flows. For example, dt by dl gets a contribution from writing b squared as 1 plus 2 delta l, which gives a term 2 times t. And then there's another contribution that is of order delta l. Clearly, if b equals 1, this integral would vanish; so if b is very close to 1, this integral is of the order of delta l. And what it is is just the integrand evaluated at k equal to lambda, at the shell, multiplied by the volume of that shell, which is the surface area times the thickness. So I will get from here a contribution of order delta l.
I have divided through by delta l, so what I get is 4 u times n plus 2, times 1 over t plus K lambda squared-- the integrand evaluated at the shell-- times the surface area divided by 2 pi to the d, which we have always called K d, times lambda to the d, which is the product of the lambda to the d minus 1 from the surface and the lambda from the thickness, lambda delta l; the delta l I have taken out. And this whole thing is the order-u contribution. And then there is du by dl, which is 4 minus d times u. So this is the result of doing this perturbative RG to lowest order in the parameter u. Now, these are really the important parameters. There will be other parameters that I have not specifically written down, and next lecture we will deal with all of them. But let's focus on these two. So I have one parameter, which is t, and the other parameter, which is u. But u can only be positive for the theory to make sense. I said that originally the Gaussian theory only makes sense if t is positive, because once t becomes negative, the weight gets shifted to large values of m-- it is unphysical. So for the Gaussian theory to be physical, I need to confine myself to the t-positive side. Now that I have u, I can have t that is negative, and u m to the fourth, as long as u is positive, will make the weight well behaved. So this entire half-plane is now accessible. Within this plane, there is a point which corresponds to a fixed point-- a point such that, if I am at that location, the parameters no longer change. Clearly, if u does not change, u at the fixed point should be 0. If u at the fixed point is 0 and t does not change, t at the fixed point is 0. So this is the fixed point. Since I'm looking at a two-dimensional projection, there will be two eigendirections associated with moving away from this fixed point. If I stick with the axis where u is 0, you can see that u will stay 0, but then dt by dl is 2t. So if I'm on the axis where u equals 0, I will stay on this axis. So that's one of my eigendirections.
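These first-order flow equations can be integrated numerically, and doing so shows the difference between d above and below four directly. The sketch below (my own, simple Euler stepping, with illustrative initial values) integrates dt/dl = 2t + 4u(n+2) K_d Lambda^d/(t + K Lambda^2) and du/dl = (4 - d)u; above four dimensions u shrinks toward the Gaussian fixed point, below four it grows.

```python
import math

def flow(t0, u0, d, n=1, K=1.0, lam=1.0, dl=1e-3, steps=3000):
    """Euler-integrate the lowest-order flows:
       dt/dl = 2 t + 4 u (n+2) K_d lam^d / (t + K lam^2),  du/dl = (4 - d) u."""
    Kd = 2.0 * math.pi ** (d / 2) / math.gamma(d / 2) / (2.0 * math.pi) ** d
    t, u = t0, u0
    for _ in range(steps):
        dt = 2.0 * t + 4.0 * u * (n + 2) * Kd * lam**d / (t + K * lam**2)
        du = (4.0 - d) * u
        t, u = t + dt * dl, u + du * dl
    return t, u

t5, u5 = flow(t0=0.01, u0=0.1, d=5)    # above four dimensions
t3, u3 = flow(t0=0.01, u0=0.1, d=3)    # below four dimensions
print(u5 < 0.1 < u3)                   # True: u is irrelevant for d > 4, relevant for d < 4
```

In both cases t flows away as well, with eigenvalue 2; it is only the fate of u that changes sign at d = 4, which is the distinction the lecture draws between the two flow diagrams.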
And along this eigendirection, I will be flowing out with an eigenvalue of 2. Now, in general-- let's say if I go to t equals 0-- you can see that if t is 0 but u is positive, dt by dl is positive. So if you start on the t equals 0 axis, you will generate a positive t, and the typical flows that you would have would be in this direction. Actually, I should draw it with a different color. So quite generically, the flows are like this. But there is a direction along which the flow direction is preserved. So there is a straight line, which you can calculate by setting dt by dl divided by du by dl equal to the ratio of t over u. You can very easily find that it corresponds to a line of t proportional to u with a negative slope. And the eigenvalue along that direction is determined by 4 minus d. So the picture that I have actually drawn for you here corresponds to dimensions greater than four: in dimensions greater than four, along this other direction you will be flowing towards the fixed point. And in general, the flows look something like this. So what does that mean? Again, the whole thing that we wrote down was supposed to describe something like a magnet at some temperature. So when I fix the temperature of the magnet, I presumably reside at some particular point on this diagram. Let's say in the phase that is up here: eventually I can see that I go to large t, and u goes to 0. So the eventual weight is very much like a Gaussian, e to the minus t m squared over 2. So this is essentially independent patches of the system, randomly pointing in different directions. If I change my system to have a lower temperature, I will be looking at a point such as this. As I lower the temperature, I will be looking at some other point, presumably. But all of these points that correspond to lowered temperatures, if I also look at increasing length scale, will flow up here.
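The two eigendirections can also be read off by linearizing the flow near the Gaussian fixed point. In matrix form, d/dl of (t, u) is M times (t, u) with M = [[2, c], [0, 4 - d]], where c stands for the coupling of u into t coming from the shell integrand; its precise value is not needed for the geometry, so the sketch below (my own) uses an arbitrary illustrative c. It recovers the eigenvalues 2 and 4 - d, and the negative slope t/u = -c/(d - 2) of the second eigendirection mentioned above.

```python
import numpy as np

d, c = 3.0, 1.5                       # c is illustrative
M = np.array([[2.0, c],
              [0.0, 4.0 - d]])        # linearized flow matrix at t* = u* = 0
vals, vecs = np.linalg.eig(M)

print(sorted(vals))                   # eigenvalues 4 - d and 2
i = int(np.argmin(vals))              # index of the 4 - d eigendirection
print(vecs[0, i] / vecs[1, i])        # slope t/u = -c/(d - 2)
```

The eigenvalue 2 direction is the u = 0 axis; the other eigenvector has t and u of opposite signs, which is the straight line with negative slope drawn in the flow diagram.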
Presumably, if I go below Tc, I will be flowing in the other direction, where t is negative, and then the u is needed for stability, which means that I have to spontaneously choose a direction in which I order things. So the benefit of doing this renormalization and this study was that, in the absence of u, I could not reach the low-temperature part of the system. With the addition of u, I can describe both sides, and I can see under rescaling which set of points goes to the analog of high temperature, and which set of points goes to the analog of low temperature. And the point that corresponds to the transition between the two is on the basin of attraction of the Gaussian fixed point; that is, asymptotically, the theory would be described by just gradient of m squared. But this picture does not work if I go to d less than four. In d less than four, I can again draw u, I can draw t, and I will again find the fixed point at 0, 0. I will again find an eigendirection, at u equals 0, which pushes things out along the u equals 0 axis. Going from d above four to d below four does not materially change the location of the other eigendirection-- it pretty much stays where it was. The thing that it does change is the eigenvalue. So basically, here I will find that the flow is in this direction. And if I were to generalize the picture that I have, I would get things that would be going like this, or going like this. Once again, there is a set of trajectories that go on one side, and a set of trajectories that go on the other side. And presumably, by changing temperature, I will cross from one set of trajectories to the other. But the thing is, the point that corresponds to hitting the basin that separates the two sets of trajectories-- I don't know what it corresponds to. For d greater than 4, it went to the Gaussian fixed point. Here, currently, I don't know where it is going.
So I have no understanding, at this level, of what the scale-invariant properties are that describe magnets in three dimensions at their critical temperature. Now, the thing is that the resolution, and everything that we need, comes from staring more at this expansion that we had. We can see that this is an alternating series, because I started with e to the minus U, and so the next term is likely to have the opposite sign to the first term. So I anticipate that, at the end of doing the calculation, if I go to the next order, there will be a term here that is minus B u squared. Actually, there will be a contribution to dt by dl also, minus, let's say, A u squared. So I expect that if I were to do things at the next order-- and we will do that in about 15 minutes-- I will get these kinds of terms. Once I have that kind of term, you can see that I anticipate a fixed point occurring at the location u star, which is 4 minus d divided by B. And then, by looking in the vicinity of this fixed point, I should be able to determine everything that I need about the phase transition. But then you can ask: is this a legitimate thing to do? I have to make sure I do things self-consistently. I did a perturbation theory assuming that u is a small quantity, so that I can organize things in powers of u, u squared, u cubed. But what does it mean that I have control over powers of u, once I have landed at this fixed point, where u has a value that is fixed and determined? It is this 4 minus d over B. So in order for the series to make sense and be under control, I need this u star to be a small parameter. What knob do I have to ensure that this u star is a small parameter? It turns out that practically the only knob that I have is that this 4 minus d should be small. So I can only make this into a systematic theory by making it an expansion in a small quantity, which is 4 minus d. Let's call that epsilon.
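The effect of the anticipated second-order term is easy to see numerically. With epsilon = 4 - d, the flow du/dl = epsilon u - B u^2 (B is a positive coefficient computed later in the lecture; the value used below is purely illustrative, as is the step size) acquires the fixed point at u* = epsilon/B, and it is stable: flows started above or below it converge to u*.

```python
# Flow  du/dl = eps * u - B * u**2  has a stable fixed point at u* = eps / B.
eps, B = 0.1, 1.0        # illustrative; B is computed later in the lecture

def run(u0, dl=1e-3, steps=200_000):
    u = u0
    for _ in range(steps):
        u += (eps * u - B * u * u) * dl
    return u

u_star = eps / B
print(run(0.01), run(0.5), u_star)   # both flows approach u* = eps / B
```

Since u* is of order epsilon, the perturbative series evaluated at this fixed point is controlled only when epsilon is small, which is precisely the logic of the epsilon expansion stated in the lecture.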
And now we can hopefully, at the end of the day, keep track of appropriate powers of epsilon. So the Gaussian theory describes properly the behavior at four dimensions. At 4 minus epsilon dimensions, I can figure out where this fixed point is and calculate things correctly. All right? So that means that I need to do this calculation of the variance of u. So what I will do here is to draw a diagram to help me do that. So let's do something like this- seven rows and seven columns. The first row is to just tell you what we are going to plot. So basically, I need a u squared average, which means that I need to have two factors of u. Each one of them depends on m tilde and sigma. And so I will indicate the two sets. Actually already, we saw when we were doing the case of the first order calculation, how to decompose this object that has four lines. And we said, well, the first thing that I can do is to just use the m's. The next thing that I can do is I could replace one of the m's with a sigma. And there was a choice of four ways to do so. Or I could choose to replace two of the m's with wavy lines. And the question was, the right branch or the left branch? So there's two of these. I could put the wavy lines on two different branches. And there was four ways to do this one. I could have three wavy lines, and the one solid line could then be in one of four positions. Or I had all wavy lines, so there is this. So that's one of my factors of u on the vertical for this table. On the horizontal, I will have the same thing. I will have one of these. I will have four of these. I will have two of these. I will have four of these. I will have four of these, and one which is all wavy lines. Now I have to put two of these together and then do the average. Now clearly, if I put two of these together, there's no average to be done. I will get something that is the product of two factors of order m tilde to the fourth. But remember that I'm calculating the variance.
So that gets cancelled when I subtract the average of u, squared, of the same quantity. It's a disconnected piece. And I have stated that anything that is disconnected will not contribute. And in particular, there is no way to join this to anything. So everything that we put here in this row would correspond to no contribution once I have subtracted out the square of the average of u. And there is symmetry in this table. So the corresponding column is also all things that are disconnected entities. All right. Now let's see the next one. I have a sigma here and a sigma here. I can potentially join them together into a diagram that looks something like this. So I will have this, this. I have a leg here. This line gets joined to that line. And then I have this, this, this. Now, what is that beast? It is something that has six factors of m tilde. So this is something that is order of m tilde to the sixth power. So the point is that we started here saying that I should put every term that is consistent with symmetry. I just focused on the first fourth order term, but I see this is one of the things that happens under renormalization group. Everything that is consistent with symmetry, even if you didn't put it there at the beginning, is likely to appear. So this term appeared at this order. You have to think of ultimately whether that's something to worry about or not. I will deal with that next time. It is not something to worry about. But let's forget about that for the time being. Next term, I have one wavy line here and two wavy lines there. So it's something that is sigma cubed. Against the Gaussian weight, it gives me 0. So because of it being an odd term, I will get a 0 here. What color [INAUDIBLE] a 0 here. Somehow I need this row to be larger in connection with future needs. Next one is also something that involves three factors of sigma, so it is 0 by symmetry.
And again, since this is a diagram that has symmetry along the diagonal, there will be 0's over here. Next diagram. I can somehow join things together and create something that has four legs. It will look something like this. I will have this leg. This leg can be joined, let's say, with this leg, giving me something out here. And these two wavy lines can be joined together. That's a possibility. You say, OK. This is a diagram that corresponds to four factors of m tilde. So that should contribute over here. Actually, the answer is that diagram is 0. The reason for that is the following. Let's look at this vertex over here. It describes four momenta that have come together. And the sum of the four has to be 0. Same thing holds here. The sum of these four has to be 0. Now, if we look at this diagram, once I have joined these two together, I have ensured that the sum of these two is 0. The sum of all of four is 0. The sum of these two is 0. So the sum of these two should be 0 too. But that's not allowed. Because one of them is outside this shell, and the other is inside the shell. So just kinematically, there's no choice of momenta that I could make that would give a contribution to this. So this is 0 because of what I will write as this type of momentum conservation. Again, because of that, I will have a 0 from momentum conservation down here. The next diagram has one sigma from here and four sigmas. So that's an odd number of sigmas. So this will be 0 too, just because of up-down symmetry in m tilde. So we are gradually getting rid of places in this table. But the next one is actually important. I can take these two and join them to those two and generate a diagram that looks like this. So I have these two hands. These two hands get joined to the corresponding two hands. And I have a diagram such as this. Yes. AUDIENCE: [INAUDIBLE] Is there another way for them to join also? PROFESSOR: Yes, there is another way which suffers exactly the same problem.
Ultimately, because you see the problem is here. I will have to join two of them together, and the other two will be incompatible. Now, just to sort of give you ultimately an idea, associated with this diagram there will be a numerical factor of 2 times 2 from the horizontal times the vertical choices. But then there's another factor of 2 because this diagram has two hands. The other diagram has two hands. They can either join like this, or they can join like this. So there's two possibilities for the crossing. If you kind of look ahead to the indices that carry around, these two are part of the same branch. They carry the same index. These two would be carrying the same index, let's say j. These two would be carrying the same index, j prime. So when I do the sum, I will have a sum over j and j prime of delta j, j prime. I will have a sum over j delta jj, which will give me a factor of n. Any time you see a closed loop, you generate a factor of n, just like we did over here. It generated a factor of n. OK, so there's that. The next diagram looks similar, but does not have the factor of n. I have from over there the two hands that I have to join here. I have to put my hands across, and I will get something like this. So it's a slightly different-looking diagram. The numerical factor that goes with that is 2 times 4 times 2. There is no factor of n. Now, again, because of symmetry, there's a corresponding entity that we have over here. If I just rotate that, I will essentially have the same diagram. Opposite way, I have essentially that. The two hands reach across to these and give me something that is like this. To that, sorry. They join to that one. And the corresponding thing here looks like this. Numerical factors, this would be 2 times 4 times 2. It is exactly the same as this. This would be 4 times 4 times 2. At the end of the day, I will convince you that this block of four diagrams is really the only thing that we need to compute. 
But let's go ahead and see what else we have. If I take this thing that has two hands, and try to join it to this thing that has three hands, I will get, of course, 0, based on symmetry. If I take this term with two hands, join this thing with four hands, I will generate a bunch of diagrams, including, for example, this one. I can do this. There are other diagrams also. So these are ultimately diagrams with two hands left over. So they will be contributions to m tilde squared. And they will indeed give me modifications of this term over here. But we don't need to calculate them. Why? Because we want to do things consistently to order of epsilon. In the second equation, we start already with epsilon u, so that term is order of epsilon squared. Since u star will be order of epsilon, this term will be epsilon squared. The two terms I have to evaluate, they are both of the same order. But in the first equation, I already have a contribution that is order of epsilon. If I'm calculating things consistently to lowest order, I don't need to calculate this explicitly. I would need to calculate it explicitly if I wanted to calculate things to order of epsilon squared, which I'm not about to do. But to our order, this diagram exists, but we don't need to evaluate it. Again, because of the symmetry along the diagonal of the diagram, we have something here that is order of m tilde squared that we don't evaluate. OK, let's go further. Over here, we have two hands, three hands. By symmetry, it will be zero. Over here, we have two hands, four hands. I will get a whole bunch of other things that are order of m tilde squared. So there are other terms that are of this same form that would modify the factor of a, which I don't need to explicitly evaluate. All right. What do we have left? There is a diagram here that is interesting because it also gives me a contribution that is order of m tilde squared, which we may come back to at some point.
But for the time being, it's another thing that gives us a contribution to a. Here, what do we have? We have three hands, four hands, zero by symmetry, zero by symmetry. Down here, we have no solid hands. So we will get a whole bunch of diagrams, such as this one, for example, other things, which collectively will give a second order correction to the free energy. It's another constant that we don't need to evaluate. So let's pick one of these diagrams, this one in particular, and explicitly see what that is. It came out of putting two factors of u together. Let's be explicit. Let's call the momenta here q1, q2, and k1, k2. And the other u here came from before I joined them, there was a q3, q4. There was a k1 prime, k2 prime. So let's say the first u-- this is a diagram that will contribute at order of u squared. Second order terms in the series all come with a factor of one half. It is u to the n divided by n factorial. So this would be explicitly u squared over 2. For the choice of the left diagram, we said there were two possibilities. For the choice of the right diagram, there were two branches, one of which I could have taken. In joining the two hands together, I had a degeneracy of two, so I have all of that. A particular one of these is an integral over q1, q2. And from here, I would have integrations over q3, q4. These are all integrations for variables that are inside the cutoff that we are keeping, so these go from 0 to lambda over b. I have integrations from lambda over b to lambda for the variables k1, k2, k1 prime, k2 prime. And if I explicitly decided to write all four momenta associated with a particular index, I have to explicitly include the delta functions that say the sum of the momenta has to add up to 0. Now, what I did was to bring these two sigmas together. So I calculated one of those Gaussian averages that I have over there. Actually, before I do that, I note that these pairs are dotted together.
So I have m tilde of q1 dotted with m tilde of q2, and m tilde of q3 dotted with m tilde of q4. These two are dot products. These two are dot products. Here, I joined the two sigmas together. The expectation value gives me 2 pi to the d, a delta function of k1 plus k1 prime, divided by t plus K k1 squared, and so forth. And the delta function, if I call these indices j, j, j prime, j prime, I will have a delta j j prime. And from the lower two that I have connected together, I have 2 pi to the d delta function k2 plus k2 prime. Another delta j j prime, t plus K k2 squared, and so forth. Now I can do the integrations. But first of all, numerical factors, I will get 4u squared. As I told you, delta j j prime, delta j j prime will give me delta jj. Sum over j, I will get a factor of n. That's the n that I anticipated and put over there. I have the integrations 0 to lambda over b, ddq1, ddq2, ddq3, ddq4, 2 pi to the 4d. And then this m tilde q1 dotted with m tilde q2, m tilde q3 dotted with m tilde q4. Now, note the following. If I do the integration over k1 prime, k1 prime is set to minus k1. If I do the integration over k2 prime, k2 prime is set to minus k2. If I now do the integration over k2, k2 is set to minus q1 minus q2 minus k1, which if I insert over here, will give me a delta function that simply says that the four external q's have to add up to 0. So there is one integration that is left, which is over k1. So I have to do the integral from lambda over b to lambda, dd of k1 over 2 pi to the d. So basically, there's k1 running across the upper line, which gives me a factor of 1 over t plus K k1 squared, and so forth. And then there is what is running along the bottom line, which is k2 squared. And k2 squared is the same thing as q1 plus q2 plus k1, the whole thing squared. So the outcome of doing the averages that appear in this integral is to generate a term that is proportional to m to the fourth, which is exactly what we had, with one twist.
The twist is that the coefficient that is appearing here actually depends on q1 and q2. Of course, q1 and q2 being inner momenta are much smaller than k1, which is one of the shell momenta. So in principle, I can expand this. I can expand this as the integral from lambda over b to lambda of ddk over 2 pi to the d, of 1 over t plus K k squared, squared- I've renamed k1 to k. To lowest order in the q's, this is what survives. And then I can expand the remaining factor as 1 plus K times q1 plus q2 squared and so forth, divided by t plus K k squared, the whole thing raised to the minus 1 power. Point is that if I set the q's to 0, I have obtained a constant addition to the coefficient of my m to the fourth. But I see that further down, I have generated also terms that depend on q. What kind of terms could these be? If I go back to real space, these are terms that are of order m to the fourth, which was, if you remember, m squared m squared. But carry additional gradients with them. So, for example, it could be something like this. It has two factors of q, various factors of m. Or it could be something like m squared gradient of m squared. The point is that we have again the possibility, when we write our most general term, to introduce lots and lots of non-linearities that I didn't explicitly include. But again, I see, if I forget them at the beginning, the process will generate them for you. So I should have really included these types of terms in the beginning because they will be generated under the RG, and then I can track the evolution of all of the parameters. I started with 0 for this type of parameter. I generated it out of nothing. So I should really go back and put it there. But for the time being, let's again ignore that. And next time, I'll see what happens. So what we find at the end of evaluating all of these diagrams is that this beta h tilde evaluated at the second order, first of all, has a bunch of constants which in principle, now we can calculate to the next order.
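The shell integral that appears at zeroth order in the external momenta can be checked numerically. The sketch below works in d = 3 with illustrative values of t, K, and lambda (all assumptions, not values from the lecture), using the angular factor K_3 = 1/(2 pi squared), and compares a direct midpoint evaluation over a thin shell with the on-shell estimate K_d lambda to the d times log b over (t plus K lambda squared) squared.

```python
import math

# Numerically evaluate Int_{lambda/b}^{lambda} d^3k/(2pi)^3 1/(t + K k^2)^2
# and compare with the thin-shell estimate K_3 lam^3 ln(b) / (t + K lam^2)^2.
K3 = 1.0 / (2 * math.pi ** 2)            # S_3/(2 pi)^3 for d = 3

def shell_integral(t, K=1.0, lam=1.0, b=1.05, npts=1000):
    lo, hi = lam / b, lam
    dk = (hi - lo) / npts
    total = 0.0
    for i in range(npts):                # midpoint rule in the radial |k|
        k = lo + (i + 0.5) * dk
        total += K3 * k ** 2 / (t + K * k ** 2) ** 2 * dk
    return total

def thin_shell_estimate(t, K=1.0, lam=1.0, b=1.05):
    return K3 * lam ** 3 * math.log(b) / (t + K * lam ** 2) ** 2

exact = shell_integral(0.5)
approx = thin_shell_estimate(0.5)
print(exact, approx)   # agree to about one percent for b = 1.05
```

For an infinitesimal shell, b = e to the dl, the logarithm becomes dl and this is exactly the on-shell factor that feeds into the recursion relations.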
Then we find that we get terms that are proportional to m tilde squared, the Gaussian. And I can get the terms that I got out of second order and put it here. So I have my original t tilde now evaluated at order of u squared. Because of all those diagrams that I said I have to do. I will get a k tilde q squared and so forth, all of them multiplying m tilde squared. And I see that I generated terms that are of the order of m tilde to the fourth. So I have ddq1 through ddq4 over 2 pi to the 4d, a 2 pi to the d delta function of q1 plus q2 plus q3 plus q4, m tilde q1, m tilde q2, m tilde q3, m tilde q4. And what I have to lowest order is u. And then I have a bunch of terms that are of the form something like this. So they are corrections that are proportional to the integral from lambda over b to lambda of ddk over 2 pi to the d, 1 over t plus K k squared, squared. So essentially, I took this part of that diagram. That diagram has a contribution at order of u squared, which is 4n. So if I had written it as u squared over 2, I would have put 8n, the 8 coming from just the multiplication that I have there, 2 times 2 times 2 times n. Now, if you calculate the other three diagrams that I have boxed, you'll find that they give exactly the same form of the contribution, except that the numerical factor for them is different. I will get 16, 16, and 32, adding up together to a factor of 64 here. And then the point is that I will generate additional terms that are, let's say, order of q squared and so forth, which are the kinds of terms that I had not included. So what we find is that-- question? AUDIENCE: Where did you add the t tilde? PROFESSOR: OK. So let's maybe write this explicitly. So what would be the coefficient that I have to put over here? I have t at the 0-th order. At order of u, I calculated 4u times n plus 2 times this integral. The point is that when I add up all of those diagrams that I haven't explicitly calculated, I will get a correction here that is order of u squared whose coefficient I will call a.
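The multiplicities just quoted can be tallied in one line: 8n from the closed-loop diagram, then 16, 16, and 32 from the other three boxed diagrams, all divided by the 2 factorial of second-order perturbation theory. A trivial check (the individual factors are taken from the lecture, not rederived here):

```python
# Multiplicities of the four boxed diagrams: 2*2*2*n = 8n for the closed loop,
# then 2*4*2 = 16, 2*4*2 = 16, and 4*4*2 = 32 for the remaining three.
def m4_coefficient(n):
    total = 8 * n + 2 * 4 * 2 + 2 * 4 * 2 + 4 * 4 * 2
    return total / 2        # the 1/2! from expanding exp(-U) to second order

for n in (1, 2, 3):
    assert m4_coefficient(n) == 4 * (n + 8)
print(m4_coefficient(1))    # 36.0, i.e. 4*(n+8) at n = 1
```

So the net coefficient of the u squared correction is 4 times n plus 8, which is exactly the combination that appears in du by dl below.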
But then this is the 0-th order in the momenta, and then I have to go and add terms that are at the order of q squared and higher order terms. So this was, again, the coarse graining step of RG, which is the hard part. The rescaling and renormalization are simple. And what they give me at the end of the day are the modifications to dt by dl and du by dl that we expected. dt by dl we already wrote. 0-th order is 2t. First order is a correction. 4u times n plus 2 times the integral, which, when evaluated on the shell, gives me Kd lambda to the d over t plus K lambda squared and so forth. Now, this a here would involve an integration. Again, this integration I evaluated on the shell. So the answer will be some a that will depend on t, K, and other things, would be a contribution that is order of u squared. I haven't explicitly calculated what this a is. It will depend on all of these other parameters. Now, when I calculate du by dl, I will get this 4 minus d times u to the lowest order. To the next order, I essentially get this integral. So I have minus 4 times n plus 8, u squared. Evaluating that integral on the shell gives Kd lambda to the d over t plus K lambda squared and so forth, squared. And presumably, both of these will have corrections at higher orders, order of u squared, et cetera. So this generalizes the picture that we had over here. Now we can ask, what is the fixed point? In fact, there will be two of them. There is the old Gaussian fixed point at t star u star equals to 0. Clearly, if I set t and u equal to 0, I will stay at 0. So the old fixed point is still there. But now I have a new fixed point, which is called the O(n) fixed point because it explicitly depends on the symmetry of the order parameter, the number of components, n, as well as dimensionality. It's called the O(n) fixed point. So setting this to 0, I will find that u star is essentially epsilon divided by whatever I have here. I have 4 times n plus 8, Kd lambda to the d.
In the numerator, I would have t star plus K lambda squared, squared. And then I can substitute that over here to find what t star is. So t star would be minus 2 times n plus 2, Kd lambda to the d divided by t star plus K lambda squared, et cetera, times u star, which is what I have the line above, which is t star plus K lambda squared, et cetera, squared, divided by 4 times n plus 8, Kd lambda to the d. Now, over here, this is in principle an implicit equation for t star. But I forgot the epsilon that I have here. But it is epsilon multiplying some function of t star. So clearly, t star is order of epsilon. And I can set t star equal to 0 in all of the calculation, if I'm calculating things consistently to epsilon. You can see that this Kd lambda to the d cancels that. One of these factors cancels what I have over here. At the end of the day, I will get minus n plus 2 divided by n plus 8, K lambda squared, epsilon. And similarly, over here I can get rid of t star because it's already order of epsilon, and I have epsilon out here. So the answer is going to be K squared lambda to the power of 4 minus d divided by 4 times n plus 8, Kd lambda to the d. Presumably, both of these plus order of epsilon squared. So you can see that, as anticipated, there's a fixed point at a negative t star and some particular u star. There was a question. [INAUDIBLE] AUDIENCE: [INAUDIBLE] PROFESSOR: What is unnecessary? AUDIENCE: [INAUDIBLE] AUDIENCE: You already did 4 minus [INAUDIBLE]. AUDIENCE: The u star. Yeah, that one. PROFESSOR: Here. AUDIENCE: Erase it. PROFESSOR: Oh, the lambda to the d shouldn't be there. Right. Thank you. AUDIENCE: For t star, is there a factor of 2? PROFESSOR: t star, does it have a factor of 2? Yes, 2 divided by 4. There is a factor of 2 here. Thank you. Look at this. You don't really see much to recommend it. The interesting thing is to find what happens if you are not exactly at the fixed point, but slightly shifted.
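The order-epsilon fixed-point formulas can be verified by solving the truncated recursion relations exactly. The sketch below sets K = lambda = K_d = 1 (a unit convention, not the lecture's), and drops the uncalculated a u squared piece of dt by dl, which only shifts t star at order epsilon squared; the predicted values, with the factor of 2 from the discussion above included, are t star of about minus (n + 2) epsilon over 2 (n + 8) and u star of about epsilon over 4 (n + 8).

```python
# Solve 0 = 2t + 4u(n+2)/(1+t) and 0 = eps*u - 4(n+8)u^2/(1+t)^2 by iteration
# (units K = lambda = K_d = 1), then compare with the order-epsilon formulas.
def on_fixed_point(n, eps, iters=200):
    t, u = 0.0, eps / (4 * (n + 8))
    for _ in range(iters):
        u = eps * (1 + t) ** 2 / (4 * (n + 8))
        t = -2 * (n + 2) * u / (1 + t)
    return t, u

n, eps = 1, 0.01
t_star, u_star = on_fixed_point(n, eps)
t_pred = -(n + 2) * eps / (2 * (n + 8))
u_pred = eps / (4 * (n + 8))
print(t_star, t_pred)   # differ only at order eps^2
print(u_star, u_pred)
```

The residual differences are of order epsilon squared, consistent with truncating the expansion at first order.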
So we want to see what happens if t is t star plus delta t, u is u star plus delta u, if I shift a little bit. If I shift a little bit, linearizing the equation means I want to know how the new shifts are related to the old shifts. And essentially doing things at the linear level means I want to construct a two-by-two matrix that relates the changes delta t delta u to the shifts originally of delta t and delta u. What do I have to do to get this? What I have to do is to take derivatives of the terms for dt by dl with respect to t, with respect to u. Take the derivative with respect to t. What do I get? I will get 2. I will get minus 4u times n plus 2, Kd lambda to the power of d divided by t plus K lambda squared, squared. So the derivative of 1 over t became minus 1 over t squared. There is a second order term. So there will be a derivative of that with respect to t multiplying u squared. I won't calculate it. Delta u, if I make a change in u, there will be a shift here, which is 4 times n plus 2, Kd lambda to the d divided by t plus K lambda squared. From the second order term, I will get minus 2au. For the second equation, if I take the derivative of this variation in t, I will get plus 8 times n plus 8, u squared, Kd lambda to the d over t plus K lambda squared and so forth, cubed. And in the fourth place, I will get epsilon minus 8 times n plus 8, Kd lambda to the d, u divided by t plus K lambda squared and so forth, squared. Now, I want to evaluate this matrix at the fixed point. So I have to linearize in the vicinity of fixed point. Which means that I put the values of t star and u star everywhere here. And then I have to calculate the eigenvalues of this matrix. Now, note that this element of the matrix is proportional to u star squared. So this is certainly evaluated at the fixed point order of epsilon squared. Order of epsilon squared to me is zero. I don't see order of epsilon squared. So I can get rid of this. Think of a zero here at this order.
Which means that the matrix now has zeroes on one side of the diagonal, which means that what is appearing here are exactly the eigenvalues. Let's calculate the eigenvalue that corresponds to this element. I will call it yu. It is epsilon minus 8 times n plus 8, Kd lambda to the d, u star, divided by t star plus K lambda squared, squared. Well, since I'm calculating things to order of epsilon, I can ignore that t star down there. I have K squared lambda to the fourth- K squared times lambda squared and so forth, squared. Multiplied by u star. Where is my u star? Sorry, my u star is up here- K squared lambda to the 4 minus d over 4 times n plus 8, Kd, times epsilon. Right. Now the miracle happens. So K squared cancels the K squared. Lambda to the fourth and lambda to the d cancel this lambda to the 4 minus d. The Kd cancels the Kd. The n plus 8 cancels the n plus 8. The 8 divided by the 4 gives 2. The answer is epsilon minus 2 epsilon, which is minus epsilon. OK? [LAUGHTER] So this direction has become irrelevant. The epsilon here turned into a minus epsilon. That relevant direction has disappeared. There is this relevant direction that is left, which is a slightly shifted version of what my original [INAUDIBLE] direction was. And you can calculate yt. So you go to that expression, do the same thing that I did over here. You'll find that at the end of the day, you will find 2 minus n plus 2 over n plus 8, epsilon. All these unwanted things, like Kd's, these lambdas, et cetera, disappear. You expected at the end of the day to get pure numbers. The exponents are pure numbers. They don't depend on anything. So we had to carry all of this baggage. And at the end of the day, all of the baggage miraculously disappears. We get a fixed point that has only one relevant direction, which is what we always wanted. And once we have the exponent, we can calculate everything that we want, like the exponent for divergence of correlation length is the inverse of that. You can calculate how it has shifted from one half.
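The two eigenvalues can also be checked numerically: linearize the truncated recursion relations (units K = lambda = K_d = 1, and ignoring the uncomputed order u squared piece of dt by dl, which does not affect the exponents at this order) around the fixed point and compare against y_t = 2 minus (n + 2) epsilon over (n + 8) and y_u = minus epsilon. This is a sketch under those stated assumptions.

```python
import math

# Jacobian of dt/dl = 2t + 4u(n+2)/(1+t), du/dl = eps*u - 4(n+8)u^2/(1+t)^2
# at the O(n) fixed point; eigenvalues from the 2x2 trace and determinant.
def exponents(n, eps):
    t, u = 0.0, eps / (4 * (n + 8))
    for _ in range(200):                      # locate the fixed point first
        u = eps * (1 + t) ** 2 / (4 * (n + 8))
        t = -2 * (n + 2) * u / (1 + t)
    a11 = 2 - 4 * u * (n + 2) / (1 + t) ** 2
    a12 = 4 * (n + 2) / (1 + t)
    a21 = 8 * (n + 8) * u ** 2 / (1 + t) ** 3  # order eps^2, nearly zero
    a22 = eps - 8 * (n + 8) * u / (1 + t) ** 2
    tr, det = a11 + a22, a11 * a22 - a12 * a21
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2    # (y_t, y_u)

n, eps = 1, 0.01
yt, yu = exponents(n, eps)
print(yt, 2 - (n + 2) / (n + 8) * eps)   # agree up to order eps^2
print(yu, -eps)
```

The nearly-vanishing a21 element is the numerical counterpart of the "think of a zero here at this order" step above.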
The shift is n plus 2 over 4 times n plus 8, times epsilon. And we see that the exponents now explicitly depend on dimensionality of space because of this epsilon. They explicitly depend on the number of components of your order parameter n. So we have managed, at least in some perturbative sense, to demonstrate that there exists a kind of scale invariance that characterizes this O(n) universality class. And we can calculate exponents for that, at least perturbatively. In the process of getting that number, I did things rapidly at the end, but I also swept a lot of things under the rug. So the task of next lecture is to go and look under the rug and make sure that we haven't put anything that is important away.
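To first order in epsilon the correlation-length exponent is therefore nu = 1/2 + (n + 2) epsilon over 4 (n + 8). A sketch of the crude epsilon = 1 extrapolation down to three dimensions, to be taken with appropriate caution at first order:

```python
# First-order epsilon-expansion estimate nu = 1/2 + (n+2)/(4(n+8)) * eps,
# extrapolated (crudely) to d = 3, i.e. eps = 1, for a few values of n.
def nu_first_order(n, eps=1.0):
    return 0.5 + (n + 2) / (4 * (n + 8)) * eps

for n in (1, 2, 3):
    print(n, round(nu_first_order(n), 4))
# n = 1 gives about 0.583, in the right direction compared with the
# accepted value of roughly 0.63 for the three-dimensional Ising class.
```

The estimate moves nu away from the mean-field one half in the correct direction, which is as much as one can ask of the leading term.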
[MIT_8334_Statistical_Mechanics_II_Spring_2014 / 4_The_LandauGinzburg_Approach_Part_3.txt]
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, let's start. So we have been looking at the problem of phase transitions from the perspective of a simple system which is a piece of magnet. And we find that if we change the temperature of the system, there is a critical temperature, Tc, that separates paramagnetic behavior on the high temperature side and ferromagnetic behavior on the low temperature side. Clearly in the vicinity of this point, whether you're on one side or the other, the magnetization is small and we are relying on that to give us a good parameter to expand things in. The other thing is that we anticipated that over here there are long wavelength fluctuations of the magnetization field and so we did averaging and defined a statistical field, this magnetization as a function of position. Then we said, OK, if I'm just changing temperature, what governs the behavior of the probability that I will see in my sample some particular configuration of this magnetization field? So there is a functional that governs that. And the statement that we made was that whatever this functional is, I can write it as the exponential of something if I want. This probability is positive. I will assume that it is-- locally there is a probability density that I will integrate across the system. Probability density is a function of whatever magnetization I have around point x. And so then when we expanded this, what did we have? We said that the terms that are consistent with rotational symmetry have to be things like m squared, m to the fourth, and so forth.
In principle, there is a long series but hopefully since m is small, I don't have to include that many terms in the series. And additionally, I can have an expansion in gradient and the lowest order term in that series was the gradient of m squared and potentially higher order terms. OK? We said that we could, if we relied on looking at the most probable configuration of this weight, make a connection between what is going on here and the experimental observation. And essentially the only thing that we needed to do was to basically expand t around here, so t was made to be proportional to T minus Tc. And then we could explain these kinds of phenomena by looking at the behavior of the most probable magnetization. Now I kind of said that we are going to have long wavelength fluctuations. There was one case where we actually saw a video of those long wavelength fluctuations and that was for the case of critical opalescence taking place at the liquid gas mixture at its critical point. Can we try to quantify that a little bit better? The answer is yes, we can do so through scattering experiments. And looking at the sample was an example of a scattering experiment, which if you want to do more quantitatively, we can do the following- we can say that there is some incoming field, electromagnetic wave that is impinging on the system. It has an incoming wave vector k. It sort of goes through the sample and then when it comes out, it gets scattered and so what I will see is some k f that comes from the other part of the system. In principle, I guess I can put a probe here and measure what is coming out. And essentially it will depend on the angle towards which this has rotated. If I ask, well, how much has been scattered? We'd say, well it's a complicated problem in quantum mechanics.
Let's say this is a quantum mechanical procedure- you would say that there's an amplitude for the scattering that is proportional to some overlap between the kind of state that you started with, what we started with is an incoming wave with k initial, presumably there is the initial state of my sample before the wave hits it, and then I end up with the final configuration which is k f, whatever the final version of my system is. Now between these two, I have to put whatever is responsible for scattering this wave so there is in some sense some overall potential that I have to put over here. Now let's think about the case of this thing being, say, a mixture of gas and liquid, well what is scattering light? Well, it is the individual atoms that are scattering light and there are lots of them. So basically I have to sum over all of the scattering elements that I have in my system. Let's say I have a u for a scattering element, i that is located at position-- maybe bad choice, let's call it sum over alpha. X alpha is the position of let's say the atom that is scattering here. OK? So now since I'm dealing with, say, linear order, not multiple scattering, what I can do is I can basically take this sum outside. So this thing is related to a sum over alpha of the scattering I would have for individual elements that are scattering. And then roughly each individual element will scatter an amount that I will call sigma q. If you have elastic scattering what happens is that essentially your initial k simply gets rotated without changing its magnitude. So what happens is that essentially everything will end up being a function of this momentum transfer q, which is k f minus k i, whose magnitude would be twice the magnitude of your k times the sine of half the angle if you just do the simple geometry over there. So this is for elastic scattering which is what we will be thinking about.
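The elastic-scattering geometry just described reduces to a one-line relation; a minimal sketch:

```python
import math

# Elastic scattering: |k_f| = |k_i| = k, so the momentum transfer
# q = k_f - k_i has magnitude 2 k sin(theta/2) for scattering angle theta.
def momentum_transfer(k, theta):
    return 2.0 * k * math.sin(theta / 2.0)

print(momentum_transfer(1.0, math.pi))   # backscattering: q = 2k
print(momentum_transfer(1.0, 0.0))       # forward scattering: q = 0
```

So small scattering angles probe small q, which is exactly the long-wavelength regime where the critical fluctuations live.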
Now the amount that each individual element scatters like each atom is indeed a function of your momentum transferred from the scattering probe. But the thing that you're scattering from is something that is very small like an atom, so it turns out that the resulting sigma will only vary over scales where q is comparable to the inverse of the size of whatever is scattering, which is something that is very large. So most of this stuff that is happening at small q, most of the variation that is observed, comes from summing over the contributions of the different elements. So going to the continuum limit, this becomes an integral across your system of whatever the density of the thing that is scattering is. Indeed if I'm thinking about the light scattering experiment that we saw with critical opalescence, what you would be looking at is this density of liquid versus gas, which if I want to convert to q, I have to do a Fourier transform here. And so this is the amplitude of scattering that I expect and we can see that it is directly probing the fluctuations of the system, Fourier transform [INAUDIBLE] number q. Eventually of course this is the amplitude- what you will be seeing is the amount that is scattered, S of q, which will be proportional to this amplitude squared. We'll have a part that at small q is roughly constant, so basically at small q I can regard this as a constant. So at small q all of the variation is going to come from this rho of q squared. Of course again thinking about the case of the liquid gas system, where we were seeing the picture, there were variations, so there's essentially lots and lots of these rho of q's depending on which instant of time you're looking at. And then it would be useful to do some kind of a time average and hope that the time average comes from the result of a probability measure such as this. OK? So that's the procedure that we'll follow. We're going to go slightly beyond what we did before.
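The statement that S(q) is proportional to the average of rho of q squared can be played with on a toy model. The sketch below uses a one-dimensional AR(1) chain as a hypothetical stand-in for a density field with a finite correlation length (everything here - the field, its parameters, the averaging - is illustrative, not from the lecture), and shows the time-averaged scattering concentrated at small q, as in critical opalescence.

```python
import math
import random

# Toy 1d density field rho_n with short-range correlations (AR(1), a = 0.9),
# probed via S(q) = <|sum_n rho_n e^{-i q n}|^2>/N averaged over realizations.
def structure_factor(q, a=0.9, N=512, samples=100, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x = rng.gauss(0.0, 1.0 / math.sqrt(1.0 - a * a))  # stationary start
        re = im = 0.0
        for n in range(N):            # direct Fourier sum at this one q
            re += x * math.cos(q * n)
            im -= x * math.sin(q * n)
            x = a * x + rng.gauss(0.0, 1.0)
        total += (re * re + im * im) / N
    return total / samples

s_small, s_large = structure_factor(0.1), structure_factor(1.0)
print(s_small, s_large)   # the small-q fluctuations dominate the scattering
```

The averaged periodogram approaches the Lorentzian-like spectrum of the chain, so long-wavelength (small q) components carry far more weight than short-wavelength ones.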
What we did before was we started with a probability distribution such as this, which we posed on the basis of symmetry, and then calculated the singular behavior of various thermodynamic functions such as heat capacity, susceptibility, magnetization, et cetera-- all of them macroscopic quantities. But this is a probability that also works at the level of microscopics. It's really a probability as a function of our configurations, and the way that that is probed is through scattering experiments. So scattering experiments really probe the Fourier transform of this probability that we have posed over here. OK? Now again, the full probability that I have written down there is rather difficult. In the case of the liquid gas system this rho would be the density; in the magnetic case it would be the fluctuations of the magnetization around the mean. I should note that in the case of the magnet you can ask, well, how do you probe things? In that case you need some probe that scatters from the magnetization at each point, and the appropriate probe for magnetization is neutrons. So you basically hit the system with a beam of neutrons-- maybe polarizing their spins in some particular direction-- and they hit the spins of whatever is in your sample and get scattered according to this mechanism. And what you will be seeing at small q is related to fluctuations of this magnetization field. I think he was looking for that-- I'm not going to run after him. OK, that teaches them to leave the room earlier. So let's see what we can do for the case of calculating this quantity. Now I'm not going to calculate it for the nonlinear form; it's rather difficult. What I'm going to do is to expand on the trick that we were using last time, which is to look at the most probable state.
So basically we look at that functional where we were doing the saddle point integration, and the first thing that we did was to find the configuration of the magnetization field that was the most likely. And the answer was that because K is positive, the magnetization that extremizes that probability is something that is uniform-- does not depend on x-- and is pointing all in one direction; let's call it e hat 1. Picking out that one direction is the symmetry breaking in the zero field limit. Of course that spontaneous symmetry breaking only occurs when t is negative; for t positive, m bar is 0. For t negative, just minimizing the expression t over 2 m squared plus u m to the fourth gives you m bar squared equals minus t over 4u. OK? So that's the most probable configuration. What this scattering is probing is fluctuations, so let's expand around the most probable configuration. So let's say that I have thermally excited an m of x which is m bar plus a little bit phi l that varies from one location to another-- like, if I'm looking at this critical opalescence, it's the variation in density from one location to another location. But that is the whole story only if I'm looking at the case of something that has a single component. If I have multiple components, I can also have fluctuations in the remaining n minus 1 directions, and this is a sum over alpha going from 2 to n of phi t alpha times e alpha. So I have broken the fluctuations into two types. Let's say for n equals 2 your m bar would be pointing in some particular direction. The phi l corresponds to increasing or decreasing the length, whereas phi t corresponds to going in the orthogonal directions, of which in general there are n minus 1 different components. OK? So I want to ask, what's the probability of this set of fluctuations? All I need to do-- and again this is x-dependent-- is to substitute into my general expression for the probability, and for that I need a few things.
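As an aside, the most probable value just quoted can be verified numerically (a sketch; the values of t and u are arbitrary illustrative numbers with t negative and u positive): minimizing (t/2) m squared plus u m to the fourth should give m bar squared equal to minus t over 4u.

```python
import numpy as np

t, u = -0.5, 0.25  # illustrative values: t < 0, u > 0

def psi(m):
    """Uniform part of the Landau-Ginzburg weight (per unit volume)."""
    return 0.5 * t * m**2 + u * m**4

# Minimize on a fine grid of magnetization magnitudes.
m = np.linspace(0, 2, 200001)
m_bar = m[np.argmin(psi(m))]

m_bar_exact = np.sqrt(-t / (4 * u))  # the saddle point result from the lecture
print(m_bar, m_bar_exact)
```

A grid search is crude but makes the point without assuming any optimizer; the analytic and numerical minima agree to the grid resolution.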
One of the things that I need is the gradient of m squared. The uniform part has no gradient, so I will either get gradient of phi l squared, or the gradient of the n minus 1 component vector field phi t, which I will simply write as gradient of phi t squared. So phi t is an n minus 1 component field. I can then ask, what is m squared? For the first term I need to square this expression: I will get m bar squared plus 2 m bar phi l plus phi l squared, which comes from the component that is along e 1. All the other components will add up to give me the magnitude squared of this transverse field that exists in the other n minus 1 directions. The other term that I need is m to the fourth, and in particular we saw that it is absolutely necessary, if t is negative, to include the m to the fourth term, because otherwise the probability that we were writing just didn't make sense, and we need to write expressions for probability that are physically sensible. So I just take the line above and square it, but I want to keep terms only to quadratic order. To zeroth order I have m bar to the fourth; to first order I have 4 m bar cubed phi l; and then there are a bunch of terms that are order of phi squared. Squaring the 2 m bar phi l term will give me 4 m bar squared phi l squared, but the cross term of m bar squared with phi l squared will also give me 2 m bar squared phi l squared, for a total of 6 m bar squared phi l squared. The phi t squared contribution comes simply from the cross term, twice m bar squared phi t squared. And then there are higher order terms, cubic and fourth order in phi t and phi l, that I don't write, assuming that the fluctuations that I'm looking at are small around the most probable state. OK? So if I stick with this quadratic form, then the probability of fluctuations across my system, characterized by phi l of x and phi t of x, is proportional to exponential of minus integral d d x of the following. I have an overall factor of K over 2 for the first term.
I have gradient of phi l squared with the K over 2 in front, and I'm going to pull out an overall one-half so the bookkeeping works out. Then let me collect everything that multiplies phi l squared. I have t over 2 phi l squared from t over 2 m squared. The other phi l squared from the m to the fourth term gets multiplied by u, so I will get plus 12 u m bar squared over 2-- 12 rather than 6 because of the factor of one-half I pulled out. And then I have a term that is K over 2 gradient of the vector phi t squared, then t over 2 phi t squared, and then 2 m bar squared multiplied by u becomes 4u m bar squared over 2. And there are higher order terms that I will not write down. Yes? Did I-- AUDIENCE: You said phi l [INAUDIBLE]. PROFESSOR: Good. Yes, so the question is, I immediately jumped to second order-- what happened to the linear terms? There's a linear term here and there's a linear term here. AUDIENCE: [INAUDIBLE] PROFESSOR: Let's write them down. The coefficient of phi l would be t m bar plus 4 u m bar cubed, and minimizing that expression is precisely setting this first derivative to zero. So if you are expanding around an extremum, the most probable state, then by construction you're not going to get any terms that are linear in either phi l or phi t. OK? Yes? AUDIENCE: What's the reason for not including in our general probability a term like [INAUDIBLE] of m squared? PROFESSOR: We could. Essentially that amounts to an expansion in powers of the gradient, which if I go to Fourier space would become q squared, then q to the fourth, et cetera. We are looking at small q, or large wavelengths, and so we are going to focus on the first few terms. But those terms exist, just as they existed for the phonon spectrum.
We looked at the linear portion and then realized that going away from q equals 0 generates all kinds of other terms. OK? I mean, these are very important things to ask again and again, to ultimately convince yourself, because in reality that expansion that I have written has an infinity of terms in it. You have to always convince yourself that close enough to the critical point all of those other terms that I don't write down are not going to make any difference. OK? All right. So that's the weight. What I'm going to do is the same thing that we did last time for the case of Goldstone modes, et cetera, which is to go to a Fourier representation. Any one of the components, be it longitudinal or transverse, I will write as a sum over q of e to the i q dot x times some Fourier component, divided by a square root of V just so that the normalization will look simple. And if I substitute for phi of x in terms of phi of q, just as we saw last time, the probability will decompose into independent contributions for each q, because once you substitute it here, every quadratic term will have an integral over x and sums over q and q prime, and e to the i q plus q prime dot x integrated over x forces q and q prime to be the same up to a sign. So then we find that the probability distribution as a function of these Fourier amplitudes phi l and phi t decomposes into a product: basically each q mode is acting independently of all the others. And also at the quadratic level we see that there is no crosstalk between transverse and longitudinal, so we will have one weight for the transverse and one weight for the longitudinal. And what is it actually going to look like? For phi l, the exponent is proportional to phi l of q squared, multiplied by K q squared over 2-- the gradient squared Fourier transforms into K q squared over 2-- plus something that is not q dependent.
So let's write it in this fashion: K over 2 times q squared plus something that is not q dependent, which by convention I will write as xi l to the minus 2. It has dimensions of inverse length squared, because q squared has dimensions of inverse length squared, so I will shortly define a length such that the two terms have the same dimensions-- xi l is defined in that fashion. And similarly I have an exponential that goes like K over 2 times q squared plus xi t to the minus 2, multiplying phi tilde t of q squared-- and this is a vector, so there are n minus 1 components there. OK? And you can see that potentially these two lengths, xi l and xi t, are different. In fact, let's just write down what they are. So this coefficient K over xi l squared is defined to be t plus 12 u m bar squared. OK? Question? AUDIENCE: [INAUDIBLE] PROFESSOR: OK. Now this depends on whether you are at t positive or t negative-- it had better be positive, since I have written it as one over some positive quantity. For t positive, m bar is 0, so this is just t. For t negative, m bar squared is minus t over 4 u, so the 12 u m bar squared becomes minus 3t, and the whole thing becomes minus 2t. OK? And K over xi t squared is t plus 4u m bar squared. It is t for t positive. For t negative, substituting for m bar squared, it gives me 0. OK? Actually, the top one hopefully you recognize, or remember from last time: we had exactly this expression, t and minus 2t, when we calculated the susceptibility. This was the inverse susceptibility. In fact, I can now be more precise and call it the inverse of the longitudinal susceptibility. And what we have here is the inverse of the transverse susceptibility. What does that mean? Let me remind you what susceptibility is. Susceptibility is: you have a system, you put on a little bit of field, and then see how the magnetization responds.
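These two combinations can be checked numerically for t negative (a sketch; the values of t and u below are arbitrary): substituting m bar squared equals minus t over 4u should give K over xi l squared equals minus 2t, and K over xi t squared equals 0.

```python
t, u = -0.8, 0.3              # any t < 0, u > 0 will do
m_bar_sq = -t / (4 * u)       # saddle point value of m_bar squared

K_over_xi_l_sq = t + 12 * u * m_bar_sq  # longitudinal: K / xi_l^2
K_over_xi_t_sq = t + 4 * u * m_bar_sq   # transverse:   K / xi_t^2

print(K_over_xi_l_sq, -2 * t)  # longitudinal stiffness equals -2t
print(K_over_xi_t_sq)          # transverse stiffness vanishes (Goldstone modes)
```

The vanishing transverse stiffness is the algebraic face of the Goldstone modes discussed below: rotating the magnetization costs nothing at this order.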
If you are in the ordered phase, so that your system is spontaneously pointing in one direction, then if you put the field along this direction, you have to climb this Mexican hat potential and you have to pay a cost to do so. Whereas if I put the field perpendicular, all that happens is that the magnetization rotates, so it can respond without any cost-- and that's what this is. These are really the Goldstone modes that we were discussing: the transverse fluctuations that I have written before. So again, as we discussed last time, if you break a continuous symmetry you will have Goldstone modes, and these Goldstone modes are the ones that are perpendicular to the average magnetization, if you like. OK? So now we have a prediction. We say that if I look at these phi phi fluctuations-- let me pick a particular q here and a q prime there, and pick two possibly different components alpha and beta. Well, if I look at this average, since the weight is a product of contributions from different q's, the answer will be 0 unless q and q prime add up to 0. And if I'm looking at the same q, I'd better also make sure that I'm looking at the same component, because the longitudinal and transverse components-- or any of the n minus 1 transverse components among each other-- have completely independent Gaussian weights. If I'm now looking at the same Gaussian, I can just immediately read off its variance, which is one over K times q squared plus whatever the appropriate xi to the minus 2 is for that direction-- whether it's xi l or xi t potentially makes a difference. OK? Right. So now we have a prediction for our experimentalists. I said that these guys can go and measure the scattering as a function of angle at small angle, and they can fit how much is scattered as a function of q.
So we predict that if they look at something like phi l squared-- if you're thinking about the liquid gas system, that's really the only thing you have, because there is no transverse component for a scalar variable-- we claim that if you go and look at those critical opalescence pictures that we saw, and do it more precisely, and see what happens as a function of the scattered wavenumber q, you will get a shape that is 1 over q squared plus xi to the minus 2. This kind of shape, called a Lorentzian, is indeed what you commonly see for all kinds of scattering line shapes. OK? So we have a prediction. Of course, the reason that it works is because in principle we know that this series will have higher order terms, as we discussed-- like q to the fourth, q to the sixth, et cetera-- but they fall way out here where you're not going to be seeing all that much anyway. Now, the place where this curve turns around, from being something that is dominated by 1 over K xi to the minus 2 to something that falls off as 1 over K q squared or maybe even faster-- the borderline is this inverse length scale that we indicated, xi l to the minus 1. So what happens as I go closer to the phase transition point? As I go closer to the phase transition point, t goes to zero, and this xi inverse goes towards zero. So if this curve is for some temperature above Tc and I go to some lower temperature, then the new curve will start higher. Actually, it doesn't cross the first curve: just because it starts higher, it bends and joins that curve at a further point. Eventually, when you go to exactly the critical point, you get the envelope of all of these curves, which is a 1 over q squared type of curve. So right at the point where t equals 0, the prediction is that in the Lorentzian shape the constant that accompanies q squared vanishes, and you will see 1 over q squared. OK?
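The two regimes of the Lorentzian can be made concrete (a sketch with made-up values of K and xi): S(q) = 1/(K(q^2 + xi^-2)) is flat for q much less than 1/xi and falls off as 1/(K q^2) for q much greater than 1/xi.

```python
K, xi = 1.0, 10.0  # illustrative stiffness and correlation length

def S(q):
    """Lorentzian line shape predicted for the scattered intensity."""
    return 1.0 / (K * (q**2 + xi**-2))

q_small, q_large = 1e-4, 1e3  # far below / far above the crossover at 1/xi

print(S(q_small), xi**2 / K)             # plateau value xi^2 / K at small q
print(S(q_large), 1 / (K * q_large**2))  # 1/(K q^2) tail at large q
```

As t goes to 0, xi diverges, the plateau grows like xi squared, and the crossover point 1/xi slides to q = 0, leaving the pure 1/q^2 curve.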
Now, the results of experiments in reality are very happily fitted by the Lorentzian when you're away from the critical point, but the claim is that when you are exactly at the critical point it's not quite 1 over q squared; it seems to be slightly different. At Tc, the scattering appears to be more like 1 over q to the power 2 minus a small amount. That's where another critical exponent, eta, is introduced, so that's another thing that ultimately you have to try to figure out and understand. OK? Of course I drew the curves for the longitudinal component. If I look at the curves for the transverse components-- and again, by appropriate choice of spin polarized neutrons you can decompose different components of scattering from the magnetization field of a piece of iron, for example-- if you are above Tc, there is really no difference between longitudinal and transverse, because there is no direction that is selected, and you can see that the forms you will get above Tc would be exactly the same. When you go below Tc, that's where the difference appears, because the length scale that appears for the longitudinal fluctuations is finite-- it corresponds to having to push the magnetization up from the bottom of the Mexican hat potential-- whereas there is no cost in the other direction. So if you can probe the fluctuations that correspond to these Goldstone modes, you would see the 1 over q squared type of behavior. OK? So the story from last time about the Goldstone modes fluctuating a lot because of their low cost certainly remains in this case. We have now explicitly separated out the longitudinal fluctuations, which are finite because they are controlled by the stiffness of going up from the bottom of the potential, whereas there is no stiffness associated with the transverse ones. OK? All right, so good. So we've talked about some of the things that are experimentally observed. Any questions?
Now, we looked at things in Fourier space, corresponding to the momentum transfer in the scattering experiment, but we can also ask what is happening in physical space. That is, if I have a fluctuation at one point, how far does the influence of that fluctuation propagate in space? So for that I need to calculate things like the average of phi l of x times phi l of x prime. Let's say we want to calculate this quantity. OK? Now I can certainly decompose phi l of x in terms of these Fourier components. And so what do I get? I will get a sum over q and q prime-- let me write it out explicitly in one case-- of e to the i q dot x, e to the i q prime dot x prime, two factors of root V giving me the V in the denominator, and then I have phi l of q phi l of q prime with the expectation value over it. Now we said that the different q's and q primes are uncorrelated, so here I immediately get a delta function in q plus q prime, and when I'm looking at the matching q and q prime, I have this factor of 1 over K times q squared plus xi l to the minus 2. OK? So due to the delta function the whole thing becomes a sum over a single q, of 1 over V, e to the i q dot x minus x prime-- because q prime was set to minus q-- divided by K times q squared plus xi l to the minus 2. OK? Then I go to the continuum limit of a large size: the sum over q gets replaced by the integral over q times the density of states, the V's disappear, I have a factor of 2 pi to the d, and what I have is the Fourier transform of 1 over K times q squared plus xi l to the minus 2. And I will write this as minus 1 over K times a function I d that depends on the dimension d, and clearly depends on the separation x minus x prime and on the correlation length xi l. Why I said correlation length will shortly become apparent. So I introduce a function I d, which depends on x and xi, to be minus the integral of d d q over 2 pi to the d of e to the i q dot x divided by q squared plus xi to the minus 2.
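In standard notation, writing the correlation length as xi, the correlation function and the function just defined read (a compact restatement of the same content):

```latex
\langle \phi_\ell(\mathbf{x})\,\phi_\ell(\mathbf{x}')\rangle
= -\frac{1}{K}\, I_d(\mathbf{x}-\mathbf{x}',\xi_\ell),
\qquad
I_d(\mathbf{x},\xi) \equiv -\int \frac{d^d\mathbf{q}}{(2\pi)^d}\,
\frac{e^{i\mathbf{q}\cdot\mathbf{x}}}{q^2 + \xi^{-2}}.
```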
If that xi were not there, that's the integral that we did last time, and it was the Coulomb potential in d dimensions. So this presumably is related to that. And we can use the same trick that we employed to make it explicit last time. We can take the Laplacian of this potential I d, and what happens is that I bring down two factors of i q-- that is, minus q squared-- so the minus sign goes away. I will have the integral of d d q over 2 pi to the d, a q squared in the numerator, and the denominator is q squared plus xi to the minus 2. I add and subtract xi to the minus 2 in the numerator and Fourier transform. OK? The first part, dividing by the denominator, is simply 1, and the Fourier transform of 1 gives me a delta function. And then what I have left is minus xi to the minus 2 times the same integral that I used to define I d. So this becomes plus I d of x divided by xi squared. OK? So whereas in the absence of xi you have the potential due to a charge, the presence of xi adds this additional term, which corresponds to some kind of damping. So this equation you have probably seen in the context of the screened Coulomb interaction, giving rise to the Yukawa potential in three dimensions. We would like to look at it in d dimensions, so that we know what the behavior is in general. OK? So again, what I'm looking at is the potential due to a charge at the origin, so this I d of x in principle only depends on the magnitude of x and not on its direction. It is something that has spherical symmetry in d dimensions. So I use that fact of spherical symmetry to write down the expression for the Laplacian. OK? You can again use Gauss's law, if you've forgotten, but in the presence of spherical symmetry the general expression for the Laplacian in d dimensions is this. OK? If d were equal to 1, it would be a simple second derivative, and in higher dimensions you have additional factors, which you get by applying Gauss's law to shells around the origin.
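In symbols, the equation obtained for I_d and the spherically symmetric Laplacian used to solve it are (a summary of what is on the board):

```latex
\nabla^2 I_d = \delta^d(\mathbf{x}) + \frac{I_d}{\xi^2},
\qquad
\nabla^2 I_d = \frac{1}{x^{d-1}}\frac{d}{dx}\!\left(x^{d-1}\frac{dI_d}{dx}\right)
= \frac{d^2 I_d}{dx^2} + \frac{d-1}{x}\,\frac{dI_d}{dx}.
```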
You can very easily convince yourself of that: the x to the d minus 1 is the area factor in d minus 1 dimensions. And then I can write this out: either the d by dx acts on the inner derivative, giving the second derivative with the x to the d minus 1 cancelling, or it acts on x to the d minus 1, giving d minus 1 times x to the d minus 2-- which, after dividing by x to the d minus 1, gives d minus 1 over x times dI by dx. So the equation that I have to solve is this object equals I over xi squared, plus a delta function at the origin. OK? Now if you were in one dimension you wouldn't have the first-derivative term at all, and you would have, away from x equals 0, the second derivative proportional to the function divided by xi squared. So you would immediately write, away from x equals 0, that the answer is proportional to e to the minus x over xi-- proportional, because you have to fit the amplitude, et cetera. Now in higher dimensions what happens is that this solution gets modified: it falls off with some additional power, x to the minus p, but we have to be somewhat careful with this. So let's look at it a little more closely. If I were to substitute this ansatz into the equation, what would happen? What I need to do is take the first and the second derivatives. Now if I take the first derivative, it either acts on the exponential factor, giving a factor of minus 1 over xi times I back, or it acts on x to the minus p, giving minus p x to the minus p minus 1, which differs from the original solution by a factor of minus p over x. OK? If I now take two derivatives, I can take the second derivative acting on these same two factors, which gives me I back multiplied by the square of that combination.
So I will get 1 over xi squared plus 2 p over xi x plus p squared over x squared, times I. But that's not the whole story, because the derivative can also leave the I-like factors aside and act on the p over x itself, which gives an additional p over x squared. So that's the second derivative. So now what I have done is I have evaluated, with this ansatz, the terms that should appear in that equation away from x equals 0, so let's substitute. Everything is proportional to I, so I just divide through by I. From the second derivative I have 1 over xi squared plus 2 p over xi x plus p times p plus 1 over x squared. And then from d minus 1 over x times the first derivative I have minus d minus 1 over xi x minus d minus 1 times p over x squared. What I have on the right hand side, away from the origin, is I over xi squared; dividing by I, that's 1 over xi squared. Now, moving away from x equals 0, I can organize things in powers of 1 over x. The most important term is the constant, and clearly you can see that I chose the decay constant of the exponential correctly, as evidenced by the cancellation of the 1 over xi squared terms on the two sides. But now I have two types of terms left: terms that are proportional to 1 over x squared and terms that are proportional to 1 over xi x, and there's no way that I can simultaneously satisfy both of these. So the assumption that the solution of this equation is a single exponential divided by a power law is in fact not correct. But it can be correct in two regimes. For x that is much less than xi, the more important term is the 1 over x squared part-- as x goes towards 0, 1 over x squared is more important than 1 over xi x. So then what I do is match those terms, and the terms that go as 1 over x squared tell me that p times p plus 1 should equal p times d minus 1. OK?
And that, immediately getting rid of one factor of p, tells me that p in this regime is d minus 2. OK? Now d minus 2, you recall, is what we had for the Coulomb potential. Right? So basically at short distances you are still not screened by this additional term-- you don't see its effect, and you get essentially the standard Coulomb potential-- whereas if you are far away, you have to match the terms that are proportional to 1 over xi x, because they are more important than 1 over x squared. And there you get that 2p should be d minus 1, or p should be d minus 1 over 2. OK? So let's just plot that function. If I plot this function as a function of the separation x-- and it only depends on the magnitude; in fact, what I should plot is minus I d, because it's minus I d over K that gives the fluctuations-- I find that it has two regimes. Let's say above two dimensions, you have one regime that is a simple Coulomb type of potential, which last time we actually normalized properly: it is x to the 2 minus d divided by d minus 2 times S d. The e to the minus x over xi I can in fact ignore in this regime, because I'm at distances x much less than xi, so the exponential has not kicked in yet. Whereas when I go to large distances the exponential does kick in, so the dominant behavior is e to the minus x over xi. On top of that, we have a power law, x to the power d minus 1 over 2, in the denominator. Now, those of you who know what the screened Coulomb potential is know that the screened Coulomb potential in three dimensions is 1 over r-- the Coulomb potential-- with an exponential on top. There is no difference in the powers whether you are smaller than this correlation length or larger. You can check here: if I put d equals 3, this becomes 1 over x and this also becomes 1 over x.
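The two matching conditions can be checked mechanically (a small sketch): p(p+1) = p(d-1) gives p = d-2 at short distance, 2p = d-1 gives p = (d-1)/2 at long distance, and the two coincide only in d = 3.

```python
# Matching conditions for the ansatz I ~ exp(-x/xi) / x^p, away from the origin.
d = 3  # spatial dimension (any d > 2 works the same way)

# Short distance (x << xi): the 1/x^2 terms must balance -> p(p+1) = p(d-1).
p_short = d - 2
assert p_short * (p_short + 1) == p_short * (d - 1)

# Long distance (x >> xi): the 1/(xi x) terms must balance -> 2p = d - 1.
p_long = (d - 1) / 2
assert 2 * p_long == d - 1

# The accident of three dimensions: both powers coincide.
print(p_short, p_long)  # 1 and 1.0 for d = 3
```

Rerunning with, say, d = 4 gives p_short = 2 but p_long = 1.5, which is exactly the statement that the 1/r Yukawa form is special to three dimensions.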
So it's just an accident of three dimensions that the screened Coulomb potential is 1 over r with an exponential on top. In general dimensions you have different powers. But having different powers also means that the amplitude over here has to carry dimensions, so that it can be matched to what we have at distances of order xi. And if I try to match the two expressions at x of order xi, what I would put is 1 over S d times d minus 2, multiplied by xi to the 3 minus d over 2. And now you can check that the two expressions have the right dimensions and will match, roughly, at order of xi. OK? So essentially what this says is: if I ask about the nature of the fluctuations in my system, how correlated they are, then up to the scale xi they fall off more or less as if you were at the critical point-- because we said that at the critical point, or when you have Goldstone modes, you have just the Coulomb term. But beyond that, the system knows that you are not exactly sitting at the critical point, and the fluctuations are no longer correlated. So basically there is this length scale, which we also saw when we were looking at the critical opalescence and seeing regions that were moving together. That length scale over which things move together is this parameter xi that we have defined over here. So what we have is-- where do we want to put it? Let's put it here-- a correlation length, which measures the extent to which things are fluctuating together. Although when I say fluctuating together, the correlations are still falling off; they just don't fall off exponentially. They start to fall off exponentially when you are beyond this length scale xi. And we have the formula for xi. So what we find, if I invert that, is that xi l as a function of t is simply the square root of K over t when I am on the t positive side. When I go to the t negative side, it becomes the square root of K over minus 2t. OK?
So this correlation length, as I indicated, diverges close to the transition. We can parametrize such divergences through something like t minus Tc to an exponent nu, potentially different on the two sides of the transition. But this t is simply proportional to the real t minus Tc, so we conclude that nu plus is the same as nu minus-- we just indicate it by nu-- and it should be one-half. The amplitudes themselves depend on all kinds of things; we don't know much about them. But we can see that the amplitude ratio B plus over B minus, if I were to divide those two, is universal: it gives a factor of square root of 2. OK? If I were to plot xi t, on the high temperature side xi t and xi l are of course the same. On the low temperature side, we said that the Goldstone modes have these long range correlations-- they fall off according to the Coulomb potential, with no length scale-- so in some sense the correlation length for the transverse modes is always infinite. OK. Now, actually, in the second lecture what I said was that the fact that a response function such as the susceptibility diverges immediately tells you that there have to be long range correlations, so we had predicted before that xi has to diverge. But we were not sufficiently precise about the way that it does, so let's try to do that. Let's relate the susceptibility to these correlation lengths more precisely. What we said, quite generally, was that the susceptibility-- up to various factors of beta, et cetera, that are not that important-- is related to the integrated magnetization-to-magnetization connected correlation. So basically, what I have to do is to look at m minus its average at x, times m minus its average at some other point, which means that what I'm really looking at is the phi phi averages. OK? Now what we have shown right now is that these averages are significant.
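These formulas can be checked directly (a sketch; K and the magnitude of the reduced temperature are arbitrary): xi plus = sqrt(K/t) above and xi minus = sqrt(K/(2|t|)) below, so xi diverges as |t|^(-1/2), i.e. nu = 1/2, with a universal amplitude ratio of sqrt(2).

```python
import math

K = 2.0       # stiffness (arbitrary)
t_mag = 0.01  # |t - Tc|, small and positive

xi_plus = math.sqrt(K / t_mag)          # t > 0 side
xi_minus = math.sqrt(K / (2 * t_mag))   # t < 0 side

ratio = xi_plus / xi_minus
print(ratio, math.sqrt(2))  # universal amplitude ratio sqrt(2)

# nu = 1/2: halving |t| multiplies xi by sqrt(2).
print(math.sqrt(K / (t_mag / 2)) / xi_plus)
```

Note that the ratio is independent of both K and t_mag, which is what makes it universal even though the amplitudes themselves are not.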
These phi phi correlations are significant only over a distance that is this correlation length, and then they die off. So as far as scaling is concerned, we could basically terminate this integration at xi. And at distances below that, you don't see the effect of the exponential; you just see the Coulomb power law, so you would see fluctuations that decay as x to the 2 minus d. Right? So essentially what you're doing is integrating x to the 2 minus d in d dimensions of space: x to the 2 minus d times x to the d minus 1 dx, the d-dependent powers cancel, a single power of x remains, and integrating up to xi gives you xi squared. So the susceptibility immediately gets related to the square of the correlation length. If you like, you can write it in spherical coordinates, et cetera, but dimensionally it has to work out to be something like this. So now we can-- yes? AUDIENCE: Just to clarify, when you set phi of x and phi of 0, are those both longitudinal or both transverse? PROFESSOR: I wasn't precise. If I'm thinking about chi l, these will both be longitudinal. OK? And then we have this expression. If I'm talking about chi t and I'm above the transition temperature, there's no problem. If I'm below the transition temperature, I can use the same thing but have to set xi to infinity, so I have to integrate all the way to infinity. OK? But now you can see that the divergence of the susceptibility is very much related to the divergence of correlations-- in fact very precisely, in that if the susceptibility goes like t to the minus gamma and the correlation length diverges as t to the minus nu, then gamma should be 2 nu. And indeed, our nu is one-half, and we had seen previously that gamma was 1. Secondly, the amplitude ratio for the susceptibility should be the square of the amplitude ratio for the correlation length, and again this is something that we have seen before: the amplitude ratio for the susceptibility was 2.
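The scaling argument above can be checked by doing the radial integral numerically (a sketch; the dimension and cutoff are illustrative): integrating the Coulomb-like correlation x^(2-d) against the shell area S_d x^(d-1) out to x = xi gives S_d xi^2 / 2, independent of d.

```python
import math
import numpy as np

d, xi = 3, 5.0
S_d = 2 * math.pi**(d / 2) / math.gamma(d / 2)  # surface of the unit sphere in d dims

# Integrand: correlation x^(2-d) times the shell area S_d x^(d-1).
x = np.linspace(1e-6, xi, 200001)
integrand = S_d * x**(2 - d) * x**(d - 1)  # = S_d * x, whatever d is

# Trapezoidal rule, truncating the correlations at x = xi.
chi = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(x))

print(chi, S_d * xi**2 / 2)  # chi scales as xi^2
```

Changing d leaves the answer untouched because the powers of x cancel, which is precisely why gamma = 2 nu comes out dimension-independent at this level.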
Now it turns out that all of this is again within this saddle point approximation, looking at the most probable state, et cetera. Because what we find in reality is that at the critical point, the correlations don't decay simply according to the Coulomb law; there is this additional eta, which is the same eta that we had over here. OK? And because of that eta, here what you would have is 2 minus eta, so gamma equals 2 minus eta times nu, and you get an example of a number of things that we will see a lot later on. That is, even if you don't know what the exponents are, you know that there are relationships among the exponents. This is an example of an exponent identity, called the Fisher exponent identity, and there are several of these exponent identities. OK? But that also brings us to the following: we did all of this work and we came up with answers for the singular behaviors at critical points and why they are universal. And actually, as far as the thermodynamic quantities were concerned, all we ended up doing was to write some expression that was analytical and then find its minimum. And we found that the minimum of an analytical expression always has the same type of singularities, which we can characterize by these exponents. So maybe it's now a good time to check how these match with experiment. So let's look at the various types of phase transition, an example of a material that undergoes that phase transition, and what the exponents alpha, beta, gamma, and nu are that are experimentally obtained. AUDIENCE: What is this again? PROFESSOR: The material that undergoes a transition. So for example, when we are talking about the ferromagnet to paramagnet transition, you could look at a material such as iron or nickel, and if we ask, in the context of this systematics that we were developing for the Landau-Ginzburg model, what they correspond to, they are things that have three-component fields, so they correspond to n equals 3.
Of course everything that I will be talking about in this column will correspond to 3-dimensional systems. Later we'll talk also about 2-dimensional and other systems, but let's stick with the real 3-dimensional world. So that would be one set. We will look at superfluidity, let's say in helium, which we discussed last semester; that corresponds to n equals 2. We will talk about various examples of the liquid gas transition, which corresponds to a scalar density difference. And this could be anything from, say, carbon dioxide, neon, argon, whatever gas we like. And we will also talk about superconductors, which to all intents and purposes should have the same type of symmetry as superfluids. An example of a quantum system, it should be n equals 2, and again there are lots of different cases such as aluminum, copper, whatever. So what do we find for the exponents? Actually, for the ferromagnetic system the heat capacity does not diverge. It has a discontinuous derivative at the transition, and kind of goes in a manner such that if you take its derivative, then the derivative appears to be singular, and it corresponds to an alpha that, if you try to fit it to a power law, is slightly negative. The superfluid has this famous lambda shape for its heat capacity, and a lambda shape is very well fitted to a logarithm type of function. The logarithm is the limit of a power law as the exponent goes to 0, so we can more or less indicate that by an alpha of 0, or really a divergent log. The liquid gas transition does have a weakly divergent heat capacity, so the alpha is around 0.1. The values of beta are all less than one-half; for the ferromagnetic system it is of the order of 0.4. It is almost one-third, slightly less, for superfluid helium, and less for the liquid gas system. Gamma is something like 1.4. We don't have a gamma for the superfluid; you can't put a magnetic field on the superfluid. There's nothing that is conjugate to the quantum phase. Here it is more like 1.3. And then the nus-- OK.
So what I have here: the gammas are more like 1.3 and 1.24, and the nus are 0.7, 0.67, and 0.63. OK? Now these are different from the predictions that we had. The predictions that we had were: alpha equals 0, a discontinuity; beta equals one-half; gamma equals 1; nu equals one-half. And actually, these predictions that we just made happen to match extremely well with all kinds of superconducting systems that you look at. So again, it is important to state that within a particular class, like liquid gas, you can look at a lot of different systems. We saw that curve in the second lecture. They all correspond to this same set of exponents, and similarly for the different magnets and so forth. So there is something that is universal, but our Landau-Ginzburg approach, with this looking at the most probable state and fluctuations around it, has not captured it for most cases, but for some reason has captured it for the case of superconductors. So we have that puzzle, and starting from next lecture we'll start to unravel it.
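The table just described can be collected into a small script to make the comparison with the saddle-point predictions explicit. The numbers are the approximate values quoted in the lecture; the pairing of the gamma and nu values with the rows is our reading of the list, so treat it as illustrative.

```python
# Approximate experimental exponents as quoted in the lecture, versus the
# saddle-point (Landau) predictions.  Row-to-value pairing is an assumption.
experiments = {
    # transition: (n, alpha, beta, gamma, nu)
    "ferromagnet (Fe, Ni)":  (3, -0.1, 0.40, 1.40, 0.70),
    "superfluid (He-4)":     (2,  0.0, 0.33, None, 0.67),
    "liquid-gas (CO2, Ne)":  (1,  0.1, 0.33, 1.24, 0.63),
}
landau = {"alpha": 0.0, "beta": 0.5, "gamma": 1.0, "nu": 0.5}

for name, (n, a, b, g, nu) in experiments.items():
    print(f"{name:22s} n={n} alpha={a} beta={b} gamma={g} nu={nu}")
    # every measured beta sits below the Landau value of one-half
    assert b < landau["beta"]
    # the identity gamma = 2*nu holds roughly, since eta is small
    if g is not None:
        assert abs(g - 2 * nu) < 0.1
```

Note that the scaling relation gamma approximately equal to 2 nu survives even though the individual exponents disagree with Landau theory.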
MIT 8.334 Statistical Mechanics II, Spring 2014. Lecture 17: Series Expansions, Part 3.

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. Let's start. So we are back to calculating partition functions for Ising models. So again, some general hypercubic lattice where you have variables sigma i being minus or plus 1 at each site, with a tendency to be parallel. And our task is to calculate the partition function, which for N sites amounts to summing over 2 to the N binary configurations of a weight that favors near neighbors to be in the same state, with a strength K. OK. So the procedure that we are going to follow is this high temperature expansion, that is, writing this as hyperbolic cosine of K times 1 plus t sigma i sigma j, with t standing for tanh K. And then, as we discussed, this I would actually have to write as a product over all bonds. And then essentially, to each bond on this lattice I would have to either assign 1 or t sigma i sigma j. The binary choice now moves to bonds. And then we saw that, in order to make sure that these factors survive following the summation over sigma, we have to construct graphs where out of each site go an even number of bonds. And we rewrote the whole thing as 2 to the N, cosh K to the number of bonds, which for a hypercubic lattice is dN. And then we had a sum over graphs where we had an even number of bonds per site. And the contribution of a graph was basically t to the power of the number of bonds. So I'm going to call this sum over here S. After all, the interesting part of the problem is captured in this factor S; these factors of 2 to the N cosh K to the power dN are perfectly well behaved analytical functions. We are looking for something that is singular.
So for this S, from each of the factors I can either pick the 1 or the t term. To the lowest order in t, I would pick 1 from all of them. And then we saw that the next order correction would be something like a square, which would be this t to the 4th. And then objects that are more complicated versions. And I can write all of those as a sum over the kinds of graphs that I can draw on the lattice by going around and making a loop. But then I will have graphs that will be composed of two loops, for example, that are disconnected. Since this can be translated all over the place, this would have a factor of n for a lattice that is large, forgetting edge effects. This would have a contribution, once I slide these two with respect to each other, that is of the order of n squared. And then I would have things that would have three loops, and so forth. Now, based on what we have seen before, it is very tempting to somehow exponentiate this sum and write it as the exponential of the sum over objects that have a single loop. And I will call this new sum actually S prime, for reasons to become apparent, because if I start to expand this exponential, what do I get? I will certainly start to get 1; then I will have the sum over single loop graphs; plus one half of whatever is in the exponent over here, squared; 1 over 3 factorial of whatever is in the exponent, which is this sum, cubed; and so forth. And if I were to expand this thing that is something squared, I will certainly get terms in it that would correspond to two of the terms in the sum. Here I would get things that would correspond to three of the terms in the sum. And the combinatorial factors would certainly work out to get rid of the 2 and the 6. Let's say I have loops a, b, and c: I could pick a from the first sum here, b from the second, c from the third, or any permutation thereof, and those permutations cancel the 3 factorial, so I would have a factor of abc.
But S definitely is not equal to S prime, because immediately we see that once I square this term, a plus b squared, in addition to ab I will also get a squared and b squared, which correspond to objects such as this: one half of the same graph repeated twice. Right? It is the a squared term that is appearing in this sum. And this clearly has no analog here in my sum S. I emphasize that in the sum S each bond can occur either zero or one time, whereas if I exponentiate this, you can see that S prime certainly contains things where bonds are potentially repeated twice. As I go further, I could have multiple repetitions. Also, when I square this term, I could potentially get one factor from the first bracket and another factor, which is also a loop, from the second bracket in this product of two brackets, which once I multiply them happen to overlap and share something. OK? All right. So very roughly and hand wavingly, the thing is that S has loops that avoid each other, that don't intersect, whereas S prime has loops that intersect. So very naively, S is a sum over, if you like, a gas of non-intersecting loops, whereas S prime is a sum over a gas of what I could call phantom loops. They're kind of like ghosts. They can go through each other. OK? All right. So that's one of a number of problems. Now, in this lecture we are going to ignore this difference between S and S prime and calculate S prime, and therefore we will not be doing the right job. And this is one example of where I'm not doing the right job. And as I go through the calculation, you may want to figure out also other places where I don't correctly reproduce the sum S, in order to ultimately be able to make a calculation and see what it comes to. Now, next lecture we will try to correct all of those errors, so you'd better keep track of all of those errors, and make sure that I am doing everything right. Sounds good? All right. So let's go and take a look at this S prime.
So S prime-- actually, log of S prime-- is a sum over graphs that I can draw on the lattice. And since I could exponentiate, there was no problem in the exponentiation involved over here; essentially, for calculating the log, I just have to calculate these singly connected loops. And you can see that that will work out fine as far as extensivity is concerned, because I could translate each one of these loop figures over the lattice and get the factors of n. So this is clearly something that is OK. Now, since I'm already making this mistake of forgetting about the intersections between loops, I'm going to make another assumption, which is that in this sum over single loops I already include things where the single loop is allowed to intersect itself. For example, I'm going to allow as a single loop entity something that is like this, where this particular bond will give a factor of t squared. Clearly I should not include this in the correct sum. But since I'm ignoring intersections among the different loops and making them phantom, let's make each loop also phantom with respect to itself, and allow it to intersect itself as well. It's in the same spirit as the mistake that we are going to make. So now what do we have? We can write that log of S prime is this sum over loops of length l, multiplied by t to the l. So basically I say that I can draw loops, let's say of size four, loops of size six. Each one of them I have to multiply by t to the l. Of course, I also have to multiply by the number of loops of length l. OK? And this I'm going to write slightly differently. So I'm going to say that log of S prime is a sum over this length of the loop. All the loops of length l are going to contribute a factor of t to the l. And I'm going to count the loops of length l that start and end at the origin. And I'll give that the symbol W sub l of 0, 0. Actually, very soon I will introduce a generalization of this. So let me write the definition.
I define a matrix that is indexed by two sites of the lattice and counts the number of walks that I can have from one to the other, from i to j, in l steps. So this is the number of walks of length l from i to j. OK? So what am I doing here? Since I am looking at single loop objects, I want to sum over, let's say, all terms that contribute, in this case, t to the 4. It's obvious because it's really just one shape. It's a square, but this square I could have started anywhere on the lattice. And this factor of n, which captures the extensivity, I'll take outside, because I expect this log of S to be extensive. It should be proportional to n. So one part of it is essentially where I start to draw this loop. So I say that I always start the loops that I have at point zero. Then I want to come back to myself, so I indicate that the end point should also be zero. And if I want to get a term here, this is a term that is t to the fourth, I need to know how many of such blocks I have. Yes? AUDIENCE: Are you allowing the loop to intersect itself in this case or not? PROFESSOR: In this case, yes. Whenever I'm calculating anything to do with S prime, I allow intersection. So if you are asking whether I'm allowing something like this, the answer is yes. AUDIENCE: OK. PROFESSOR: Yeah. AUDIENCE: And are we assuming an infinitely large system so that-- PROFESSOR: Yes. That's right. So that the edge effects, you don't have to worry about. Or alternatively, you can imagine that you have periodic boundary conditions. And with periodic boundary conditions we can still slide it all over the place. OK? But clearly then the maximal size of these loops, et cetera, will potentially be determined by the size of the lattice. Now, this is not entirely correct because there is an over counting. This one square that I have drawn over here, I could have started from this point, or this point, or this point.
And essentially, for something that has length l, I would have had l possible starting points. So in order to avoid the over counting, I have to divide by l. And in fact, I could have started walking along this direction, or alternatively I could have gone in the clockwise direction. So there are two orientations to the walk that will take me from the origin back to the origin in l steps. And not to do the over counting, I have to divide by 2l. Yes? AUDIENCE: If we allow walking over ourselves, is it always a degeneracy of 2l? PROFESSOR: Yes. You can go and do the calculation to convince yourself that even for something as convoluted as that, the degeneracy is 2l. OK. So this is what we want to calculate. Well, it turns out that this entity actually shows up somewhere else also. So let me tell you why I wanted to write a more general thing. Another quantity that I can try to calculate is the spin-spin correlation. I can pick spin zero here and, say, spin r here, some other location. And I want to calculate what is the correlation between these two spins. OK? So how do I do that for the Ising model? I have to essentially sum over all configurations with an additional factor of sigma zero sigma r in this weight, e to the K sum over ij of sigma i sigma j, appropriately normalized, of course, by the partition function. And I can make the same transformation that I have on the first line of these exponential factors, to write this as a sum over sigma i of sigma zero sigma r times a product over all bonds of these factors of 1 plus t sigma i sigma j. The factors of 2 to the N cosh K will cancel out between the numerator and the denominator. And basically I will get the same thing. Now of course the denominator is the partition function. It is the sum S that we are after, but we can also, and we've seen already how to do this, express the sum in the numerator graphically.
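The counting just discussed can be checked by brute-force enumeration on the square lattice. This is a minimal sketch (the helper walk_counts is ours); self-intersections and immediate backtracking are allowed, which is exactly the phantom assumption.

```python
from collections import defaultdict

def walk_counts(l):
    """Counts of phantom walks of length l on the square lattice, from
    the origin to each possible end point (self-intersections and
    backtracking over the same bond are allowed)."""
    counts = {(0, 0): 1}
    for _ in range(l):
        nxt = defaultdict(int)
        for (x, y), c in counts.items():
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt[(x + dx, y + dy)] += c
        counts = dict(nxt)
    return counts

print(walk_counts(2)[(0, 0)])  # 4 closed walks of length 2
print(walk_counts(4)[(0, 0)])  # 36 closed walks of length 4
# Only 8 of the 36 trace the elementary square (one square shape,
# with 2l = 8 starting points and orientations); the rest backtrack
# over bonds, which the phantom approximation wrongly keeps.
```

This makes the error of the phantom assumption visible already at l = 4: the correct sum S would keep only the 8 walks tracing the square.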
And the difference between the numerator and the denominator is that I have an additional sigma sitting here and an additional sigma sitting there, which, if left by themselves, will average out to 0. So I need to connect them by paths that are composed of factors of t sigma sigma, originating on one and ending on the other. Right? So in the same sense that what is appearing in the denominator is a sum that involves these loops, the first term that appears in the numerator is a path that connects zero to r through some combination of these factors of t, and then I have to sum over all possible ways of doing that. But then I could certainly have graphs that involve the same thing and a loop-- there is nothing that is against that-- or the same thing and two loops, and so forth. And you can see that as long as, and only as long as, I treat these as phantom objects that can pass through each other, I can factor out this term, and the rest-- 1 plus one loop plus two loops-- is exactly what I have in the denominator. So we see that under the assumption of phantomness, if phantom, then this becomes really just the sum over all paths that go from 0 to r. And of course the contribution of each path is however many factors of t I have. Right? So I have to have a sum over the length of this path, l. Paths of length l will contribute a factor of t to the l. But there are potentially multiple ways to go from 0 to r in l steps. How many ways? That's precisely what I called W sub l of 0, r. Yes? AUDIENCE: Why does a graph that goes from 0 to r in three different ways have [INAUDIBLE]? PROFESSOR: OK. So you want to go from 0 to r, you want to have a single path, and then you want that path to do something like this? AUDIENCE: Yeah. That doesn't [INAUDIBLE]. PROFESSOR: That's fine. If I ignore the phantomness condition, this is the same as this multiplied by this, which is a term that appears in the denominator and cancels out.
AUDIENCE: But you're assuming that you have the phantom condition. So this is completely normal. It doesn't matter. PROFESSOR: I'm not sure I understand your question. You say that even without the phantom condition this graph exists. AUDIENCE: With the phantom condition-- PROFESSOR: Yes. AUDIENCE: --this graph is perfectly normal. PROFESSOR: Even without the phantom condition this is an acceptable graph. Yeah. OK. Yeah? AUDIENCE: So what does phantomness mean? Why then can we simplify only [INAUDIBLE]? PROFESSOR: OK. Because let's say I were to take this as a check and multiply it by the denominator. The question is, would I generate the series that is in the numerator? OK? So if I take this object that I have said is the answer, I have to multiply it by this object and make sure that it correctly reproduces the numerator. The question is, when does it? I mean, certainly when I multiply this by this, I will get the possibility of having a graph such as this. And from here, I can have a loop such as this. And the two of them would share a bond such as that. So in the real Ising model, that is not allowed. So that's the phantomness condition that allows me to factor these things. OK? All right. So we see that if I have this quantity that I have written in red, then I can calculate both correlation functions as well as the free energy, log of the partition function, within this phantomness assumption. So the question is, can I calculate that? And the answer is that calculating the number of random walks is one of the most basic things that one does in statistical physics, and it is easily accomplished as follows. Basically, I say, OK, let's say that I start from 0-- actually, let's do it with 0 and r-- and let's say that I have looked at all possible paths that have l steps and end up over here. So this is step one, step two, step number three. And the last one, step l, I have purposely drawn as a dotted line.
Maybe I will pull this point further down to emphasize that this is the last one, and this, l minus 1, is the previous one. So I can certainly state that the number of walks from 0 to r in l steps-- well, any walk that got to r in l steps had to be somewhere at the l minus first step. OK? So what I do is I take the number of walks from 0 to r prime-- I'll call this point r prime-- in l minus 1 steps, times the number of ways, or number of walks, from r prime to r in one step. So before I reach my destination, at the previous step I had to have been somewhere; I sum over all the various places where that somewhere could be, and then I have to make sure that I can reach my destination from that somewhere in one step. That's all that sum is, OK? Now I can convert that to mathematics. This quantity is: start from 0, take l steps, arrive at r. By definition, that's the number. And what it says is that this should be a sum over r prime of: start from 0, take l minus 1 steps, arrive at r prime; then start from r prime, take one step, arrive at your destination r. OK? Now, these are n by n matrices that are labeled by l. Right? So, these being n by n matrices, this summation over r prime is clearly a matrix multiplication. So what that says is that summing over r prime gives the matrix product of W 1 and W of l minus 1. And that is true for any pair of elements, any starting and final points. So basically, quite generically, we see that W l, the matrix that corresponds to the count for l steps, is obtained from the matrix that corresponds to the count for one step multiplying that of l minus 1 steps. And clearly I can keep going. W of l minus 1 I can write as W 1 times W of l minus 2, and so forth. And ultimately the answer is none other than the entity that corresponds to one step raised to the l-th power. And just to make things easier on my writing, I will indicate this as T to the l, where capital T stands for this matrix for one step. OK?
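The statement that W of l equals the one-step matrix raised to the l-th power can be sketched by building that matrix explicitly on a small periodic lattice. The lattice size and helper names are our choices; L = 9 is large enough that length-4 walks cannot wrap around, so the answer matches the infinite lattice.

```python
def build_T(L):
    """One-step connectivity matrix of an L x L square lattice with
    periodic boundary conditions; sites are indexed i = x + L*y."""
    N = L * L
    T = [[0] * N for _ in range(N)]
    for x in range(L):
        for y in range(L):
            i = x + L * y
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                j = (x + dx) % L + L * ((y + dy) % L)
                T[i][j] = 1
    return T

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

L = 9                 # large enough that length-4 walks cannot wrap
T = build_T(L)
W = T
for _ in range(3):    # W = T^4
    W = matmul(W, T)
print(W[0][0])        # closed walks of length 4 from the origin: 36
```

The diagonal element of the matrix power reproduces the direct enumeration, which is the content of the Markovian recursion.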
This condition over here, that I said in words, which allows me to write this in this nice matrix form, is called the Markovian condition. The kinds of walks that I have been telling you about are Markovian in the sense that they only depend on where you came from at the last step. They don't have memory of where you had been before. And that's what enables us to do this. And that's why I had to make the phantom condition, because if I really wanted to say that something like this has to be excluded, then the walk must keep memory of every place that it had been before. Right? And then it would be non-Markovian, and I wouldn't have been able to do this nice calculation. That's why I had to make this phantomness assumption, so that I forget the memory of where my walk was previously. OK? Now... Yeah? Question? No? So this matrix, of where you can go in one step, is really the matrix of who is connected to whom. Right? So this tells you the connectivity. So for example, if I'm dealing with a 2D square lattice, the sites on my lattice are labeled by x and y. And I can ask, where can I go in one step if I start from x and y? And the answer is that either x stays the same and y shifts by one, or y stays the same and x shifts by one. These are the four nonzero elements of the matrix that allows you to go on the square lattice either up, down, right, or left. And there are corresponding things that you would have for the cubic or whatever lattice. OK? Now, you look at this and you can see that what I have imposed is clearly such that, for a lattice where every site looks equivalent to every other site, this is really a function only of the separation between the two points. It's an n by n matrix. It has n squared elements, but the elements really are essentially one column that gets shifted in a very specific way as you go further and further down. And whenever you have a matrix such as this, translational symmetry implies that you can diagonalize it by Fourier transformation.
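That diagonalization claim can be spot-checked numerically: acting with the connectivity matrix on a plane wave with an allowed wavevector should just multiply it by a number. A minimal sketch, with the lattice size and wavevector chosen arbitrarily (the helper T_apply is ours):

```python
import cmath, math

L = 8
N = L * L

def T_apply(v):
    """Apply the square-lattice connectivity matrix to a vector
    indexed by i = x + L*y, with periodic boundaries."""
    out = [0j] * N
    for x in range(L):
        for y in range(L):
            i = x + L * y
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                j = (x + dx) % L + L * ((y + dy) % L)
                out[i] += v[j]
    return out

# an allowed wavevector q = 2 pi (n1, n2) / L
qx, qy = 2 * math.pi * 1 / L, 2 * math.pi * 3 / L
v = [cmath.exp(1j * (qx * x + qy * y)) / math.sqrt(N)
     for y in range(L) for x in range(L)]
Tv = T_apply(v)
lam = 2 * (math.cos(qx) + math.cos(qy))   # predicted eigenvalue T(q)
err = max(abs(Tv[i] - lam * v[i]) for i in range(N))
print(lam, err)  # err is at machine precision
```

The eigenvalue 2 times (cos qx plus cos qy) is exactly the expression derived in the next passage.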
And what do I mean by that? I can define a vector q such that its various components are things like e to the i q dot r, in whatever dimension, normalized by square root of n. And I claim that that is an eigenvector. So basically, my statement is that if I take the matrix T and act on q, then I should get some eigenvalue times the vector back. And let's check that for the case of the 2D system. So for the 2D system, if I write x y, T, qx qy, what is it? Well, that is x y, T, x prime y prime-- the entity that I have calculated here-- times x prime y prime, qx qy. And of course, I have a sum over x prime and y prime. That's the matrix product. And again, remember, this entity is simply e to the i qx x prime plus qy y prime, divided by square root of n. And because this is a set of delta functions, what does it do? It basically sets x prime either to x plus 1 or x minus 1 with y prime equal to y, or y prime to y plus 1 or y minus 1 with x prime equal to x. You can see that you always get back your e to the i qx x plus qy y over square root of n. So essentially, the delta functions just change the x primes to x at the cost of the different shifts that you have to do over there, which means that you will get a factor of e to the i qx and e to the minus i qx with qy not changing, or e to the i qy and e to the minus i qy with the x component not changing. So this is the standard thing that you have seen: it is none other than 2 times cosine of qx plus cosine of qy. And so we can see that, quite generally, on the d-dimensional hypercubic lattice, my T of q is going to be twice the sum over all d components of these factors of cosine of q alpha. And that's about it, OK? So why did I bother with this diagonalization? The answer is that it now allows me to calculate everything that I want. So, for example, I know that this quantity that I'm interested in, sigma 0 sigma r, is going to be a sum over l of t to the l times W l of 0, r, and W l is T to the l. Right? Now, this small t I can take inside here, and do it like this.
And if I want, I can write this as the 0, r component of a sum over l of t T raised to the power of l. So it's a new matrix, which is essentially the sum over l of small t times this connectivity matrix, to the l-th power. This is a geometric series. We can immediately do the geometric series. The answer is the 0, r element of 1 over 1 minus t T. OK? And the reason I did this diagonalization is so that I can calculate this matrix element, because I don't really want to invert a whole matrix, but I can certainly invert the matrix when it is in the diagonal basis, because all I have to do is to invert pure numbers. So what is done is to rotate to the Fourier basis and calculate this there. It is diagonal in this basis: I have 0, q, then the diagonal element, then q, r. And so what is that? These are these exponentials here: evaluated at the origin, this is just 1 over root n, and this is e to the i q dot r over root n. And this is just the eigenvalue that I have calculated over here. So this entity is none other than a sum over q of e to the i q dot r, divided by n-- two factors of square root of n-- times 1 over 1 minus 2t sum over alpha cosine of q alpha. And then, of course, I'm interested in big systems, so I replace the sum over q with an integral over q with the 2 pi to the d density of states. In going from there to there, there's a factor of volume; the way that I have set the unit of length in my system, the volume is actually the number of sites that I have. So that factor of n disappears. And all I need to do is evaluate this factor of 1 over 1 minus 2t sum over alpha cosine of q alpha, integrated over q, Fourier transformed. OK? Yes? AUDIENCE: So I notice that you have a sum over q, but then you also have a sum over alpha of q alpha. PROFESSOR: Right. AUDIENCE: Is there a relationship between the q and the q alpha or not? PROFESSOR: OK. So that goes back here. So when I had two dimensions, I had qx and qy. Right? And so I labeled them, rather than x and y, with q1 and q2.
So the index alpha just labels the spatial dimensions. If you like, this is also dq1, dq2, up to dqd. AUDIENCE: OK. [INAUDIBLE] PROFESSOR: OK. All right. So we are down here. Let's proceed. So what is going to happen? Suppose I'm picking two sites, 0 and r, let's say both along the x direction, some particular distance apart-- let's say seven or eight apart. So in order to evaluate this, I would have an integral, if this is the x direction, of something like e to the i qx times r. Now, when I integrate over qx, the integral of e to the i qx r would average to 0. The only way that it won't go to 0 is if, from the expansion of what is in the denominator, I bring in enough factors of e to the minus i qx, which certainly exist in these cosine factors, to get rid of that. So essentially, the mathematical procedure over here is to bring in sufficient factors of e to the minus i q dot r to eliminate that. And the number of ways that you can do that is precisely another way of capturing this entity, which means that, clearly, if I'm looking at something like this in the limit where t is very, very small, so that the lowest order in t contributes, the lowest order in t would be the shortest path that joins these two points. So it is like connecting these two points with a string that is very tight. So what I am saying is that the limit as t goes to 0 of something like sigma 0 sigma r is going to be identical to t to the minimum distance between 0 and r. Actually, I should say proportional-- that is in fact more correct, because there could be multiple shortest paths that go between two points. OK? Now, let's make sense of this. There's a kind of exponential decay here. Essentially, I start in the high temperature limit, where the two spins don't know anything about each other, so sigma 0 sigma r is going to be 0. So anything that is beyond 0 has to come from somewhere in which the information about the state of this site was conveyed all the way over here.
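Both the geometric-series formula and the small-t shortest-path behavior can be checked on a periodic lattice. A sketch assuming d = 2 (function names are ours): corr_fourier evaluates the 1 over 1 minus t T(q) expression in the Fourier basis, and corr_series sums walks directly.

```python
import math
from collections import defaultdict

L = 16  # periodic lattice, standing in for the infinite one

def lam(qx, qy):
    """Eigenvalue T(q) = 2(cos qx + cos qy) of the connectivity matrix."""
    return 2 * (math.cos(qx) + math.cos(qy))

def corr_fourier(t, rx, ry):
    """(1/N) sum_q e^{iq.r} / (1 - t T(q)): the geometric-series result."""
    s = 0.0
    for n1 in range(L):
        for n2 in range(L):
            qx, qy = 2 * math.pi * n1 / L, 2 * math.pi * n2 / L
            s += math.cos(qx * rx + qy * ry) / (1 - t * lam(qx, qy))
    return s / (L * L)

def corr_series(t, rx, ry, lmax=80):
    """Same quantity from the walk sum over l of t^l W_l(0, r)."""
    counts = {(0, 0): 1.0}
    s = 1.0 if (rx % L, ry % L) == (0, 0) else 0.0
    for l in range(1, lmax + 1):
        nxt = defaultdict(float)
        for (x, y), c in counts.items():
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt[((x + dx) % L, (y + dy) % L)] += c
        counts = dict(nxt)
        s += t ** l * counts.get((rx % L, ry % L), 0.0)
    return s

print(corr_fourier(0.1, 3, 0), corr_series(0.1, 3, 0))  # agree closely
# at very small t, the sum is dominated by t^(shortest distance):
print(corr_series(0.001, 3, 0) / 0.001 ** 3)  # close to 1: one shortest path
```

For two sites three steps apart along an axis, there is a single shortest path, so the leading coefficient of t cubed is 1, as the last line shows.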
And it is done through passing one bond at a time. And in some sense, the fidelity of each one of those transfers is proportional to t. Now, as t becomes larger, you are going to be willing to pay the cost of paths that go from 0 to r in a slightly more disordered way. So your string that was tight becomes kind of loose and floppy. And why does that become the case? Because although these paths are longer and carry more factors of this t that is small, there are just so many of them that the entropy, the number of these paths, starts to dominate. OK? So very roughly, you can state the competition between them as follows: the contribution of a path of length l decays like t to the l, but the number of paths roughly grows like 2d to the power of l. And since we have this phantomness character, if I am sitting at a particular site, I can go up, down, right, left. So at each step I have, in two dimensions, a choice of four; in three dimensions, a choice of six; in d dimensions, a choice of 2d. So you can see that this is exponentially small and this is exponentially large, so they kind of balance each other. And the balance is something like e to the minus l over some typical l that will contribute. And clearly, the typical l is going to be finite as long as 2dt is less than 1. So you can see that something strange has to happen at the value tc such that 2d tc is equal to 1. At that point, the cost of making your paths longer is more than made up for by the increase in the number of paths that you can have; the entropy starts to dominate. And you can see that precisely that condition tells me whether or not this integral exists, right? Because the point of this integral where the integrand is largest is when q goes to 0. And you can see that as q goes to 0, the value in the denominator is 1 minus 2dt. So there is precisely a pole when this condition takes place.
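The competition between t to the l and the 2d to the l growth of the number of paths can be made concrete in d = 2, where the number of closed walks of length 2n on the square lattice equals the square of the binomial coefficient C(2n, n), a standard combinatorial identity (the helper name W0 is ours):

```python
import math

def W0(l):
    """Closed walks of length l (even) on the square lattice: C(l, l/2)^2."""
    n = l // 2
    return math.comb(l, n) ** 2

for l in (4, 8, 16, 32):
    ratio = W0(l + 2) / W0(l)
    print(l, ratio)  # growth per two steps, approaching (2d)^2 = 16 from below

# Since W0(l) grows essentially like 4^l, the loop sum over t^l W0(l)
# converges only for t < 1/(2d) = 1/4: the 2d tc = 1 condition above.
```

The subexponential correction (the ratio stays below 16) is what makes the series converge right up to, but not at, tc.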
And if I'm interested in seeing what is happening when I'm in the vicinity of that transition, right before these paths become very large, what I can do is I can start exploring what is happening in the vicinity of that pole. So 1 minus 2t sum over alpha cosine of q alpha, how does it behave? Each cosine I can start expanding around its q going to 0 as 1 minus q squared over 2, so you can see that this is going to be 1 minus 2td. And then I would have plus t q squared, because I will have q1 squared plus q2 squared up to qd squared. So this q squared is the sum of all the q's. And then I do have higher order terms, order of q to the 4th, and so forth. OK? So if I'm looking in the vicinity of t going to tc, the coefficient of q squared here is roughly tc. The 1 minus 2td is something that goes to 0, and once I factor out the tc, I can define an inverse length squared in this fashion. You can see that this inverse length squared, xi to the minus 2, is going to be 1 over tc times 1 minus 2dt, which is 1 over tc times 1 minus t over tc. So you can see that this is none other than something proportional to tc minus t. And if I'm looking at the vicinity of that point, I find that the correlation between sigma 0 sigma r is approximately the integral ddq 2 pi to the d of the Fourier transform with the denominator, which I said is approximately 1 over tc times q squared plus xi to the minus 2. We've seen this before. You evaluated this Fourier transform when you were doing Landau Ginzburg. So this is something that goes, when you are looking at distances that are much less than this correlation length, as the Coulomb power law. When you are looking at distances that are much larger than the correlation length, you get the exponential decay with this r to the d minus 1 over 2 factor. So what we find is that the correlation of these phantom loops is precisely the correlation that we had seen for the Gaussian model, in fact. It has a correlation length that diverges in precisely the same way that we had seen for the Gaussian model, with the square root singularity.
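The walk sum behind this correlation function can be evaluated directly as a matrix inverse, since summing t^l times the connectivity matrix to the l-th power gives (1 - tT)^(-1). A sketch on a one-dimensional ring (so tc = 1/2 here; the ring size and fit window are illustrative choices):

```python
import numpy as np

# <sigma_0 sigma_r> in the phantom approximation ~ [(1 - t*T)^(-1)]_{0r},
# evaluated on a 1d ring where T is the nearest-neighbor connectivity matrix.

N = 400
T = np.zeros((N, N))
for i in range(N):
    T[i, (i + 1) % N] = 1.0
    T[i, (i - 1) % N] = 1.0

def correlation(t):
    # row 0 of (1 - t T)^(-1): the sum over all phantom walks from 0 to r
    return np.linalg.inv(np.eye(N) - t * T)[0, :N // 2]

def xi(t):
    # correlation length extracted from the exponential tail C(r) ~ exp(-r/xi)
    C = correlation(t)
    r = np.arange(10, 60)
    slope = np.polyfit(r, np.log(C[r]), 1)[0]
    return -1.0 / slope
```

As t approaches tc = 1/2 from below, the extracted correlation length grows, in line with the divergence described above.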
So this is our usual nu equals 1/2 type of behavior that we've seen. And somehow, by all of these routes, we have reproduced some property of the Gaussian model. In fact, it's a little bit more than that, because we can go back and look at what we had here for the free energy. So let's erase the things that pertain to the correlation length and correlations, and focus on the calculation that we kind of left in the middle over here. So what do we have? We have that log of S prime, the intensive part, is a sum over the lengths of these loops that start and end at the origin. And the contribution of a loop of length l is small t to the l. And since W of l is the connectivity matrix T to the l-th power, it's really like looking at the matrix element of that entity. And of course, there is this degeneracy factor of 2l. And I can write this as 1/2 sum over l-- well, let's do it this way-- the 0-0 element of sum over l of tT to the l over l. And what is this? This is, in fact, the series expansion of minus log of 1 minus tT. So I can, again, go to the Fourier basis, and write this as minus 1/2 sum over q of 0 q, log of 1 minus t times the eigenvalue T of q, q 0, where each one of these brackets is just a factor of 1 over square root of N. The sum over q goes over to N times the integral over q. So this simply becomes minus 1/2 integral over q 2 pi to the d, log of 1 minus t times this sum over alpha of 2 cosine of q alpha that we had over here. And again, if I go to this limit where I am close to tc, the critical value of this t, and focus on the behavior as q goes to 0, this is going to be something that has this q squared plus xi to the minus 2 type of singularity. And again, this is the kind of integral that we saw in connection with the Gaussian model. And we know the kind of singularities it gives. But why did we end up with the Gaussian model? Let's work backward.
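The resummation used in this step, summing the trace of (tT)^l over l to get minus log of (1 - tT), can be verified on a small matrix. A minimal sketch (the matrix size, seed, and spectral radius are arbitrary choices):

```python
import numpy as np

# Check that sum_l tr[(tT)^l] / l equals -sum over eigenvalues of
# log(1 - lambda), for a symmetric A = tT with spectral radius below 1.

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
A = 0.5 * (A + A.T)                                   # symmetrize
A *= 0.9 / np.max(np.abs(np.linalg.eigvalsh(A)))      # spectral radius 0.9

series = sum(np.trace(np.linalg.matrix_power(A, l)) / l
             for l in range(1, 400))
closed = -np.sum(np.log(1.0 - np.linalg.eigvalsh(A)))
```

The truncation at l = 400 is safe because each term decays like 0.9^l.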
That is, typically, when we are doing some kind of a partition function of a Gaussian model-- let's say we have some integral over some variables phi i. Let's say we put them on the sites of a lattice. And we have e to the minus phi i some matrix m ij phi j over 2, sum over i and j implicit over there. What was the answer? The answer was typically proportional to 1 over the determinant of this matrix to the 1/2, which, if I exponentiated, would be exponential of minus 1/2 logarithm of the determinant of this matrix. So that's the general result. And we see that the result for our log of S prime is, indeed, of the form of minus 1/2 of the log of something. And indeed, this sum over q corresponds to summing over the different eigenvalues. And if I were to express det m in terms of the product of its eigenvalues, it would be precisely that. So you can see that actually, what we have calculated by comparison of these two things corresponds to a matrix m ij, which is delta ij minus t times this single step connectivity matrix that I had before. So indeed, the partition function that I calculated, that I called Z prime or S prime, corresponds to doing the following-- doing an integral over phi i's, where from the delta ij, essentially for each phi i, I would have a factor of e to the minus phi i squared over 2. So essentially, I have to do this. And then from here, once it's exponentiated, I will get a factor of e to the sum over ij of this t phi i phi j. So you can see that I started calculating Ising variables on this lattice. The result that I calculated for these phantom walks is actually identical if I had replaced the Ising variables with just quantities that I integrate all over the place, provided that I weigh them with this factor. So really, the difference between the Ising model and what I have done here can be captured by putting a weight for the unrestricted integration per site.
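The Gaussian-integral fact quoted here, that the integral of exp(-phi M phi / 2) is proportional to det(M) to the minus 1/2, can be brute-force checked in two variables. A sketch (the particular positive-definite M and the grid are arbitrary):

```python
import numpy as np

# Check: integral d^n phi exp(-phi^T M phi / 2) = (2 pi)^(n/2) / sqrt(det M),
# here for n = 2 by direct quadrature on a grid.

M = np.array([[2.0, 0.5],
              [0.5, 1.0]])

x = np.linspace(-8.0, 8.0, 801)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
integrand = np.exp(-0.5 * (M[0, 0] * X**2 + 2 * M[0, 1] * X * Y
                           + M[1, 1] * Y**2))
numeric = integrand.sum() * dx * dx          # Riemann sum over the grid

exact = (2 * np.pi) / np.sqrt(np.linalg.det(M))
```

The grid sum is extremely accurate here because the Gaussian tails are negligible well inside the integration box.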
So if I really want to do Ising, the weight that I want for the Ising-- let's do it this way-- for phi has to have a delta function at minus 1 and a delta function at plus 1. Rather than doing that, I have calculated a w that corresponds to the Gaussian, where the weight for each phi is basically a Gaussian weight. And if I really wanted to do Landau Ginzburg, all I would need to do is to add here a phi to the 4th. The problem with this Gaussian-- the phantom system that I have-- is the same problem that we had with the Gaussian model. It only gives me one side of the phase transition. Because you see that I did all of these calculations, and all of these calculations were consistent as long as I was dealing with t that was less than tc. Once I go to t that is greater than tc, then this denominator that I had becomes negative. It just doesn't make sense. Negative correlations of this form don't make sense. The argument of the log that I have to calculate here, if t is larger than 1 over 2d, doesn't make sense. And of course, the reason the whole theory doesn't make sense is kind of related to the instability that we have in the Gaussian model. Essentially, in the Gaussian model also, when t becomes large enough, this phi squared is not enough to remove the instability that you have for the largest eigenvalue. Physically, what that means is that we started with this taut string. And as we approached the transition, the string became more flexible. And in principle, what this instability is telling me is that you go beyond the transition to t greater than tc, and the string becomes something that can go over and over itself as many times as it likes, and gain entropy further and further. So it will keep going forever. There is nothing to stop it. So the phantomness condition, the cost that you pay for it, is that once you go beyond the transition, you essentially overwhelm yourself. There's just so much that is going on. There is nothing that you can do.
So that's the story. Now, let's try to finally understand some of the things that we had before, like this upper critical dimension of 4. Where did it come from, et cetera? You are now in the position to do things and understand things. First thing to note is, let's try to understand what this exponent nu equal to 1/2 means. So we said that if I think about having information about my site at the origin, that has to propagate so that further and further neighbors start to know what the information was at site sigma 0-- that that information can come through these paths that fluctuate, go different distances, and eventually, let's say, reach a boundary that is at size r. As we said, the contribution of each path decays exponentially, but the number of paths grows exponentially. And so for a particular t that is smaller than the critical value, I can roughly say that this falls off like this, so that there is a characteristic length, l bar. This characteristic l bar is going to be minus 1 over log of 2dt. And 2dt I can write as 2d times tc plus t minus tc. 2d tc is, by construction, 1. So this is minus 1 over log of 1 plus 2d times t minus tc, and since 2d is 1 over tc, that is 1 plus t minus tc over tc. Now, log of 1 plus a small number-- so if my t goes and approaches tc-- this log will behave like its argument. So you can see that this diverges as t minus tc to the minus 1 power. I want it, I guess, to be correct-- tc minus t, because t is less than tc. But the point is that the divergence is linear. As I approach tc, the length of these paths will grow inversely to how close I am. Now what are these paths? I start from the origin, and I randomly take steps. And I've said that the typical paths that I will get will roughly have length l bar. How far have these paths carried the information? These are random walks, so the distance over which they have managed to carry the information, xi, is going to be like the square root of the length of these walks.
And since the length of the walks grows like tc minus t to the minus 1, this goes like tc minus t to the minus 1/2 power. So the exponent nu of 1/2 that we have been thinking about is none other than the 1/2 that you have for random walks, once you realize that what is going on is that the length of the paths that carry information essentially diverges linearly on approaching this point. So that's one understanding. Now, you would say that this is the Gaussian picture. Now I know that when we calculated things to order of epsilon, we found that nu was 1/2 plus something. It became larger. So what does that mean? Well, if you have these paths, and the paths cannot cross each other-- it comes here, it has to go further away, because they are really non-phantom-- then they will swell. So the exponent nu that you expect to get will be larger than 1/2. So that's what's captured in here. Well, how can I really try to capture that more mathematically? Well, I say that in the calculations that I did-- let's say when I was calculating the correlation function sigma 0 sigma r-- in the approximation of phantomness, I included all paths that went from 0 to r. Among those there were paths that were crossing themselves. So I really have to subtract from that a path that comes and crosses itself. So I have to subtract that. I also had this condition that I had the numerator and denominator that cancel each other, which really means that I have to subtract the possibility of my path intersecting with another loop that is over here. And we can try to incorporate these as corrections. But we've already done that, because if I Fourier transform this object, I saw that it is this 1 over q squared plus xi to the minus 2. And then we were calculating these u perturbative corrections, and we had diagrams that kind of looked like this. Oops, I guess I want to first draw the other diagram. And then we had a diagram that was like this.
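The random-walk scaling invoked here, RMS end-to-end distance growing as the square root of the number of steps, is easy to confirm with a small simulation. A sketch (dimension, sample count, lengths, and seed are arbitrary choices):

```python
import numpy as np

# Phantom paths are ordinary random walks on the hypercubic lattice: each
# step is +-1 along one of d axes.  The fitted growth exponent of the RMS
# end-to-end distance should come out close to 1/2.

rng = np.random.default_rng(1)
d, n_walks = 3, 2000

def rms_distance(l):
    axes = rng.integers(0, d, size=(n_walks, l))
    signs = rng.choice([-1.0, 1.0], size=(n_walks, l))
    R2 = np.zeros(n_walks)
    for a in range(d):
        # net displacement along axis a, for every walk at once
        R2 += ((signs * (axes == a)).sum(axis=1)) ** 2
    return np.sqrt(R2.mean())

lengths = np.array([100, 200, 400, 800])
rms = np.array([rms_distance(l) for l in lengths])
exponent = np.polyfit(np.log(lengths), np.log(rms), 1)[0]
```

Since the mean squared displacement of an unconstrained walk is exactly l, the fitted exponent differs from 1/2 only by sampling noise.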
You remember when we were doing these phi to the 4th calculations, the corrections that we had for the propagator, which was related to the two point correlation function, were precisely these diagrams, where we were essentially subtracting factors that were set by u. Of course, the value of u could be anything, and you can see that there is really a one to one correspondence. Any of the diagrams that you had before really captures the picture of one of these paths trying to cross itself that you have to subtract. And you can sort of put a one to one mathematical correspondence between what is going on here. Yeah. AUDIENCE: So why can't we have the path in the first correction you drew? Because aren't we allowed to have four bonds that attach to one site when we're doing the original expansion? PROFESSOR: OK, so I told you at the beginning that you should keep track of all of my mistakes. And that's a very subtle thing. So what you are asking is, in the original Ising model, I can draw perfectly OK a graph such as this that has an intersection such as this. As we will show next time-- so bear with me-- in calculating things while in the phantom condition, this is counted three times as much as it should. So I have to subtract that, because a walk that comes here can either go forward, up, or down. There is some degeneracy there that essentially, this has done an over counting that is important, and I have to correct for when I do things more carefully next time around. Yes. AUDIENCE: When you did the Gaussian model, we never had to put any sort of requirement on the lattice being a square lattice. PROFESSOR: No. AUDIENCE: Didn't we have to do that here when you did those random walks? PROFESSOR: No, I only use the square condition or hypercube condition in order to be able to write this in general dimension. I could very well have done triangular, FCC, or any other lattice. The expression here would have been more complicated. 
So finally, we can also ask-- we have a feel from renormalization group, et cetera, that the Gaussian exponents, like nu equals to 1/2, are, in fact, good provided that you are in sufficiently high dimension-- if you are above four dimensions. Where did you see that occurring in this picture? The answer is as follows. So basically, I have ignored the possibility of intersections. So let's see when that condition is roughly good. The kind of entities that I have as I get closer and closer to tc in the phantom case are these random walks. And we said that the characteristic of a random walk is that if I have something that carries l steps, the typical size in space that it will grow to scales like l to the 1/2. So we can recast this as a dimension. Basically, we are used to saying that objects have a mass that grows-- what do I want to do? Let's say that I have a hypercube of size L. Let's actually call it size R. Then the mass of this, or the number of elements that constitute this object, grows like R to the d. So if I take my random walk, and think of it as something in which every step has unit mass, you would say that l is proportional to mass, so that the radius grows like the number of elements to the 1/2 power, or the mass to the 1/2 power. So you would say that for the random walk, if I want to force it into a relationship between mass and radius, the mass goes like radius squared. So in that sense, you can say that the random walk has a fractal or Hausdorff dimension of 2. So if you kind of are very, very blind, you would say that this random walk is a two dimensional thing. It's like a page. So now the question is, if I have two geometrical entities, will they intersect? So if I have a plane and a line in three dimensions, they will barely intersect. In four dimensions, they won't intersect. If I have two surfaces that are two dimensional, in three dimensions, they intersect in a line. In four dimensions, they would intersect in a point.
And in five dimensions, they won't generically intersect, just as two lines generically don't intersect in three dimensions. So if you ask, how bad is it that I ignored the intersection of objects that are inherently random walks, in sufficiently high dimensions, I would say the answer geometrically is that an intersection is generic if d is less than 2 df, which is 4. So we made a very drastic assumption. But as long as we are above four dimensions, it's OK. There's so much space around that statistically, these intersections-- this non-phantomness-- essentially never happen. You can ignore them, and the results are OK. But you go to four dimensions and below, you can't ignore them, because generically, these things will intersect with each other. That's why these diagrams are going to blow up on you, and give you some important contribution that would swell the walks, and give you a value of nu that is larger than the 1/2 that we have for random walks. So that's the essence of where the Gaussian model was, why we get nu equals to 1/2, why we get nu's that are larger than 1/2, what the meaning of these diagrams is, why four dimensions is special. All of it really just comes down to the central limit theorem and knowing that the sum of a large number of variables has a square root of N type of variance in its fluctuations. And it's all captured by that. But we wanted to really solve the model exactly. It turns out that we can make the conditions that were very hard to implement in general dimensions work out correctly in two dimensions. And so the next lecture will show you what these mistakes are, how to avoid them, and how to get the exact value of this sum in two dimensions.
MIT_8334_Statistical_Mechanics_II_Spring_2014: 20_Continuous_Spins_at_Low_Temperatures_Part_1
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So we are going to switch directions. Rather than thinking about binary variables, these Ising variables, that were discrete, think again about a lattice. But now at each site we put a spin that has unit magnitude, but n components. That is, Si has components 1, 2, to n. And the constraint is that this sum over alpha of Si alpha squared is unity. So clearly-- let's put it here explicitly, alpha from 1 to n-- when I look at the case of n equals to 1, essentially I have one component. S squared has to be 1, so it's either plus or minus 1. We recover the Ising variable. For n equals to 2, it's essentially a unit vector whose angle theta, for example, can change; in three dimensions we would be exploring the surface of the sphere. And we always assume that we have a weight that tends to make our spins parallel. So we use, essentially, the same form as the Ising model. We sum over near neighbors. And the interaction, rather than sigma i sigma j, we put as this Si dot Sj, which is the dot product of two vectors. Let's call the dimensionless interaction in front K0. So when we want to calculate the partition function, we need to integrate over all configurations of these spins with this weight. Now for each site, we have to do n components. But there is a constraint, which is this one. Now I'm going to be focused on the ground state. So when t equals to 0, we expect that spontaneously a particular configuration will be chosen. Everybody will be aligned to that configuration.
Without loss of generality, let's choose the aligned state to point along the last component. That is, all of the Si at t equal to 0 will be of the form 0, 0, except that the last component is pointing along some particular direction. So if it was two components, the y component would always be 1. It would be aligned along the y direction. Yes, question? AUDIENCE: What dimensionality is the lattice? PROFESSOR: It can be anything. So basically we have two parameters, as usual. n is the dimensionality of spin, and d would be the dimensionality of our lattice. In practice, for the calculations that we are going to be doing, we will be focusing on d that is close to 2. Now if we allow fluctuations at finite t, what happens is that the state of the vector is going to change. So this Si at finite temperature would no longer be pointing along the last component. It will start to have fluctuations. Those fluctuations will change the 0's from the ground state to some values I'll call pi 1, the next one pi 2, all the way to pi n minus 1. And since the whole entire thing is a unit vector, the last component has to shrink to adjust for that. So we would indicate the last component by sigma. So essentially this subspace of fluctuations around the ground state is captured through this vector pi that is n minus 1 dimensional. And this corresponds to the transverse modes that we were looking at when we were doing the expansion of the Landau-Ginzburg model around its symmetry broken state. In this case, the longitudinal mode, essentially, is infinitely stiff. You don't have the ability to stretch along the longitudinal mode because of the constraint that we have put over here. So if you think back, we had this wine bottle, or Mexican hat, potential. And the Goldstone modes corresponded to going along the bottom, and how easy it was to climb this Mexican hat was determined by the longitudinal mode. In this case, the Mexican hat has become very, very stiff to climb on the sides.
So you don't have the longitudinal mode. You just have these Goldstone modes. The cost to pay for that is that I have to be very careful in calculating the partition function. If I'm integrating over the n components of some particular spin, I have to make sure that I remember that this sum of all of these components squared is 1. So I have to integrate subject to that constraint. And the way that I have broken things down now, I'm integrating over the n minus 1 components of this vector pi, and this additional direction, d sigma. But I can't do both of them independently, because there's a delta function that enforces that sigma squared plus pi squared equals 1. The pi squared corresponds to the magnitude of this n minus 1 component vector. And essentially, I can solve for this delta function, and really replace this sigma over here with square root of 1 minus pi squared. But I have to be a little bit careful in my integrations. Because this delta function I can write as a delta function of sigma plus or minus square root of 1 minus pi squared. And there is a rule that if I use this delta function to set sigma to be equal to square root of 1 minus pi squared, like I have done over here, I have to be careful that the delta function of a times x is actually a delta function of x divided by the modulus of a. So essentially, I have to substitute something here. So this is, in fact, equal to the integration in the pi directions; because of the use of this delta function to set the value of sigma, I have to divide by the square root of 1 minus pi squared. Which actually shortly we will write in the following way. I guess there's an overall factor of 1/2 but it doesn't really matter. So yes? AUDIENCE: So what do you do with the fact that there are two places where the delta function is satisfied? PROFESSOR: I'm continuously connecting to the solution that starts at 0 temperature with a particular state. So I have removed that ambiguity by the starting point of my expansion.
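The two delta-function branches and the 1/sqrt(1 - pi^2) Jacobian can be checked explicitly in the simplest case, n = 2, where the spin is a point on the unit circle. A sketch (the test function F and the grid sizes are arbitrary choices):

```python
import numpy as np

# For n = 2: integrating F(pi, sigma) uniformly over the unit circle equals
# integrating over pi with weight 1/sqrt(1 - pi^2), summed over the two
# branches sigma = +-sqrt(1 - pi^2).

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

F = lambda pi, sigma: sigma**2 * pi**2 + 0.3 * pi**4 + sigma

# direct integral over the circle, S = (pi, sigma) = (sin th, cos th)
th = np.linspace(0.0, 2.0 * np.pi, 200_001)
circle = trapz(F(np.sin(th), np.cos(th)), th)

# pi-integral with the Jacobian; substituting pi = sin(u) turns
# d pi / sqrt(1 - pi^2) into du and removes the endpoint singularity
u = np.linspace(-np.pi / 2, np.pi / 2, 100_001)
pi_, sig = np.sin(u), np.cos(u)
branches = trapz(F(pi_, sig) + F(pi_, -sig), u)
```

The term linear in sigma cancels between the two branches, exactly as it averages to zero around the circle.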
But if I was integrating over all possibilities, then I should really add that on too, and really just make the partition function the sum of two equivalent terms-- one around this ground state, one around the other state. AUDIENCE: The product is supposed to be for a lattice site-- for the integration variable, not for the-- PROFESSOR: Right. So I did something bad here. So here I should have written-- so this is an n component integration that I have to do on each site. Now let's pick one of the sites, say site i. For that, I have an n component integration to do. What does it say? It basically says that if, for example, I am looking at the case of n equals to 2, then I have started with a state that points along this direction. But now I'm allowing fluctuations pi in this direction. And I can't simply say that the amount of these fluctuations, pi, is going, let's say, from minus infinity to infinity. Because the weight changes depending on whether pi is small or whether it is large when I'm down here. And so there are constraints, let's say, on how big pi can be. Pi cannot be larger than 1. And how much weight a particular magnitude of pi has is captured by this. So that's one thing to remember when we are dealing with integration over unit spins, and we want to look at the fluctuations. The other choice of notation that I would like to make is the following. I said that my starting weight is K0 times the sum over all nearest neighbors of Si dot Sj. Now in the state where all of the spins are pointing in one direction, this factor is unity. So the 0 temperature state gets a factor of 1 here for each bond. Let's say we are in a hypercubic lattice. There are d bonds per site. So at 0 temperature, I would have NdK0, basically-- the value of this ground state. And then if I have fluctuations from that state, I can capture that as follows, as minus K0 over 2-- it's a reduction in this energy-- times the sum over ij of Si minus Sj squared.
And you can check that. If I square these terms, the Si squared and Sj squared each give 1, which, with the minus K0 over 2 in front, reproduces this constant that goes over here. And the cross term, minus 2 Si dot Sj, cancels this minus 1/2, and basically gives you this exactly. So the reason I write it in this fashion is because very shortly, I want to switch from doing things on a lattice to going to a continuum. And you can see that this form, summing over the difference between near neighbors, very nicely can go to a gradient squared. So essentially that's what I want to do. Whenever I have a sum over sites, I want to replace it with an integral over space. And I guess to keep things dimensionless, I have to divide by a to the d. So I can call that the density that I have to include, which is also the same thing as the number of lattice points in the volume of the box. So my minus beta H in the continuum goes over to whatever the contribution of the completely aligned state is. And then whatever the difference of the spins is, because of the small fluctuations, I will capture through an integration of gradient of S squared. And I call the coupling in the continuum K, which is basically the strength of the original interaction K0 times appropriate factors of a. Clearly, in order to get the coupling that I have in the continuum, I have to have this factor of a to the d. But then in the gradient, I also have to divide by distance. So there's something here, a factor of a to the 2 minus d, that relates these two couplings. It should be the other way around. It doesn't matter. AUDIENCE: Question. PROFESSOR: Yes. AUDIENCE: This step is only valid for a cubic-- PROFESSOR: Yes. So if it was something like a triangular lattice, or something, there would be some numerical factors here. AUDIENCE: But I mean like writing the difference of the spins squared as the gradient squared. Like if it were a triangular lattice?
PROFESSOR: Yeah, so the statement is that whatever lattice you have, what I am doing at the level of the lattice is trying to keep things that are close to each other aligned. So when I go to the continuum, how is this captured? It's a term like a gradient squared. Now on the hypercubic lattices, the relationship between what you put on the bonds of the hypercubic lattice and what you get in the continuum is immediately apparent. If you try to do it on the triangular lattice, you still can. And you'll find that at the end of the day, you will get a factor of square root of 3, or something like that. So there's some numerical factor that comes into play. And then at the end of the day, I also want to write this gradient of S squared in terms of components: essentially S has n components, n minus one of them are pi, and one of them is sigma. So this would be minus K/2 integral d dx of the gradient of the pi components squared, plus the gradient of the sigma component squared. So after integrating out sigma using the delta function, and going to the continuum limit, the partition function that we have to evaluate, up to various non-singular factors such as this constant over here, is obtained by integrating over all configurations of our pi field, now regarded as a continuously varying object on the d dimensional lattice. And the weight is as follows: there is a gradient of pi squared, essentially this term over here. There is the gradient of the other component, sigma, which, if I use the delta function, is the gradient of square root of 1 minus pi squared, squared. And then there's this factor from the integration that I have to be careful of, which I can also take to the exponent, and write as, again, this density times the log of 1 minus pi squared. There's, I think, a factor of 1/2. So the weight that I had started with, with S dot S, was kind of very simple-looking. But because of the constraints, it was hiding a number of conditions.
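The rewriting of the nearest-neighbor coupling used a moment ago, that for unit vectors Si dot Sj equals 1 minus |Si minus Sj| squared over 2, is a one-line identity that can be spot-checked on random unit n-vectors (the dimension and sample size below are arbitrary):

```python
import numpy as np

# For unit vectors, Si . Sj = 1 - |Si - Sj|^2 / 2 identically, so the
# nearest-neighbor coupling and the difference form differ by a constant.

rng = np.random.default_rng(2)
n = 5
S = rng.normal(size=(1000, n))
S /= np.linalg.norm(S, axis=1, keepdims=True)   # project onto the unit sphere

Si, Sj = S[:500], S[500:]
dot = (Si * Sj).sum(axis=1)
diff_form = 1.0 - ((Si - Sj) ** 2).sum(axis=1) / 2.0
```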
And if we explicitly look at those conditions and ask what is the weight of the fluctuations that I have to put around the ground state, these Goldstone modes, that is captured with this Hamiltonian. Part of it is the old contribution from Goldstone modes, the transverse modes that we had seen. But now, being more careful, we see that with these Goldstone modes, I have to be careful about integrating over them because of the additional terms that capture, essentially, the original full rotational symmetry that was present in the integration over S. Yes? AUDIENCE: The integration-- the functional integration, pi should be limited to a sphere of radius 1. PROFESSOR: This weight will keep track of that. So I put that constraint over here. And it's not just that it is limited to something, but that a particular value of pi gets this additional weight. So if you like, once I try to take my integrals outside that region, that factor takes care of the weight. So this entity is called the non-linear sigma model. And I never understood why they don't call it a non-linear pi model, because we immediately integrate out sigma. But that's how it is. So what we're going to do-- if we had just the first term, not including any of the other things, we would have had the analysis of Goldstone modes that we had done previously. The effect of these other things, you can see if I start making an expansion in powers of pi, is to generate interactions that are non-linear terms among the pi's. So these Goldstone modes that we were previously dealing with as independent modes of the system are actually non-linearly coupled. And we want to know what the effect of that is on the behavior of the entire system. So whenever we're faced with a non-linear theory, we have to do some kind of a perturbative analysis. And the first thing that you may be tempted to do is to expand in powers of pi, and then look at the Gaussian part, and then the higher order parts, etc. That's a way of doing it.
But there's actually another way that is more consistent, which is to organize the terms in this weight according to powers of temperature. Because after all, I started with a zero temperature configuration, and I'm hoping that I'm expanding for small fluctuations. So my idea is: I know the ground state; I want to see what happens if I go slightly beyond that. And the reason for fluctuations is temperature, so organize terms in this effective Hamiltonian for the pi's in powers of temperature. And by temperature I mean the inverse of this coupling constant K, because, again, if I go through my old derivation, you can see that I have minus beta H, so K0 should be inversely proportional to temperature. K is proportional to K0, so it should be inversely proportional to temperature. So up to some overall coefficient, let's just define temperature as 1 over K. Now we see that at the level that we were looking at things before, from this term it's kind of like a Gaussian form, where I have something like K, which is the inverse temperature, times pi squared. So just on dimensional grounds, up to functional forms, etc., we expect pi squared to be proportional to temperature at the 0th order, if you like. Because, again, if temperature goes to 0, there are going to be no fluctuations. As I go away from 0 temperature, the average fluctuation will be 0, but the average squared will be proportional to temperature. It all makes sense. So then if I look at this term, I see that dimensionally, it is inverse temperature times pi squared, which is of the order of temperature to the 0. So this is dimensionally t to the 0. Whereas if I start to expand this log, I can expand it as minus pi squared minus pi to the 4th over 2 minus pi to the 6th over 3, and so forth. You can see that subsequent terms in this series are higher and higher order in this temperature. This will be of the order of temperature-- temperature squared, temperature cubed. And already we can see that this term is small compared to this term.
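The ordering just described rests on the Taylor series log(1 - x) = -x - x^2/2 - x^3/3 - ..., with x standing in for pi squared, of order T. A quick numerical check of the truncated series (the range of x is an arbitrary small-fluctuation window):

```python
import numpy as np

# With <pi^2> ~ T, the terms of log(1 - pi^2) = -pi^2 - pi^4/2 - pi^6/3 - ...
# are of order T, T^2, T^3, ...  Check the three-term truncation for small x.

pi2 = np.linspace(0.0, 0.2, 50)          # stands in for pi^2, of order T
exact = np.log(1.0 - pi2)
three_terms = -(pi2 + pi2**2 / 2.0 + pi2**3 / 3.0)
```

The residual after three terms is of fourth order in pi squared, i.e. order T to the 4th, consistent with the bookkeeping above.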
So although this is a Gaussian term, and I would maybe have been tempted to put it in the 0th order Hamiltonian, if I'm organizing things according to orders of temperature, my 0th order will remain this. These will be the contributions at first order, 2nd order, 3rd order. And similarly, I can start expanding this. The square root is 1 minus pi squared over 2, so when I take the gradient of minus pi squared over 2, I will get pi gradient of pi. You can see that the lowest order term in this expansion will be (pi gradient of pi) squared, and then higher order terms. And this is something that is of order pi to the 4th, so it gives the order of temperature squared multiplied by inverse temperature. So this is a term that is contributing at order of t to the 1, and the subsequent terms at t squared and beyond. So basically, at order of t to the 0, I have as my beta H0 just the integral d^d x K/2 gradient of pi squared. While at order of t to the 1st power, I will have a correction u1, which has two types of terms. One term is this K/2 integral d^d x (pi gradient of pi) squared, coming from what was the gradient of sigma squared. And then from here, I will get a minus rho over 2 integral d^d x pi squared. And then there will be other terms at order of t squared, u2, and so forth. So I just reorganized the terms in this interacting Hamiltonian in what I expect to be powers of this temperature. Now, one of the first things that we will do is to look at this and realize that we can decompose into modes by going to Fourier space-- I do a Fourier transform. This thing becomes K/2 integral d^d q divided by (2 pi)^d, q squared, pi tilde of q squared. So let's write it as pi tilde of q. And again, as usual, we will end up needing to calculate averages with this Gaussian weight. And what we have here is that for pi alpha of q1, pi beta of q2, with this 0th order weight, the components have to be the same, the sum of the two momenta has to be 0, and if so, I just get 1 over K q squared.
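The statement that the fluctuations of a single Gaussian mode have variance 1/(K q^2), and so grow linearly with the temperature t = 1/K, can be illustrated by sampling. This is my sketch for one scalar mode, not part of the lecture:

```python
import numpy as np

# For a single Fourier mode with Gaussian weight exp(-K q^2 |pi|^2 / 2),
# the variance is <|pi(q)|^2> = 1/(K q^2): fluctuations are proportional
# to the temperature t = 1/K. (Sketch for one real scalar mode.)
rng = np.random.default_rng(0)
K, q = 4.0, 0.7
samples = rng.normal(scale=1.0 / np.sqrt(K * q**2), size=200_000)
assert abs(samples.var() - 1.0 / (K * q**2)) < 1e-2
```

Doubling the temperature (halving K) doubles the sample variance, which is the dimensional counting used to organize the expansion.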
Now I can similarly Fourier transform the terms that I have over here. So the first interaction becomes rather complicated. We saw that when we have something that is four powers of a field, and we go to Fourier space, rather than having one integral over x, we end up with multiple integrals. So I will have, essentially, Fourier transforms of four factors of pi. For each one of them I will have an integration, so I will have d^d q1, d^d q2, d^d q3. And the reason I don't have the 4th one is because the integration over x forces the four q's to add up to 0. So I will have pi alpha of q1, pi alpha of q2. Now note that this pi gradient of pi came from a (pi gradient of pi) squared, which means that the two pi's that go with this carry the same index, whereas for the next factor, pi gradient of pi, they came from different ones. So I have pi beta of q3, pi beta of minus q1 minus q2 minus q3. Now if I had just written this, this would have been the Fourier transform of my usual 4th order interaction. But that's not what I have, because I have two additional gradients. And so for two of these factors I actually had to take the gradient first. And every time I take a gradient, in Fourier space I will bring down a factor of i q. So I will have here i q1 dotted with i q, let's say, 3. So the Fourier transform of the leading quartic interaction that I have is actually the form that I have over here. The other term is trivial to Fourier transform-- it's pi squared, so when I Fourier transform that, I simply get pi alpha of q squared. Yes? AUDIENCE: Does it matter which q's you're pulling out as the gradient? PROFESSOR: You can see that these four pi's over here in Fourier space appear completely interchangeably. So it really doesn't matter, no. Because by permutation and reordering these integrations, you can move it into something else. Actually, I shouldn't quite say that-- I'll draw a diagram that corresponds to this, which will make one constraint apparent.
So when I was drawing interaction terms for the m to the 4th theory of Landau-Ginzburg, where I had something that has 4 factors, I would draw something that has 2 lines, and the 2 lines had 2 branches. And the branching was supposed to indicate that 2 of them were carrying one index, and 2 of them were carrying the same other index. Now I have to make sure that I indicate that the branches of these things additionally have these gradients, or the i q's, associated with them. And I make the convention that the branch, or the q, that has the gradient on it, I will mark with a dashed line. Now you can see that if I go back and look at the origin of this, one of the gradients acts on one pair of pi's, and the other acts on the other pair of pi's. So the other dashed line I cannot put on the same branch, but I have to put over here. So the one constraint that I have to be careful of is that these i q's should pick one from alpha and one from beta. This is the diagrammatic representation. So what I can do is to now start doing perturbation theory in these interactions. We want to do the lowest order, to see the first correction because of fluctuations and interactions of these Goldstone modes. But rather than do things in two steps-- first doing perturbation, encountering difficulty, and then converting things to a renormalization group, a story which we've already seen happen in dealing with the Landau-Ginzburg model-- let's immediately do the perturbative renormalization group of this model. So the way I'm supposed to do things is to note that all of these theories came from some underlying lattice model. I was careful to draw for you the lattice model originally. Which means that there is some cutoff here, some lattice cutoff. Which means that when I go to Fourier space, there is always some kind of a range of wave numbers or wave vectors that I have to integrate over.
So essentially, my pi's are limited-- after I do a little bit of averaging, if you like-- in that there is some shortest wavelength, and a corresponding largest wave number, lambda. And the procedure for RG, the first step, was to think about all of these pi modes and break them into two pieces: ones that correspond to the short wavelength fluctuations that we want to get rid of, and ones that correspond to the long wavelength fluctuations that we would like to keep. So my task is as follows. I have to really calculate the partition function over here, which in its Fourier representation indicates averaging over all modes that are in this range. But those modes I'm going to represent as D pi lesser, as well as D pi greater. Each one of these pi's is, of course, an n minus 1 component vector. And I have a weight that I obtain by substituting pi lesser and pi greater in the expressions that I have up there. And we can see already that the 0th order term, as usual, nicely separates out into a contribution that we have for pi lesser, a contribution that we have for pi greater, and that the interaction terms will then involve both of these modes. And in principle, I could proceed and include higher and higher orders. Now I want to get rid of all of the modes that are here, so that I have an effective theory governing the modes that are at longer wavelengths, once I have gotten rid of the short wavelength fluctuations. So formally, once I have integrated over pi greater in this double integral, I will be left with the integration over the pi lesser field. And the exponential gets modified as follows. First of all, if I were to ignore the interactions at the lowest order, the effect of doing the integration of the Gaussian modes that are out here will, as usual, be a contribution to the free energy of the system coming from the modes that I integrated out.
And clearly it also depends-- I forgot to say that the range of integration is now between lambda over b and lambda, where b is my renormalization factor. Yes? AUDIENCE: Because you're coming from a lattice, does the particular shape of the Brillouin zone matter more now, or still not really? PROFESSOR: It is in no way different from what we were doing before in the Landau-Ginzburg model. In the Landau-Ginzburg model, I could have also started by putting spins, or whatever degrees of freedom, on a lattice. And, say, if I was on a hypercubic lattice, I would have had Brillouin zones such as this. And the first thing that we always said was that integrating all of these things gives you an additional, totally harmless component of the free energy that has no singular part in it. So we're always searching for the singularities that arise at the core of this integration. Whatever you do with the boundaries, no matter how complicated shapes they have, they don't matter. So going back to here. If we had ignored the interactions, integrating over pi greater would have given me this contribution to the free energy, and, of course, beta H0 of pi lesser would have remained. But now the effect of having the interactions, as usual, is like averaging e to the minus u with the weight over here. So I would have an average such as this. And we do the cumulant expansion, as usual. And the first term I would get is the average of this quantity with respect to the Gaussian weight, integrating out the high wave number, high frequency modes, plus higher order corrections. Yes? AUDIENCE: So right here you're doing two expansions kind of simultaneously. One is you have the non-linear model that you're expanding in powers of temperature. And then you further expand it in cumulants to be able to account for that. PROFESSOR: No, because I can organize this expansion in cumulants in powers of temperature. So this u has an expansion that is u1, u2, etc., organized in powers of temperature.
AUDIENCE: OK. PROFESSOR: And then when I take the first cumulant, you can see that the average, the lowest order term, will be-- AUDIENCE: The first cumulant is linear in temperature, and that's what you want? PROFESSOR: Right. So I'm being consistent also with the perturbation that I had originally stated. Actually, since I drew a diagram for the first term, I should state that this term, since we are now also thinking of it as part of the correction u1, I have to regard as 2 factors of pi. So I could potentially represent it by a diagram such as this. So diagrammatically, my u1, whose average I have to take, is composed of these two entities. So what I need to do is to take the average of that expression. So I can either do that average over here-- take the average of this expression-- or do it diagrammatically. Let us go by the diagrammatic route. So essentially, what I'm doing is that every line that I see over there that corresponds to pi, I am really decomposing into two parts. Either I draw a straight line, which corresponds to the pi lesser that I am keeping, or I replace it with a wavy line, which is the pi greater that I would be averaging over. So for the first diagram, I have essentially something like this-- actually, the second diagram, the one that comes from rho pi squared. It's actually trivial, so let's go through the possibilities. I can either have both of these be pi lessers-- sorry, pi greaters. So this is pi greater, pi greater. And when I have to do an average, then I can use the formula that I have in red about the average of 2 pi greaters. And that would essentially amount to closing this thing down. And numerically, it would give me a factor of minus rho over 2, integral d^d k over (2 pi)^d in the interval between lambda over b and lambda. And I have the average of pi alpha pi alpha, giving a factor of delta alpha alpha. Summing over alpha will give me a factor of n minus 1. And the average would be something like 1 over K k squared.
So I would have to evaluate something like this. But at the end of the day, I don't care about it. Why don't I care about it? Because clearly the result of doing this is another constant. It doesn't depend on pi lesser. So this is an addition to the free energy-- once I integrate modes between lambda over b and lambda, there is a contribution to the free energy that comes from this term. It doesn't change the weight that I have to assign to configurations of the pi lesser field. So that's one possibility. Another possibility is I have one of them being a pi greater, one of them being a pi lesser. Clearly, when I try to take an average of this form, I have an average of one factor of pi with a Gaussian weight that is even. So this is 0. We don't have to worry about it. And finally, I will get a term which is like this, which doesn't involve any integrations, and really amounts to taking that term that I have over there and just making both of those pi's be pi lessers. So it's essentially the same form that will reappear, now the integration being from 0 to lambda over b. So we know exactly what happens with the term on the right. No useful or important information emerges from it. If I go and look at this one, however, depending on where I choose to put the solid lines or the wavy lines, I will have a number of possibilities. One thing that is clearly going to be there is where I essentially put pi lesser for each one of the branches. Essentially, when I write it here like this, it is reproducing the integration that I have over there, except that, again, it only goes between 0 and lambda over b. And now I can start adding wavy lines. For any diagram that has one wavy line-- and I can put the wavy line either on that type of branch, or I can put it on this type of branch-- there is only one factor of pi greater. By symmetry, it will go to 0, like this. There will be things that will have three factors of pi greater.
And all of these-- again, because I'm dealing with an odd number of factors of pi greater that I'm averaging-- will give me 0. There's one other thing that is kind of interesting. I can have all four of these lines wavy. And if I calculate that average, there are a number of ways of contracting these four pi's that will give me nontrivial factors. But these are also contributions to the free energy. They don't depend on the pi's that I'm leaving out. So we don't have to worry about any of these diagrams so far. Now I have dealt with the 0, 1, 3, and 4 wavy lines. So I'm left with 2 wavy lines and 2 straight lines. So let's go through those. I could have one branch be wavy lines and one branch be straight lines. And then I take the average of this object. I have a pi greater here and a pi greater here, and therefore I can do an average of those two pi greaters. That average will give me a factor of 1 over K k squared, and I have to integrate over that. But one of these branches had this additional dashed line that corresponds to having a factor of k. So the integral that I have to do involves something like this. And when I integrate over the entirety of the k integration, this is an odd power, and so that will give me a 0 also. So this is also 0. And there's another one that is like this, where I go like this. And although I do the same thing now with two different branches, the k integration is the same, and that vanishes too. So you say, is there anything that is left? The answer is yes. So the things that are left are the following. I can do something like this, or I can do something like this. So these are the two things that survive and will be nontrivial. You can see that this one will be proportional to pi lesser squared, while this one is going to be proportional to gradient of pi lesser squared. So this one will renormalize, if you like, this coefficient, whereas this one will modify and renormalize our coupling strength.
So it turns out that that is really the more important one. But let's calculate the other one too. Yes? AUDIENCE: Why do we connect the ones down here with the loops, but leave all the ends free in the other ones? Is that just a matter of how you write the diagram, or does it signify something? PROFESSOR: Could you repeat that? I'm not sure I understand. AUDIENCE: So when we had two wavy lines, both coming out of one of the diagrams, the lines just stop. We connected them together when we were writing the ones on the bottom line. PROFESSOR: So basically, I start with an entity that has two solid lines and two wavy lines. And what I'm supposed to do is an integration-- an average of this over these pi greaters. Now the process of averaging essentially joins the two branches. If I had a momentum here, q1, and a momentum here, q2, if I had an index here, alpha, and an index here, beta, that process of averaging is equivalent to saying the same momentum has to go through, the same index has to go through. There is no averaging that is being done on the solid lines, so it is meaningless to do anything to them. So this entity means the following. I have K/2. Let's call the legs 1, 2, 3, and 4. Integrals over q1 and q2-- but q1 and q2, you can see, are explicitly solid, so these are integrations from 0 to lambda over b. I have an integration over q3, which is over a wavy line, so it's between lambda over b and lambda. If I call this branch alpha and this branch beta, from here I actually have pi lesser alpha of q1, pi lesser beta of q2. I should have put them outside the integration, but it doesn't matter. And then here I had pi alpha of q3, pi beta of q4. But these also had the dashed lines associated with them, so I actually have here an i q3, an i q4. Again, q4 has to stand for minus q1 minus q2 minus q3, from this constraint. And then I have the pi pi average here, which gives me, because of the averaging, delta alpha beta, and a delta function that forces q3 plus q4 to be 0.
And then I have 1 over K q3 squared. Now q3 plus q4 is the same thing as minus q1 minus q2, if you like, because of that constraint. So I can take that outside the integration-- there's no problem. I have one integration left, which involves 1 over K q3 squared, but the two indices then become the same. These pi's I will take outside. I note that because of this constraint, q2 being minus q1, these two really become one integration that goes between 0 and lambda over b. And the indices have been made to be the same. So I have pi alpha of q-- this q-- squared. Then I have the integration from lambda over b to lambda, d^d q3 over (2 pi)^d. Here I have i q3 times i q4. But q4 was set to be minus q3, so the two i's and the minus cancel each other, and I will get a factor of q3 squared. And then here I have a factor of 1 over K q3 squared. So overall, we can see that the K's and the q3 squareds cancel. I have one factor of the integral d^d q over (2 pi)^d of pi alpha of q squared-- and these are pi lessers. For the quantity that I integrated out, the q3 squareds vanish, so I really have the integral of d^d q3 over (2 pi)^d. Now if I had done the integral of d^d q3 over (2 pi)^d all the way from 0 to lambda, what would I have gotten? If I multiply by the volume, that would be the number of modes. So this is, in fact, N/V, which is the quantity that I have called the density rho. But what I'm doing is, in fact, just a fraction of this integral, from lambda over b to lambda. If I had gone all the way from 0 to lambda, I would have had the full rho, but I'm only taking this fraction of it, so the answer is rho times 1 minus b to the minus d. So the overall thing here gets multiplied by rho times 1 minus b to the minus d. It just corrects that factor of density that we have. We'll see shortly it's not something to worry about. The next one is really the more interesting thing.
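The mode-counting statement just used-- that the shell lambda/b < q < lambda contains a fraction 1 minus b^(-d) of all the modes in the ball q < lambda-- can be verified numerically. This is my sketch; it integrates the spherical-shell measure q^(d-1) dq, with the angular factors and the cutoff scale dropping out of the ratio:

```python
import numpy as np

# Fraction of modes in the shell Lambda/b < q < Lambda, relative to the
# full ball q < Lambda: the radial measure is q^(d-1) dq, so the
# fraction is 1 - b^(-d).
def shell_fraction(d, b, n=100_000):
    lam = 1.0  # cutoff (arbitrary units; drops out of the ratio)
    q = np.linspace(lam / b, lam, n)
    f = q**(d - 1)
    # trapezoidal rule for the shell integral
    shell = np.sum((q[1:] - q[:-1]) * 0.5 * (f[1:] + f[:-1]))
    ball = lam**d / d  # exact integral of q^(d-1) from 0 to lam
    return shell / ball

for d, b in [(2, 2.0), (3, 1.5)]:
    assert abs(shell_fraction(d, b) - (1 - b**(-d))) < 1e-6
```

This is exactly the factor rho(1 - b^(-d)) that corrects the density term in the text.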
So here we have this diagram, which is K/2 integral from 0 to lambda over b. Essentially, I will get the same structure. This time let me write the pi alpha lesser of q1, pi beta lesser of q2 outside. For the last contraction, I have the integral from lambda over b to lambda of d^d q3 over (2 pi)^d. And I will have the same delta function structure, except that now these factors of i q become i q1 times i q2, so I can put them outside already. And then I have here the delta function. So the only difference is that previously the q squared was inside the integration; now the q squared is outside the integration. So the final answer will be K/2 integral from 0 to lambda over b of d^d q over (2 pi)^d, q squared, pi lesser of q squared. And the coefficient of that will look like what I had before, except without this factor. So it's the integral from lambda over b to lambda of d^d q3 over (2 pi)^d, 1 over K q3 squared. So once I do this calculation explicitly, the answer is going to be a weight that only depends on this pi lesser that I'm keeping, and I would indicate that by beta H tilde, as usual, that depends on this pi lesser. And we now have all of the terms that contribute to this beta H tilde. So let's write them down. There are a number of terms that correspond to changes in the free energy. So we have a V delta f at the 0th order, and a contribution to delta f at the first order that comes from a bunch of diagrams, both from here as well as from here. But we don't really care about them. Then we have types of terms that look like this. I can write them, after Fourier transformation, back in real space. So let me do that. I have integral d^d x in real space-- realizing that my cutoff has been changed to b a-- of things that are proportional to gradient of pi lesser squared. Now gradient of pi lesser squared is the thing that in Fourier space becomes q squared pi lesser squared. And what is the coefficient? I had K/2, which comes from here.
If I don't do anything, I just have the 0th order acting on these modes. But I just calculated a correction to that, which is something like this. So in addition to what I had before, I have this correction, K/2 times this integral. You can see that this integral is proportional to 1/K, and I'm going to call the result of the integral Id of b, because it depends on both the dimension of the integration as well as the factor b. So I have 1/K Id of b. So that's one type of term that I have generated. I started from this 0th order form, and I saw that once I make the expansion of this to the lowest order, I will get a correction to that. And actually, just to sort of think about it in terms of formulae, you see what happened was that the first term that I had over here was pi gradient of pi, repeated twice. And what I did was I essentially took an average of two of these pi's and got the correction to gradient of pi squared, which is what was computed here. The next term that I have is this term by itself, now calculated with the pi lessers only-- so this object. So I will write it as K/2 (pi lesser gradient of pi lesser) squared. There's no correction. The final term that I have is this term. So I have minus rho over 2 pi lesser squared. And then the correction that I had was exactly of the same form-- it was 1 minus (1 minus b to the minus d). The rho over 2 is going to be hung onto both of them-- you can see that there's a rho and there's a 2-- and so basically I have to add this to that. And to first order, this is the entirety of what I will get. This, however, is just the coarse graining. It's the first step of the RG, and it has to be followed by the next two steps of the RG. Here, you look at your field, and you can see that the field is much coarser, because the short distance cutoff, rather than being a, has been switched to b a.
So you define your x prime to be x/b, so that you shrink, and you get the same coarse pixel size that you had before. And you also have to do a change in pi. So you replace pi lesser with some factor zeta times pi prime, so that the contrast will look right. So once I do that, you can see that the effect of this transformation is that this coupling K will change to K prime. Because of the change of x to b x prime, from the integration I will get b to the d. From the two derivatives, I will get b to the minus 2. From the fact that I have replaced two pi lessers with pi primes, I will get a factor of zeta squared. And then I will have this factor K times 1 plus 1/K Id of b. So this is the recursion formula that we will be dealing with. Now there are some subtleties that go with this formula that are worth thinking about. Our original system really had one coupling parameter K. Because of the constraints of the full symmetry of this field S, part of it became the quadratic part, which was the free field theory, and part of it made the interactions. But because of this rotational symmetry, the form of that interaction was fixed and had to be proportional to K. Now if we do our renormalization group correctly, the full symmetry that we had has to be maintained at all levels. Which means that the functional form that I should end up with should have the same property, in that the higher order coefficient should be related to the lower order coefficient exactly the same way as we had over there. And at least at this stage, it looks like that did not happen. That is, we got the correction to this term, but we didn't get the correction to this term. We shouldn't be worried about that right here, because we calculated things consistently to order of t, and this was already a term that was order of t. So the real check is, if you go and calculate the next order correction, you had better get a correction to this term at next order that matches exactly this.
People have done that and have checked that, and that indeed is the case. So everything is consistent with this. There are other kinds of consistency checks that have happened all over the place-- like the fact that this 1 minus (1 minus b to the minus d) came out to be b to the minus d, so that the density rescales consistently with the fact that you shrunk the lattice after RG so that the pixel size was the same as before. You may worry that that's not entirely the case, because when I do this, I will also have a factor of zeta squared. But it turns out that zeta is 1 plus order of temperature, as we will shortly see. So again, everything is consistent at the level at which we've calculated things. And the only change is this factor. Now the one thing that we haven't calculated is what this zeta is. So to calculate zeta, I note the following. I start with a unit vector that is pointing, at 0 temperature, along this direction. Now because of fluctuations, this is going to be kind of rotating around this. So there is this vector that is rotating. If you average it over some time, what you will see is that the average in all of these transverse directions is 0. The variance is not 0, but the average is 0. But because of those fluctuations, the effective length that you see in this direction has shrunk. How much it has shrunk by is related to this rescaling factor that I should choose. And so it's essentially the average of something like the square root of 1 minus pi squared-- but really it is the pi lesser squared that I'm averaging over-- which at lowest order is 1 minus 1/2 the average of pi lesser squared. Now pi is an n minus 1 component vector, so each one of the components will give you one contribution. The contribution that you get from one of these is simply the average of pi squared, which is 1 over K k squared, which I have to integrate over k's that lie between lambda over b and lambda. And you can see that this is 1 minus 1/2 times (n minus 1).
It is inversely proportional to K, and the integration that I have to do is precisely the same integration as here. So it is, again, this Id of b. So let me write the answer, say a couple of words about it, and then we will deal with it next time. So K prime-- our new interaction parameter-- is going to be b to the d minus 2, times one factor of 1 plus 1/K Id of b, and then one factor of the square of zeta, which gives 1 minus (n minus 1) over K Id of b, all multiplying K. So we'll analyze this more next time around. But I thought I would give you the physical reason for how this interaction parameter changes. Let's say we are in two dimensions, so let's forget about the b to the d minus 2 factor. In two dimensions, we can see that there is one factor that says that at finite temperature, the interaction is going to get weaker. And the reason for that is precisely what I was explaining over here. That is, you have some kind of a unit vector, but because of its fluctuations, it will look shorter, and it is less likely to be ordered. The more components it has to fluctuate in, the shorter it will look. So there is that term. So if this was the only effect, then K would become weaker and weaker, and you would have disorder. But this other factor says that it actually gets stronger because of the interactions that you have among the modes. And to show this to you, you can experiment with it yourself. So this is a sheet of paper. This bend is an example of a Goldstone mode, because I could have rotated this sheet without any cost of energy-- so this bend is a Goldstone mode that costs very little energy. Now this paper has the kinds of constraints that we have over here. And because of those constraints, if I make a mode in this direction, I'm not going to be able to bend it in the other direction. So clearly the modes that you have in this direction and in this direction are coupled. That's kind of an example of something like this.
Now while it is easy to do this bend, because of this coupling, if thermal fluctuations have created modes at shorter wavelength-- and I have already created those modes over here-- then, as you can check yourself, this is harder to bend compared to this. You can see this already. So that's the effect that you have over here. So it's the competition between these two, and which one wins, you can see, depends on whether n is larger than 2 or less than 2. So essentially, for n that is larger than 2, you'll find that this term wins, and you get disorder in two dimensions. And if n is less than 2, you will get order, like we know the Ising model can be ordered. But there are other things that can be captured by this expression, which we will look at next time. |
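The competition described in the lecture can be integrated numerically. The sketch below is my addition: it combines the two factors of the recursion relation to first order, giving dt/dl = (n-2) t^2 / (2 pi) in d = 2 for the temperature t = 1/K, where I have used the standard d = 2 shell-integral result Id(b) = ln(b) / (2 pi):

```python
import numpy as np

# One-loop flow in d = 2.  Expanding K' = K (1 + Id/K)(1 - (n-1) Id/K)
# to first order gives dK/dl = -(n-2)/(2*pi), i.e. for t = 1/K:
#     dt/dl = (n-2) t^2 / (2*pi)
# (Sketch: lowest order only, d = 2 exactly, Id per unit ln b = 1/(2*pi).)
def flow_t(t0, n, l_max, steps=10_000):
    t, dl = t0, l_max / steps
    for _ in range(steps):  # forward-Euler integration of the flow
        t += (n - 2) * t**2 / (2 * np.pi) * dl
    return t

t_heis = flow_t(0.1, n=3, l_max=50)   # n > 2: t grows, disorder wins
t_ising = flow_t(0.1, n=1, l_max=50)  # n < 2: t shrinks, order survives
assert t_heis > 0.1 and t_ising < 0.1
```

Note that n = 2 (the XY model) is marginal at this order: the right-hand side vanishes, which is a hint of the special physics of that case.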
MIT_8334_Statistical_Mechanics_II_Spring_2014 | 19_Series_Expansions_Part_5.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. OK, let's start. So let's go back to our starting point for the past couple of weeks: the square lattice, let's say, where at each site we assign a binary variable sigma, which is plus or minus 1, and a weight that tends to make nearest neighbor spins parallel to each other. So e to the plus K if they are parallel, penalized with e to the minus K if they are anti-parallel-- summed, of course, over all pairs of nearest neighbors. And the partition function is obtained by summing over 2 to the n configurations. This is set up to give us a function of the strength of this coupling, which is some energy divided by temperature. And we expect this-- at least in two and higher dimensions-- to capture a phase transition. And the way that we have been proceeding with this is to rewrite this factor as the hyperbolic cosine of K, times 1 plus a variable t-- which is the hyperbolic tangent of K-- times sigma i sigma j. And then this becomes a cosh K to the number of bonds, which is 2n on the square lattice. And then, expanding these factors, we saw that we could get things that are either 1 from each bond, or a factor that was something like t sigma sigma. And then summing over the two values of sigma would give us 0, unless we added another factor of sigma through another bond. And going forth, we had to draw these kinds of diagrams where at each site, I have an even number of bonds selected. Then summing over the sigmas would give me a factor of 2 to the n. And so then I have a sum over a whole bunch of configurations. There is, certainly, 1.
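The rewriting of each bond factor used here can be checked directly: since sigma i sigma j only takes the values plus or minus 1, exp(K s) equals cosh K plus or minus sinh K, which is exactly cosh(K)(1 + tanh(K) s). A two-line check, added by the editor:

```python
import math

# The starting identity of the high-temperature expansion: for s = +/-1,
#   exp(K * s) = cosh(K) * (1 + tanh(K) * s)
# because exp(K*s) only takes the two values cosh(K) +/- sinh(K).
K = 0.73
for s in (+1, -1):
    assert math.isclose(math.exp(K * s), math.cosh(K) * (1 + math.tanh(K) * s))
```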
There are configurations that are composed of one way of drawing an object on the lattice such as this one, or objects that correspond to drawing two of these loops, and so forth. So that's the expression for the partition function. And what we are really interested in is the log of the partition function, which gives us the free energy in terms of thermodynamic quantities that potentially will tell us about the phase transition. So here we will get a log 2 to the n. Well actually, you want to divide everything by n so that we get the intensive part. So here we get log 2 hyperbolic cosine squared of k. And then I have to take the log of this expression that includes things that are one loop, disjointed loops, et cetera. And we've seen that a particular loop I can slide all over the place-- so it will have a factor of n-- whereas things that are multiple loops have factors of n squared, et cetera, which are incompatible with this extensivity. So it was very tempting for us to do the usual thing and say that the log of a sum that includes these multiple occurrences of the loops is the same thing as a sum over the configurations that involve a single loop. And then we have to sum over all shapes of these loops. And each loop will get a factor of t per the number of bonds that are occurring in it. Of course, what we said was that this equality does not hold, because if I exponentiate this term, I will generate things where the different loops will coincide with each other, and therefore create types of terms that are not created in the original sum that we had over there. So this sum over phantom loops neglected the condition that these loops, in some sense, have some material to them and don't want to intersect with each other. Nonetheless, it was useful, and we followed up this calculation. So let's repeat what the result of this incorrect calculation is. So we have log of 2 hyperbolic cosine squared k, 1 over n.
Then we said the particular way to organize the sum over the loops is to sum over the length of the loop. So I sum over the length of the loop and count the number of loops that have length l. All of them will be giving me a contribution that is t to the l. So then I said, well, let's, for example, pick a particular point on the lattice. Let's call it r. And I count the number of ways that I can start at r, do a walk of l steps, and end at r again. We saw that for these phantom loops, this w had a very nice structure. It was simply the matrix that tells me about one step, raised to the power l. This was the Markovian property. There was, of course, an important thing here, which said that I could have set the origin of this loop at any point along the loop. So there is an over-counting by a factor of l, because the same loop would have been constructed with different points indicated as the origin. And actually, I can go around the loop clockwise or anti-clockwise, so there was a factor of 2 because of this degeneracy of going clockwise or anti-clockwise when I perform a walk. And then over here, there's also an implicit sum over this starting point and endpoint. If I always start and end at the origin, then I will get rid of the factor of n. But it is useful to explicitly include this sum over r, because then you can explicitly see that the sum over r of this object is the trace of that matrix. And I can actually interchange the order of the trace and the summation over l. And when that happens, I get log 2 cos squared k exactly as before. And then I have 1 over n. I have sum over r replaced by the trace operation. And then sum over l of t T raised to the l divided by l is the expansion for minus log of 1 minus t T. And there's the factor of 2 over there that I have to put over here. So note that this plus became minus, because the expansion for log of 1 minus x is minus x minus x squared over 2 minus x cubed over 3, et cetera.
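This Markovian counting of closed walks can be sanity-checked numerically. The sketch below is not from the lecture: `closed_walks` is a hypothetical helper that applies the one-step matrix T by brute force, and the comparison uses the standard combinatorial fact that on the square lattice an l-step closed walk decouples (after a 45-degree rotation of coordinates) into two independent one-dimensional walks, each returning in C(l, l/2) ways.

```python
from math import comb

def closed_walks(l):
    # Propagate the number of walks ending at each site, one step at a time;
    # this is exactly applying the one-step matrix T of the lecture l times.
    counts = {(0, 0): 1}
    for _ in range(l):
        new = {}
        for (x, y), c in counts.items():
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                key = (x + dx, y + dy)
                new[key] = new.get(key, 0) + c
        counts = new
    return counts.get((0, 0), 0)  # walks that return to the origin

# Closed form: two decoupled 1D walks, each returning in C(l, l/2) ways.
for l in (2, 4, 6, 8):
    assert closed_walks(l) == comb(l, l // 2) ** 2
```

The dictionary-propagation is just the trace-of-T-to-the-l statement in disguise: each step multiplies the vector of walk counts by the adjacency matrix of the lattice.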
And the final step that we did was to note that the trace I can calculate in any basis. And in particular, this matrix T is diagonalized by going to the Fourier representation. In the Fourier representation, the trace operation becomes a sum over all q values. Sum over all q values, I go to the continuum and write as n integral over q, so the n's cancel. I will get 2 integrations over q. These are essentially each one of them, qx and qy, in an interval-- let's say minus pi to pi, doesn't matter, an interval of size 2 pi. So that's the trace operation. Log of 1 minus t times the matrix that represents walking along the lattice, represented in Fourier. And so basically at a particular site, we can step to the right or to the left. So that's e to the i qx, e to the minus i qx, e to the i qy, e to the minus i qy. Adding all of those up, you get 2 times cosine of qx plus cosine of qy. So that was our expression. And then we realized, interestingly, that whereas this final expression certainly was not the Ising partition function that we were after, it was, in fact, the partition function of a Gaussian model where at each site I had a variable whose variance was 1, and then I had this kind of coupling, rather than with the sigma variables, with these Gaussian variables that go from minus infinity to infinity. But then we said, OK, we can do better than that. And we said that log z over n actually does equal a very similar sum. It is log 2 hyperbolic cosine squared k, and then I have 1 over n. Sum over all kinds of loops where I have a similar diagram that I draw, but I put a star. And this star implied two things-- that just like before, I draw all kinds of individual loops, but I make sure that my loops never have a step that goes forward and backward. So there was no U-turn. And importantly, there was a factor of minus 1 to the number of times that the walk crossed itself.
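The Fourier diagonalization can be checked directly on a small periodic lattice: a plane wave should be an eigenvector of the one-step walk operator with eigenvalue 2(cos qx + cos qy). A minimal sketch, where the torus size N and the helper `step` are my own choices, not from the lecture:

```python
import cmath
from math import pi

N = 6  # linear size of a small test torus

def step(v):
    # One application of the walk matrix T: each site collects the
    # amplitude of its four nearest neighbors (periodic boundaries).
    return {(x, y): v[((x + 1) % N, y)] + v[((x - 1) % N, y)]
                  + v[(x, (y + 1) % N)] + v[(x, (y - 1) % N)]
            for (x, y) in v}

# A plane wave with allowed momenta qx = 2 pi m / N, qy = 2 pi n / N
m, n = 1, 0
qx, qy = 2 * pi * m / N, 2 * pi * n / N
v = {(x, y): cmath.exp(1j * (qx * x + qy * y))
     for x in range(N) for y in range(N)}

lam = 2 * (cmath.cos(qx) + cmath.cos(qy))  # predicted eigenvalue
Tv = step(v)
assert all(abs(Tv[r] - lam * v[r]) < 1e-12 for r in v)
```

With m = 1, n = 0 on N = 6 the predicted eigenvalue is 2(cos 60° + 1) = 3, and the assertion confirms the plane wave diagonalizes T.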
And we showed that when we incorporate both of these conditions, we can indeed exponentiate this expression and get exactly the same diagrams as we had on the first line, all coming with the correct weights, and none of the diagrams that had multiple occurrences of a bond would occur. So then the question was, how do you calculate this, given that we have this dependence on the number of crossings, which offhand may look as if it is something that requires memory? And then we saw that, indeed, just like the previous case, we could write the result as a sum over walks that have a particular length l. Right here, we have the factor of t to the l. Those walks could start and end at a particular point r. But we also specified the direction mu along which you started. So previously I only specified the origin. Now I have to specify the starting point as well as the direction. I have to end at the same point and along the same direction to complete the loop. And these were accomplished by having these factors for walks that are of length l. So to do that, we can certainly incorporate this condition of no U-turn in the description of the steps that I take. So for each step, I know where I came from. I just make sure I don't step back, so that's easy. And we found that this minus 1 to the power of nc can be incorporated through a factor of e to the i over 2 times the sum of the changes of the orientation of the walker-- as I step through the lattice-- provided that I included also an additional factor of minus. So that factor of minus I could actually put out front. It's an important factor. And then there's the over-counting, but as before, the walk can go in either one of two directions and can have l starting points. OK, so now we can proceed just as before. Log 2 hyperbolic cosine squared k, and then we have a sum over l here, which we can, again, represent as the log of 1 minus t-- this 4 by 4 matrix T star. Going through the log operation will change this sign from minus to plus.
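The sign rule can be illustrated concretely: for a closed lattice walk, minus the exponential of i/2 times the total signed turning angle reproduces (-1) to the number of self-crossings. Below is a small sketch; the two example loops and the helper name `crossing_sign` are mine. A simple square loop has total turning of plus or minus 2 pi and no crossings, while a figure-eight has total turning 0 and one crossing.

```python
import cmath
from math import atan2

def crossing_sign(steps):
    # steps: unit moves tracing a closed loop on the square lattice.
    # Returns -exp((i/2) * sum of signed turning angles), which the
    # Kac-Ward/Feynman rule says equals (-1)**(number of self-crossings).
    total = 0.0
    for k in range(len(steps)):
        (ax, ay) = steps[k]
        (bx, by) = steps[(k + 1) % len(steps)]
        total += atan2(ax * by - ay * bx, ax * bx + ay * by)  # signed turn
    return -cmath.exp(0.5j * total)

R, L, U, D = (1, 0), (-1, 0), (0, 1), (0, -1)

square = [R, U, L, D]            # simple loop: 0 crossings, phase +1
fig8 = [R, R, U, L, D, D, L, U]  # figure-eight: 1 crossing, phase -1

assert abs(crossing_sign(square) - 1) < 1e-12
assert abs(crossing_sign(fig8) + 1) < 1e-12
```

The figure-eight here crosses itself once, at the vertex where the path passes straight through both horizontally and vertically; its two lobes wind in opposite senses, so the turning angles cancel and the phase comes out -1.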
I have the 1 over 2n as before. And I have to sum over r and mu, which are the elements that characterize this 4n by 4n matrix. So this amounts to doing the trace log operation. And then taking advantage of the fact that, just as before, Fourier transforms can at least partially block diagonalize this 4n by 4n matrix. I go to that basis, and the trace becomes an integral over q. And then I would have to do the trace of a log of a 4 by 4 matrix. And for that, I use the identity that the trace of the log of any matrix is the log of the determinant of that same matrix. And so the thing that I have to integrate is the log of the determinant of a 4 by 4 matrix that captures these steps that I take on the square lattice. And we saw that, for example, going in the horizontal direction would give me a factor of t e to the minus i qx. Going in the vertical direction-- up-- i qy. Going in the horizontal direction-- down. These are the diagonal elements. And then there were off-diagonal elements. So the next term here was to go and then bend upward. So that gave me, in addition to e to the minus i qx, which is the same forward step here, a factor of-- let's call it omega, so I don't have to write it all over the place. Omega is e to the i pi over 4. The next one was a U-turn, which is not allowed. And the next one was t e to the minus i qx omega star. And we could fill out similarly all of the other places in this 4 by 4 matrix. And then the whole problem comes down to having to evaluate a 4 by 4 determinant, which you can do by hand with a couple of sheets of paper to do your algebra. And I wrote for you the final answer: it is 1/2 times two integrals from minus pi to pi over qx and qy, divided by 2 pi squared, of the log of 1 plus t squared, squared, minus 2t times 1 minus t squared, times cosine of qx plus cosine of qy. This was the expression for the partition function. OK, so this is where we ended up last time.
Now the question is, we have here on the board two expressions, the correct one and the incorrect one-- the 2-dimensional Ising model and the Gaussian model. They look surprisingly similar, and that should start to worry us, potentially, because we expect that when we have some functional form, that functional form carries within it certain singularities. And you say, well, these two functions-- both of them are a double integral of log of something minus something times cosine plus cosine. So after all of this work, did we end up with an expression that has the same singular behavior as the Gaussian model? OK, so let's go and look at things more carefully. So in both cases, what I need to do is to integrate a function A that appears inside the log. There is an A for the Gaussian, and then there is this object-- let's call it A star-- for the correct solution. So the thing that I have to integrate is, of course, a function of q. So this is a function of the vector q, as well as of the parameter t, as a function of which I expect to have a phase transition. Now where could I potentially get some kind of a singularity? The only place that I can get a singularity is if the argument of the log goes to 0, because log of 0 is singular, its derivatives are singular, et cetera. So you may say, OK, that's where I should be looking. So where is this most likely to happen when I'm integrating over q? So basically, I'm integrating over qx and qy over a Brillouin zone that goes from minus pi to pi in both directions. And potentially somewhere in this I encounter a singularity. Let's come from the side of high temperatures, where t is close to 0. Then I have log of 1, no problem. As I go to lower and lower temperatures, the t becomes larger. Then from the 1, I start to subtract more and more with these cosines. And clearly the place where I'm subtracting most is right at the center, at q equals to 0.
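The contrast between the two integrands can be seen numerically: the Gaussian argument goes negative once t exceeds 1/4, where the theory breaks down, while the exact argument stays positive on both sides of the true transition near t of about 0.414. A quick scan; the function names are mine, the formulas are the two expressions on the board:

```python
from math import cos, pi

def a_gauss(t, qx, qy):
    # argument of the log in the Gaussian (phantom-loop) result
    return 1 - 2 * t * (cos(qx) + cos(qy))

def a_star(t, qx, qy):
    # argument of the log in the exact Onsager result
    return (1 + t * t) ** 2 - 2 * t * (1 - t * t) * (cos(qx) + cos(qy))

qs = [2 * pi * k / 50 for k in range(50)]
t = 0.3  # above the Gaussian threshold 1/4, below the exact tc = sqrt(2)-1

gauss_min = min(a_gauss(t, qx, qy) for qx in qs for qy in qs)
star_min = min(a_star(t, qx, qy) for qx in qs for qy in qs)
assert gauss_min < 0   # log of a negative number: Gaussian form breaks down
assert star_min > 0    # exact form stays positive through the transition
```

The minimum of both integrands over q sits at q = 0, so the scan is really probing the q = 0 values discussed next.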
So let's expand this in the vicinity of q goes to 0. And there what I see is that the cosines are approximately 1 minus q squared over 2. So I have 1 minus 4t, and then I have plus t times qx squared plus qy squared, which is essentially t q squared. And then I have order of higher powers of qx and qy. Fine. So this part is positive, no problem. I see that this part goes through 0 when I hit tc, which is 1/4. And this we had already seen-- that basically this is the place where the exponentially increasing number of walks-- as 4 to the number of steps-- overcomes the exponentially decreasing fidelity of information carried through each walk, which was t to the l. So 4 tc being 1, tc is 1/4. We are interested in the singularities in the vicinity of this phase transition, so we additionally go and look at what happens when t approaches tc, but from above on this side, because clearly if I go to t that is larger than 1/4, it doesn't make any sense. So t has to be less than 1/4. And so then what I have here is that I can write this as 4 times tc minus t. And this t, to lowest order, I can replace with tc-- tc q squared plus higher orders. This 4 I can write as 1 over tc. So the whole thing I can write as q squared plus 4 delta t divided by tc, with an overall factor of tc out front. And delta t I have defined to be tc minus t-- how close I am to the location where this singularity takes place. So what I'm interested in is not the whole form of this function, but only the singularities that it expresses. So I focus on the singular part of this Gaussian expression. I don't have to worry about that term. So I have minus 1/2. I have a double integral. The argument of the log, I expand in the vicinity of the point where I see a singularity take place. And I'll write the answer as q squared plus 4 delta t over tc.
If I am sufficiently close in my integration to the origin so that the expansion in q is acceptable, there is an additional factor of tc. But if I take the log of tc, it's just a constant. I can integrate that out. It's not going to contribute to the singular part, just an additional regular component. Now if I am in the vicinity of q equals to 0, where all of the action is, at this order the thing that I'm integrating has circular symmetry. So for the 2-dimensional integration, I can write the integral d qx d qy as 2 pi q dq divided by 2 pi squared, which was the density of states. And this approximation of the thing being isotropic and circular only holds when I'm sufficiently close to the origin, let's say up to some value that I will call lambda. So I will impose some cut-off here, lambda, which is certainly less than something of the order of pi-- let's say pi over 10. It doesn't matter what it is. As we will see, for the singular part it ultimately does not matter what I put for lambda. But the rest of the integration that I haven't explicitly written down from all of this-- again, in analogy to what we had seen before for the Landau-Ginzburg calculation-- will give me something that is perfectly analytic. So I have extracted from this expression the singular part. OK, now let's do this integral carefully. So what do I have? I have 1 over 2 pi with minus 1/2, so I have minus 1 over 4 pi. If I call this whole object here x-- so x is q squared plus this something-- then we can see that dx is 2 q dq. So what I have to do is the integral of dx log x, which is x log x minus x. So essentially, let's keep the 2 up there and make this 8 pi. And then what I have is x log x minus x itself, which I can write as x log of x over e. And then this whole thing has to be evaluated between the two limits of integration, 0 and lambda.
Now you explicitly see that if I substitute in this expression for q the upper limit lambda, it will give me something like lambda squared plus delta t-- an expandable and analytic function. Log of a constant plus delta t that I can start analytically expanding. So anything that I get from the upper cut-off of the integration is perfectly analytic. I don't have to worry about it. If I'm interested in the singular part, I basically need to evaluate this at its lower cut-off. So I evaluate it at q equals to 0. Well, first of all, I will get a sign change because I'm at the lower cut-off. I will get from here 4 delta t over tc. And from here, I will get the log of a bunch of constants-- it doesn't really matter-- 4 over e delta t over tc. What is the leading singularity? It is delta t log of delta t. And so again, the leading singularity is delta t log of delta t, with an overall factor of 1 over pi. You take two derivatives. You find that the heat capacity, let's say, is proportional to two derivatives of log z with respect to delta t. You take one derivative, it goes like the log. You take another derivative of the log, and you find that the singularity is 1 over delta t. That corresponds to a heat capacity divergence with an exponent of unity, which is quite consistent with the general Gaussian formula that we had in d dimensions, which was alpha equals 2 minus d over 2. So that's the Gaussian. And of course, this whole theory breaks down for t that is greater than tc. Once I go beyond tc, my expressions just don't make sense. I can't integrate the log of a negative number. And we understand why that is. That's because we are including all of these loops that can go over each other multiple times. The whole theory does not make sense. So we did this one to death. Will the exact result be any different? So let's carry out the corresponding procedure for A star, which is a function of q and t.
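The delta t log delta t form just derived can be checked numerically: the closed-form antiderivative gives the singular free energy, and its numerical second derivative should grow like 1 over delta t, i.e. halving delta t should roughly double it. A small sketch, where the cutoff value and step sizes are arbitrary choices of mine:

```python
from math import log, pi

tc = 0.25
lam = 0.5  # momentum cutoff; the singularity does not depend on it

def f_sing(dt):
    # -(1/8 pi) [x log(x/e)] between x = 4 dt/tc and x = lam^2 + 4 dt/tc
    F = lambda x: x * log(x) - x
    a = 4 * dt / tc
    return -(1 / (8 * pi)) * (F(lam ** 2 + a) - F(a))

def curvature(dt, h=1e-6):
    # numerical second derivative of the singular free energy
    return (f_sing(dt + h) - 2 * f_sing(dt) + f_sing(dt - h)) / h ** 2

# exponent alpha = 1: the curvature diverges like 1/dt,
# so halving dt roughly doubles it (up to cutoff corrections)
r = curvature(5e-4) / curvature(1e-3)
assert 1.9 < r < 2.2
```

The ratio approaches 2 only as delta t shrinks, since the analytic contribution from the cutoff is still mixed in at finite delta t; that is exactly the "regular part" dismissed in the derivation.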
And again, singularities should come from the place where this is most likely to go to 0. You can see it's 1 something minus something. And clearly when the q's are close to 0 is when you subtract most, and you're likely to become negative. So let's expand it around 0. This, as q goes to 0, is 1 plus t squared, squared. And then I have minus-- each one of the cosines starts at unity, so I will have 4t times 1 minus t squared. And then from the qx squared over 2 and qy squared over 2, I will get a plus t times 1 minus t squared, times q squared-- qx squared plus qy squared-- plus order of q to the fourth. So the way that we identified the location of the singular part before was to focus on exactly q equals to 0. And what we find is that A star at q equals to 0 is essentially this part. This part I'm going to rewrite, the first term slightly. 1 plus t squared, squared, is the same thing as 1 minus t squared, squared, plus 4t squared. The difference between the expansions of these two terms is that this has plus 2t squared, this has minus 2t squared, which I have added back here. And the reason that I did that is that you can now see that this term is twice this times this when I take the square. So the whole thing is the same thing as 1 minus t squared minus 2t, the whole thing squared. So the first thing that gives us reassurance happens now: whereas previously, for the Gaussian model, 1 minus 4t could be both positive and negative, this you can see is always positive. So there is no problem with me not being able to go from one side of the phase transition to the other side of the phase transition. This expression will encounter no difficulties. But there is a special point where this thing is 0. So there is a point where 1 minus 2 tc minus tc squared is equal to 0. The whole thing goes to 0. And you can figure out where that is. It's a quadratic form. It has two solutions, and a negative solution is not acceptable.
I have to recast this slightly. Knowing the answer sometimes makes you go too fast. tc squared plus 2 tc minus 1 equals 0. tc is minus 1 plus or minus square root of 2. The minus solution is not acceptable. The plus solution would give me root 2 minus 1. Just to remind you, we calculated a value for the critical point based on duality. So let's just recap that duality argument. We saw that the series that we had calculated, which was a high-temperature expansion in tanh k, reproduced the expansion that we had for low temperatures, including islands of minus in a sea of plus, where the contribution of each bond was going from e to the k to e to the minus k. So there was a correspondence between a dual coupling and the actual coupling, which was like this. At the critical point, we said, the two of them have to be the same. And what we had calculated based on that-- since the hyperbolic tangent I can write in terms of the exponentials-- was that the value of e to the plus 2 kc was, in fact, square root of 2 plus 1. And the inverse of this, e to the minus 2 kc, will be square root of 2 minus 1, and that's the same as the tangent. That's the same thing as what we have already written. So the calculation that we had done before-- obtaining this critical temperature based on this Kramers-Wannier duality-- gave us a critical point here, which is precisely the place that we can identify as the origin of the singularity in this expression. So what did we do next? The next thing that we did, having identified up there where the critical point was-- which was at 1/4-- was to expand our A, the integrand inside the log, in the vicinity of that point. So what I want to do is to similarly expand A star of q for t that goes into the vicinity of tc. And what do I have to do? So what I can do, let's write it as t is tc plus delta t and make an expansion for delta t small. So I have to make an expansion for delta t small, first of all, of this quantity.
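The location of this zero, the algebraic identity behind it, and its agreement with the duality value can all be verified in a few lines (Kc here denotes the coupling at which e to the 2 Kc equals root 2 plus 1, as in the lecture):

```python
from math import sqrt, tanh, log, exp

tc = sqrt(2) - 1

# root of the quadratic tc^2 + 2 tc - 1 = 0 coming from A*(q=0) = 0
assert abs(tc ** 2 + 2 * tc - 1) < 1e-12

# the identity used on the board: (1+t^2)^2 - 4t(1-t^2) = (1-2t-t^2)^2
for t in (0.1, 0.3, tc, 0.7):
    lhs = (1 + t * t) ** 2 - 4 * t * (1 - t * t)
    rhs = (1 - 2 * t - t * t) ** 2
    assert abs(lhs - rhs) < 1e-12

# Kramers-Wannier self-dual point: e^{2 Kc} = sqrt(2) + 1, and there
# tanh(Kc) = e^{-2 Kc} = sqrt(2) - 1, matching tc
Kc = 0.5 * log(sqrt(2) + 1)
assert abs(tanh(Kc) - tc) < 1e-12
assert abs(exp(-2 * Kc) - tc) < 1e-12
```

So the zero of the exact integrand at q = 0 lands exactly on the self-dual coupling, as the lecture asserts.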
So if I make a small change in t, I'll have to take a derivative inside here. So I have minus 2t minus 2, evaluated at tc, times the change delta t. So that's the change of this expression if I go slightly away from the point where it is 0. Yes? AUDIENCE: I have a question. t here is the tangent of k, whereas we calculate the derivatives with respect to temperature. PROFESSOR: Now all of these things are analytic functions of each other. So the k is, in fact, some unit of energy divided by kT. So really, the temperature is here. And my t is the tangent of the above object, so it's tanh of J divided by kT. So the point is that whenever I look at the delta in temperature, I can translate that delta in temperature to a delta in t times the value of the derivative at the location of the critical point, which is some finite number. Basically, up to some constant, taking derivatives with respect to temperature, with respect to k, with respect to tanh k, with respect to beta, evaluated at the finite temperature which is the location of the critical point-- they're all the same up to proportionality constants, and that's why I wrote "proportionality" there. One thing that you have to make sure of-- and I actually spent half an hour this morning checking-- is that the signs work out fine. So I didn't want to write an expression for the heat capacity that was negative. So proportionalities aside, that's the one thing that I'd better have, that the sign of the heat capacity is positive. So that's the expansion of the term that corresponds to q equals to 0. For the term that was proportional to q squared, look at what we did in the above expression for the Gaussian model. Since it was already lower order, proportional to q squared, we evaluated it at exactly t equals to tc. So I will put here tc times 1 minus tc squared, times q squared, and then higher orders. Now fortunately, we have a value for tc that we can substitute in a couple of places here.
You can see that this is, in fact, minus 2 times tc plus 1, and tc plus 1 is root 2. So this is minus 2 root 2. I square that, and this whole thing becomes 8 delta t squared. This object here, 1 minus tc squared-- you can see, if I put the 2 tc on the other side, that 1 minus tc squared is the same thing as 2 tc. So I can write the whole thing here as 2 tc squared q squared. And the reason I do that is that, like before, there's an overall factor that I can take out of this parenthesis. And the answer will be q squared plus 4 times delta t over tc, quantity squared. It's very similar to what we had before, except that where we had q squared plus 4 delta t over tc, we now have q squared plus 4 times delta t over tc, squared. Now this square was very important for allowing us to go to both positive and negative, but let's see its consequence for the singularity. So now log z of the correct form divided by n-- the singular part-- we calculate just as before. First of all, rather than minus 1/2 I have a plus 1/2. I have the same integral, which in the vicinity of the origin is symmetric, so I will write it as 2 pi q dq divided by 4 pi squared. And then I have the log of q squared plus 4 delta t over tc squared-- I will forget about the overall factor, for consideration of singularities. And now again, it is exactly the same structure as the x dx that I had before. So it's the same integral. And what you will find is that I evaluate it as 1 over 8 pi times-- essentially, q squared plus 4 delta t over tc squared, times the log of q squared plus 4 delta t over tc squared, over e-- evaluated between 0 and lambda. And the only singularity comes from the evaluation that we have at the origin, from which I will get a factor of minus. So I will get 1 over 8 pi. Then I substitute this factor: the 4 and the 8 will give me 2. And then I evaluate the log of delta t squared, so that's another factor of 2. So actually, only one factor of pi survives. I will have delta t over tc, squared, times the log of, let's say, the absolute value of delta t over tc.
So the only thing that changed was that whereas before I had a linear term sitting in front of the log, now I have a quadratic term. But now when I take two derivatives-- and now we are sure that it does not really matter whether I'm taking derivatives with respect to temperature or delta t or any other variable-- you can see that the leading behavior will come from taking two derivatives out here, and will be proportional to the log. So I will get minus 1 over pi log of delta t over tc. So if I were to plot the heat capacity of the system as a function of, let's say, this parameter t-- which is also something that stands for temperature-- t goes between, say, 0 and 1. It's a hyperbolic tangent. There's a location which is this tc, which is root 2 minus 1. And the singular part of the heat capacity-- there will be some other part of the heat capacity that is regular-- but the singular part, we see, has a logarithmic divergence. And furthermore, you can see that the amplitudes-- so essentially this goes, approaching from the two sides of the transition, as A plus or A minus times the log of the absolute value of delta t. And the ratio of the amplitudes, which we have also said is universal, is equal to 1. And you could have anticipated that based on duality. All right, so indeed, there is a different behavior between the two models. The exact solution allows us to go both above and below, and has this logarithmic singularity. And this expression was first written down-- well, first published by Onsager in 1944. Even a couple of years before that, he had written the expression on boards at various conferences, saying that this is the answer, but he didn't publish the paper. The way that he did it is based on the transfer matrix method, as we said. Basically, we can imagine that we have a lattice that is, let's say, L parallel in one direction, L perpendicular in the other direction.
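Both features, the logarithmic growth and the equal amplitudes on the two sides, can be checked numerically from the delta t squared log form (overall constants and signs aside; the prefactor 1 over pi and the value of tc are taken from the expression above):

```python
from math import log, pi, sqrt

tc = sqrt(2) - 1

def f_sing(dt):
    # singular part of log Z / N: (dt/tc)^2 log|dt/tc| / pi, up to regular terms
    x = dt / tc
    return (x * x / pi) * log(abs(x))

def d2(dt, h=1e-5):
    # numerical second derivative, proportional to the heat capacity
    return (f_sing(dt + h) - 2 * f_sing(dt) + f_sing(dt - h)) / h ** 2

# reducing |dt| by a factor of 10 shifts the curvature by (2/(pi tc^2)) log 10
shift = d2(1e-2) - d2(1e-3)
assert abs(shift - (2 / (pi * tc ** 2)) * log(10)) < 0.05

# amplitude ratio A+/A- = 1: the divergence is symmetric under dt -> -dt
assert abs(d2(1e-3) - d2(-1e-3)) < 1e-6
```

Analytically, the second derivative of x squared log of the absolute value of x is 2 log of the absolute value of x plus 3, so each decade closer to tc adds a fixed amount to the heat capacity rather than multiplying it: that is the logarithmic divergence, identical on both sides.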
And then, as in the problems that you had to do-- the transfer matrix for the one-dimensional model-- it is easy to do for a ladder, where it's a 4 by 4 matrix. For this, it becomes a 2 to the l by 2 to the l matrix. And of course, you are interested in the limit where l goes to infinity so that you become 2-dimensional. And so he was able to look at the structure of this matrix, recognize that the elements of this matrix could be represented in terms of other matrices that had some interesting algebra. And then he eventually could figure out what the diagonalization looked like in general for arbitrary l, and then calculate log z in terms of the log of the largest eigenvalue-- I guess we have to multiply by l parallel-- and showed that, indeed, it corresponds to this and has this phase transition. And before this solution, people were not even sure that when you sum an expression such as that for the partition function, you ever get a singularity, because, again, on the face of it, it's basically a sum of exponential functions. Each one of them is perfectly analytic, and a sum of analytic functions is supposed to be analytic. The whole key lies in the limit of taking, say, l to infinity and n to infinity, and then you'll be able to see these kinds of singularities. And then again, some people thought that the only types of singularities that you would be able to get are the kinds of things that we saw [INAUDIBLE] point. So to see a different type of singularity, with a heat capacity that was actually divergent and could explicitly be shown through mathematics, was quite an interesting revelation. A measure of the importance of that is that after the war-- you can see this is all around the time of World War II-- Casimir wrote to Pauli saying, I have been away from thinking about physics the past few years with the war and all of that. Anything interesting happening in theoretical physics?
And Pauli responded, well, not much, except that Onsager solved the 2-dimensional Ising model. Of course, the solution that he has is quite obscure, and I don't think many people understood it. The form in which the solution that people now refer to was presented is actually kind of interesting, because Onsager's paper is titled something about crystal statistics, and then there's a following paper by a different author-- Bruria Kaufman, 1949-- that has this same title, except it goes to number two or something. So they were clearly talking to each other. But what she was able to show, Bruria Kaufman, was that the structure of these matrices can be simplified much further and can be made to look like spinors that are familiar from other branches of physics. And so this roughly 50-page paper was reduced to something like a 20-page paper. And that's the solution that is reproduced in Huang's book. Chapter 15 of Huang has essentially a reproduction of this. I was looking at this because there aren't really that many women mathematical physicists, so I was looking at her history, and she's quite an unusual person. So it turns out that for a while, she was a mathematical assistant to Albert Einstein. She was first married to one of the most well known linguists of the 20th century. And for a while, they were both in Israel in a kibbutz, where this important linguist was acting as a chauffeur and driving people around. And then later in life, she was briefly married to Willis Lamb, of Lamb shift fame. She had done some calculation that, if Lamb had paid attention to it, he would have also potentially won a Nobel Prize for the Mossbauer effect, but at that time he didn't pay attention to it, so somebody else got there first. So a very interesting person. To my mind, a good project for somebody is to write a biography of this person. It doesn't seem to exist. OK, so then, both of these are based on this transfer matrix method.
The method that I have given you, which is the graphical solution, was first presented by Kac and Ward in 1952, and it is reproduced in Feynman's book. So Feynman apparently also had one of the crucial steps, the conjecture that this factor of minus 1 to the power of the number of crossings gives you the correct factor to do the counting. Now it turns out that I also did not prove that statement, so there is a missing mathematical link to make my proof of this expression complete. And that was provided by a mathematician called Sherman, in 1960, who essentially showed very rigorously that these factors of minus 1 to the number of crossings work out and magically make everything happen. Now the question to ask is the following. We expect things to be exactly solvable when they are trivial in some sense. The Gaussian model is exactly solvable because there is no interaction among the modes. So why is it that the 2-dimensional Ising model is solvable? And one of the keys to that is a realization of another way of looking at the problem, which appeared in a paper by Lieb, Mattis, and Schultz in 1964. And so basically what they said is, let's take a look at these pictures that I have been drawing for the graphs. So I have graphs that, on a kind of coarse level, look something like this, maybe. And what they said was that if we look at the transfer matrix-- the one that Onsager and Bruria Kaufman were looking at-- the reason it was solvable was that it looked very much like you had a system of fermions. And then the insight is that if we look at these pictures, we can regard this as a 1-dimensional system of fermions that is evolving in time. And what you are looking at are the world-line histories of particles that are propagating. Here they annihilate each other. Here they annihilate each other. Another pair gets created, et cetera.
But in one dimension, you can regard fermions in two ways-- either they cannot occupy the same site, or you can say, well, let them occupy the same site, but then introduce these factors of minus 1 for the exchange of fermions. So when two fermions cross each other in one dimension, their positions have been exchanged, so you have to put a minus 1 for the crossing. And then when you sum over all histories, for every crossing there will be a history where the particles touch and go away, and the sum total of the two of them is 0. So the point is that at the end of the day, this theory is a theory of free fermions. We have not, in fact, solved a complicated interacting problem. It looks like one, but in the right perspective it is a bunch of fermions that pass through each other, completely non-interacting, as long as we're willing to put in the minus 1 phase that you have for crossings. One last aspect to think about that you learn from this fermionic perspective. Look at this expression-- why does it look like a Gaussian model? We saw that we could get Z Gaussian by doing an integral essentially over weights of continuous variables phi i. And the weight I could put here is something like the sum over ij of K ij phi i phi j-- in principle, if I only want to count each interaction once but the unrestricted sum counts it twice, I put a factor of 1/2 and allow i and j to be summed independently-- plus on-site terms like phi i squared over 2. So you can see that Z of the Gaussian ultimately always becomes 1 over the square root of the determinant of whatever the quadratic form is up there. And when you take the log of the partition function, the square root of the determinant becomes minus 1/2 of the log of the determinant, which is what we've been calculating. So that's obvious. You see that this object over here that you write as an answer is also the log of some determinant.
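The statement that the Gaussian partition function is one over the square root of the determinant of the quadratic form can be checked numerically. The following is my own sanity-check sketch (not from the lecture), comparing a brute-force 2-variable Gaussian integral against the closed form Z = (2 pi)^{n/2} / sqrt(det K):

```python
import math

# Quadratic form K = [[a, b], [b, c]], positive definite
a, b, c = 2.0, 0.5, 1.5

# Z = integral dphi1 dphi2 exp(-(1/2) phi^T K phi), by midpoint-rule grid integration
L, N = 8.0, 400          # integration box [-L, L] and grid points per axis
h = 2 * L / N
Z_numeric = 0.0
for i in range(N):
    x = -L + (i + 0.5) * h
    for j in range(N):
        y = -L + (j + 0.5) * h
        S = 0.5 * (a * x * x + 2 * b * x * y + c * y * y)
        Z_numeric += math.exp(-S) * h * h

# Closed form: Z = (2 pi)^{n/2} / sqrt(det K), with n = 2 here
det_K = a * c - b * b
Z_exact = (2 * math.pi) / math.sqrt(det_K)

print(Z_numeric, Z_exact)   # the two agree to high accuracy
```

Note the determinant sits in the denominator, which is exactly the mismatch with the Ising answer discussed next.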
So can I think of these as some kind of a walk that is prescribed according to these rules on a lattice, give weights to jumps according to what I have, then do a kind of Gaussian integration and get the same answer? Well, the difficulty is precisely this 1/2 versus minus 1/2, because when you do the Gaussian integration, you get the determinant in the denominator. So is there a trick to get the determinant in the numerator? And the answer is that people who do path integral formulations for fermionic systems rely on coherent states that involve anti-commuting variables, called Grassmann variables. And the very interesting thing about Grassmann variables is that if I do the analog of the Gaussian integration, the answer-- rather than the determinant being in the denominator-- has it go into the numerator. And so one can, in fact, rewrite this partition function, sort of working backward, in terms of an integration over Gaussian distributed Grassmann variables on the lattice, which is also equivalent to another way of thinking about fermions. Let's see. What else is known about this model? So I said that the specific heat singularity is known, so we have this alpha which is 0-- a log. Given that the structure that we have involves, inside the log, something that is like this, you won't be surprised that if I think in terms of a correlation length-- so typically q would be an inverse correlation length, some kind of a q at which I will be reaching the appropriate saturation for a given delta t-- I will arrive at a correlation length that diverges as delta t to the minus 1. Again, I can write it more precisely as B plus or B minus times delta t to the minus nu plus or minus. The ratio of the B's is 1, and the nus are the same and equal to 1. So the correlation length diverges with an exponent 1, and one can [INAUDIBLE] this exactly.
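To make the "determinant in the numerator" statement concrete, here is a toy Berezin integral in my own notation (not the lecture's): a tiny Grassmann algebra for two fermion pairs, showing that the Gaussian Grassmann integral of exp(-theta-bar A theta) produces det A rather than 1/sqrt(det A). Sign conventions follow one common Berezin choice.

```python
def gmul(p, q):
    """Multiply two Grassmann elements, each a dict {tuple_of_generators: coeff}."""
    out = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            if set(m1) & set(m2):
                continue                  # theta^2 = 0: a repeated generator kills the term
            seq = list(m1) + list(m2)
            # sign from sorting by adjacent transpositions (anticommutation)
            sign, arr = 1, seq[:]
            for i in range(len(arr)):
                for j in range(len(arr) - 1 - i):
                    if arr[j] > arr[j + 1]:
                        arr[j], arr[j + 1] = arr[j + 1], arr[j]
                        sign = -sign
            key = tuple(arr)
            out[key] = out.get(key, 0) + sign * c1 * c2
    return out

def gadd(p, q):
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, 0) + c
    return out

# Generators: theta_1 -> 0, thetabar_1 -> 1, theta_2 -> 2, thetabar_2 -> 3
def theta(i):    return {(2 * i,): 1}
def thetabar(i): return {(2 * i + 1,): 1}

A = [[3, 1], [2, 5]]          # an arbitrary 2x2 coupling matrix

# Action S = sum_ij thetabar_i A_ij theta_j
S = {}
for i in range(2):
    for j in range(2):
        S = gadd(S, gmul({(): A[i][j]}, gmul(thetabar(i), theta(j))))

# exp(-S) truncates exactly: S is nilpotent, so 1 - S + S^2/2 is the whole series
minusS = {m: -c for m, c in S.items()}
S2 = gmul(minusS, minusS)
expS = gadd(gadd({(): 1}, minusS), {m: c // 2 for m, c in S2.items()})  # // is exact: terms pair up

# The Berezin integral picks out the coefficient of the top monomial
top = expS.get((0, 1, 2, 3), 0)
print(top)    # equals det A = 3*5 - 1*2 = 13
```

The top-monomial coefficient reproduces det A, which is precisely why the Ising loop sum, rewritten in Grassmann variables, lands the determinant in the numerator.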
One can then also calculate actual correlations at criticality and show that critical correlations decay with the separation between the points that you are looking at as 1 over r to the 1/4. So the exponent that we call eta is 1/4 in 2 dimensions. Once you have correlations, you certainly know that you can calculate the susceptibility as an integral of the correlation function. And so it's going to be an integral of d2r over r to the 1/4 that is cut off at the correlation length. So that's going to give me xi to the power of 2 minus 1/4, which is 7/4. So the susceptibility is going to diverge as delta t to the minus gamma. Gamma is, again, 7/4. So these things-- correlations-- can be calculated like you saw already for the case of the Gaussian model, with an appropriate modification. That is, I have to look at walks that don't come back and close on themselves, but walks that go from one point to another point. So the same types of techniques that are described here will allow you to get all of these other results. AUDIENCE: Question? PROFESSOR: Yes? AUDIENCE: Did people try to do experiments on things that should be [INAUDIBLE] to the Ising model? PROFESSOR: There are by now many experimental realizations of the Ising model in which these exponents-- and actually the next one that I will tell you-- have been confirmed very nicely. There are a number of 2-dimensional adsorbed systems, a number of systems of mixtures in 2 dimensions that phase separate. So there's a huge number of experimental realizations. At that time, no, because we're talking about 70 years ago. So the last thing that I want to mention is, of course, that when you go below Tc-- the critical temperature-- we expect there will be a magnetization. That has always been our signature of symmetry breaking. And so the question is, what is the magnetization?
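The exactly known 2D Ising exponents quoted here hang together through the standard scaling relations. A quick check with exact rational arithmetic (my own aside, using the usual names for the identities):

```python
from fractions import Fraction as F

# Exactly known 2D Ising exponents quoted in the lecture
alpha = F(0)        # specific heat (logarithmic divergence, alpha = 0)
beta  = F(1, 8)     # magnetization
gamma = F(7, 4)     # susceptibility
nu    = F(1)        # correlation length
eta   = F(1, 4)     # critical correlations decay as 1/r^(d-2+eta) with d = 2
d     = 2

# Fisher:     gamma = nu (2 - eta)
# Rushbrooke: alpha + 2 beta + gamma = 2
# Josephson (hyperscaling): d nu = 2 - alpha
print(gamma == nu * (2 - eta),
      alpha + 2 * beta + gamma == 2,
      d * nu == 2 - alpha)    # all three hold exactly
```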
And then this is another interesting story: around 1950, at a couple of conferences, at the end of somebody's talk, Onsager went to the board and said that he and Bruria Kaufman had found this expression for the magnetization of the system at low temperature, as a function of temperature or the coupling constant. But they never wrote the solution down, until in 1952 C. N. Yang actually published a paper that derived this result. And since sinh of 2K goes to 1 at the critical point-- by duality, sinh 2K at the dual point is 1-- this vanishes there with an exponent beta, which is 1/8, which is the other exponent in this series. So as I said, there are many people who since the '50s and '60s devoted their lives to looking at various generalizations and extensions of the Ising model. There are many people who tried to solve it in 2 dimensions with a finite magnetic field. You can't do that. This magnetization is obtained only in the limit of the field going to 0. And clearly, people have thought a lot about doing things in 3 dimensions or higher dimensions, without much success. So this is basically the end of the portion that I had to give on discrete models and lattices. And as of next lecture, we will change our perspective one more time. We'll go back to the continuum, and we will look at n-component models in the low temperature expansion and see what happens over there. Are there any questions? OK, I will give you a preview of what I will be doing in the next few minutes. So we have been looking at these lattice models in the high T limit, where the expansion was graphical, such as the one that I have over there. The partition function turned out to be a product of constants times 1 plus something that involves loops and things that involve multiple loops, et cetera. This was for the Ising model. It turns out that I can go from the Ising model to n-component spins.
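The Onsager-Yang result can be written as m = [1 - sinh^{-4}(2K)]^{1/8} for K above the critical coupling, with sinh(2 K_c) = 1 by duality. A short numerical aside of mine, extracting the exponent beta = 1/8 from the log-log slope of m just above K_c:

```python
import math

Kc = 0.5 * math.asinh(1.0)            # duality: sinh(2 Kc) = 1, i.e. Kc = ln(1 + sqrt(2)) / 2

def m(K):
    """Onsager-Yang spontaneous magnetization; zero for K <= Kc."""
    s = math.sinh(2 * K)
    return (1 - s ** -4) ** 0.125 if s > 1 else 0.0

# Effective exponent beta from the log-log slope just above Kc
d1, d2 = 1e-6, 1e-5
slope = (math.log(m(Kc + d2)) - math.log(m(Kc + d1))) / (math.log(d2) - math.log(d1))
print(round(slope, 3))    # 0.125, i.e. beta = 1/8
```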
So at each site of the lattice, I put something that has n components, subject to the condition that it's a unit vector. And I try to calculate the partition function by integrating over all of these unit vectors in n dimensions, with a weight that is just a generalization of the Ising one, except that I would have the dot product of these things. And if I start making the appropriate high temperature expansions for these models, I will generate a very similar series, except that whenever I see a loop, I have to put a factor of n, where n is the number of components. And this we already saw when we were doing Landau-Ginzburg expansions. We saw that the expansions that we had over here could be graphically interpreted as representations of the various terms that we had in the Landau-Ginzburg expansion. And essentially, this factor of n is the one difficulty. You can use the same methods for numerical expansions for all of these n-component models. You can't do anything exactly with them. Now, the low temperature expansion for Ising-like models, we saw, involved starting with some ground state-- for example, all up-- and then including excitations that were islands of minus in a sea of plus, and so on to higher orders in this series. Now, for other discrete models, such as the Potts model, you can use the same procedure. And again, you see that in some of the problems that you've had to solve. But this will not work when we come to these continuous spin models, because for continuous spin models, the ground state would be when everybody is pointing in one direction, but the excitations on this ground state are not islands that are flipped over. They are these long wavelength Goldstone modes, which we described earlier in class. So if we want to make an expansion to look at the low temperature [INAUDIBLE] for systems with n greater than 1, we have to make an expansion involving Goldstone modes and, as we will see, interactions among Goldstone modes.
So at the appropriate level of sophistication, you can no longer regard the Goldstone modes as maintaining independence from each other. And very roughly, the main difference between discrete and these continuous symmetry models is captured as follows. Suppose I have a huge system that is L by L, and I impose boundary conditions on one side where all of the spins point in one direction, and ask what happens if I impose a different condition on the other side. Well, what would happen is you would have a domain boundary. And the cost of that domain boundary will be proportional to the area of the boundary in d dimensions. So the cost is proportional to some energy per bond times L to the d minus 1. Whereas if I try to do the same thing for a continuous spin system-- if I align one side like this and the other side like this-- in between I can gradually change from one direction to another direction. And the cost would be the gradient squared, which is 1 over L squared, integrated over the entire system. So the energy cost of this excitation will be some parameter J times 1 over L-- which is the shift-- squared-- which is the strain squared-- integrated over the entire volume, so it goes as L to the d minus 2. So we can see that these systems are much softer than discrete systems. For discrete systems, thermal fluctuations are sufficient to destroy order at low temperature as soon as this cost is of the order of kT, which for large L can only happen in d of 1 and lower. Whereas for these systems, it happens in d of 2 and lower. So the lower critical dimension for models with continuous symmetry, we already saw, is 2; for discrete models, it is 1. Now, we are going to be interested in this class of models. I have told you that in 2 dimensions, they should not order. So presumably the critical temperature-- if I regard it as a continuous function of dimension-- will go to 0 as d minus 2.
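The two energy costs contrasted here can be checked on finite systems. A sketch of my own (unit couplings, twist imposed along one axis): the discrete domain wall costs J L^{d-1}, while a gradual continuous twist of total angle pi spread over L steps costs roughly J (pi/L)^2 per bond on L^d bonds, i.e. of order L^{d-2}.

```python
import math

def ising_wall_cost(L, d, J=1.0):
    """Flipping half the system costs J per broken bond across a (d-1)-dimensional wall."""
    return J * L ** (d - 1)

def xy_twist_cost(L, d, J=1.0):
    """Gradual twist: an angle step of pi/L on each of the L^d bonds along the twist axis."""
    return L ** d * J * (1.0 - math.cos(math.pi / L))

# d = 3: the wall cost grows like L^2, the twist cost only like L^1
for L in (8, 16, 32):
    print(L, ising_wall_cost(L, 3), round(xy_twist_cost(L, 3), 3))

# Doubling L multiplies the twist cost by roughly 2^(d-2)
r = xy_twist_cost(64, 3) / xy_twist_cost(32, 3)
print(round(r, 3))   # close to 2
```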
So the insight of Polyakov was that maybe we can look at the interactions of these Goldstone modes and do a systematic low temperature expansion that reaches the phase transition at a critical point, systematically in d minus 2. And that's what we will attempt in the future. |
MIT_8334_Statistical_Mechanics_II_Spring_2014 | 12_Perturbative_Renormalization_Group_Part_4.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So today, I'd like to wrap together and summarize everything that we have been doing over the last 10, 12 lectures. So the idea started by saying that you take something like a magnet and you change its temperature. You go from one phase that is a paramagnet, at some critical temperature Tc, to some other phase that is a ferromagnet. Naturally, the direction of magnetization depends on whether you put on a magnetic field and took it to 0. So there are a lot of these transitions that involve ferromagnets. There were a set of other transitions that involved, for example, superfluids or superconductivity. And the most common example of a phase transition is liquid-gas, which has a coexistence line that also terminates at a critical point. So you have to turn it around a little bit to get a coexistence line like this. Fine. So there are phase transitions. The interesting thing was that when people did successively better and better experiments, they found that the singularities in the vicinity of these critical points are universal. That is, it doesn't matter whether you have iron, nickel, or some other thing that is undergoing ferromagnetism. You can characterize the divergence of the heat capacity through an exponent alpha, which for ferromagnets is minus 0.12. For superfluids, we said that people even take things on satellites to measure this exponent to much higher accuracy than I have indicated. For the case of liquid-gas, there is a true divergence, and the exponent is around 0.11. There are a whole set of other exponents that I also mentioned.
There is the exponent beta for how the magnetization vanishes. And the values here were 0.37, 0.35, 0.33. There is the divergence of the susceptibility, gamma, that is characterized through exponents that are 1.39, 1.32, 1.24. And there is a divergence of the correlation length characterized by exponents [INAUDIBLE] 0.71, 0.67, 0.63. So there is this table of pure numbers that don't depend on the properties of the material that you are looking at. And the fact that you don't have this material dependence suggests that these pure numbers are some characteristic of the collective behavior that gives rise to what's happening at this critical point. And we should be able to devise some kind of a theory to understand that, and maybe extract these nice, pure numbers, which are certainly embedded in the physics of the problem that we are looking at. So the first idea that we explored, conceptually due probably to Landau, and probably others, is that we should construct the statistical field. That is, what is happening is irrespective of whether we are dealing with nickel or iron, et cetera. So the properties of the microscopic elements should disappear, and we should be focusing on the quantity that is undergoing a phase transition. And that quantity, we said, is some kind of a magnetization. And what distinguishes the different systems is that for a ferromagnet, it's certainly a three-component quantity, but in general-- for a superfluid, we would have a phase, and that's a two-component object-- so we introduced this parameter n that characterized the symmetry of the order parameter. And in the same way, we said let's look at things that are embedded in space that is, in general, d dimensional. So our specification of the statistical field was on the basis of these things. And the idea of Landau was to construct a probability for the configurations of this field across space.
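The quoted experimental exponents can be collected into a small table and checked against the standard scaling relations (named below; the superfluid row is omitted because its alpha was not quoted with the others). A sketch of my own:

```python
# (alpha, beta, gamma, nu) as quoted: ferromagnets (n=3) and liquid-gas (n=1)
data = {
    "ferromagnet (n=3)": (-0.12, 0.37, 1.39, 0.71),
    "liquid-gas  (n=1)": ( 0.11, 0.33, 1.24, 0.63),
}
d = 3
for name, (a, b, g, nu) in data.items():
    rushbrooke = a + 2 * b + g          # Rushbrooke identity: should be 2
    josephson  = d * nu                 # Josephson (hyperscaling): should be 2 - alpha
    print(name, round(rushbrooke, 2), round(josephson, 2), round(2 - a, 2))
```

Both rows satisfy the identities to within the precision of the quoted numbers, which is the consistency alluded to later in the lecture.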
Once we had that, we could calculate a partition function, let's say, by integrating over all configurations of this field with some kind of a weight. And we constructed the weight. We wrote something: beta H was an integral d dx, and then we put a whole bunch of things in here. We said we could have something like t over 2 m squared. We had gradient of m squared. Potentially, we could have higher derivatives, staying at the order of m squared. And so this list of things that I could put that are all order of m squared is quite extensive. Then I could have things that are fourth order, like u m to the fourth. And we saw, when we performed this renormalization group last time around, that a term that we typically had not paid attention to was generated-- something that was, again, order of m to the fourth, but had a structure maybe like m squared gradient of m squared, or some other form of the two-derivative operator. This is OK. There could be other types of things that have four m's in them. And you could have something that is m to the sixth, m to the eighth, et cetera. So the idea of Landau was to include all kinds of terms that you can put. But actually, we have already constrained the terms. So the idea of Landau is all terms consistent with some constraints that you put. What are the constraints that we put? We put locality, in that we wrote this as an integral over x. We considered symmetry-- so if I am in the 0 field limit, I only have terms that are proportional to m squared, rotationally symmetric. And there is something else that is implicit, which is analyticity. What do I mean? I mean that there is, in some sense here, a space of parameters composed of all of these coefficients-- t, k, u, v. And these are supposed to represent what is happening to my system that I have in mind as I change the temperature. And so in principle, if I do some averaging procedure and arrive at this description, all of these parameters presumably will be functions of temperature.
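Collecting the terms just listed, the weight has the schematic form (standard Landau-Ginzburg notation; the exact set of coefficients shown is illustrative):

```latex
\beta\mathcal{H} = \int d^d\mathbf{x}\left[\frac{t}{2}\,m^2 + \frac{K}{2}(\nabla m)^2 + \frac{L}{2}(\nabla^2 m)^2 + \cdots + u\,m^4 + v\,m^2(\nabla m)^2 + \cdots + u_6\,m^6 + \cdots\right]
```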
And the statement is that the process of coarse graining the degrees of freedom and averaging to arrive at this description, and at the corresponding parameters, involves a finite number of degrees of freedom. And adding and integrating finite numbers of degrees of freedom can only lead to analytic functions. So the statement here is that this set of parameters are analytic functions. So given this construction by Landau, we should be able to figure out, if this is correct, what is happening and where these numbers come from. So what did we attempt? The first thing that we attempted was to do a saddle point. And we saw that just looking at the most probable state fails, because fluctuations are important. We tried to break the Hamiltonian into a part that was quadratic and Gaussian-- and we could calculate everything about it-- and then treat everything else as a perturbation. And when we attempted to do that, we found that perturbation theory fails below four dimensions. So at this stage, it was kind of an impasse, in that as far as physics is concerned, we feel that this thing captures all of the properties that you need in order to somehow be able to explain those phenomena. Yet we don't have the mathematical power to carry out the integrations that are implicit in this. So then the idea was, can we go around it somehow? And so the next set of things that we introduced were basically versions of scaling. And quite a few statistical physicists were involved with that-- names such as [INAUDIBLE], Fisher, and a number of others. And the idea is that if we also consider, let's say, introducing a magnetic field direction here and look at the singularities in the plane that involves those two variables-- which is necessary to also characterize some of these other exponents, such as gamma-- then you have a singular part of the free energy that is a function of how far you go away from Tc (so this t now stands for T minus Tc) and how far you go along the direction that breaks symmetry.
And the statement was that all of the results were consistent with a form that depended on really two exponents, which could be bundled together into the behavior of the singular part of the free energy or the singular part of the correlation function. And essentially, this approach immediately leads to exponent identities. And with these exponent identities, we can go back and look at the table of numbers that we have up there, and we see that they are correct and valid. But well, what are the two primary exponents? How can we obtain them? Well, going and looking at this scaling behavior a little bit further, one could trace it back to some kind of a self-similarity that should exist right at the critical point. That is, the correlation functions, et cetera, at the critical point should have this kind of scale invariance. And then the question is, can we somehow manage to use that property-- that looking at things at different scales at the critical point should give you the same thing-- to divine what these exponents are? So the next stage in this progression was the work of Kadanoff in introducing the idea of RG. And the idea of RG was to basically average things further. Implicit in the calculation that we had was some kind of a short distance cutoff a. And if we average between a and ba, then presumably these parameters mu would change to something else-- mu prime-- that corresponds to rescaling by a factor of b. And these mu primes would be a function of the original set of parameters mu. And then Kadanoff's idea was that the scale-invariant points would correspond to the points where you have no change, and that if you then deviated from that point, you would have some characteristic scale in the problem. And you could capture what was happening by looking at essentially how the changes, delta mu prime, were related to the changes delta mu-- so essentially, linearizing these relationships. So there would be some kind of a linearized transformation.
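The two-exponent homogeneous form referred to here is usually written, in standard notation with a gap exponent Delta, as:

```latex
f_{\text{sing}}(t, h) = |t|^{2-\alpha}\, g_f\!\left(\frac{h}{|t|^{\Delta}}\right),
\qquad\Rightarrow\qquad
\beta = 2 - \alpha - \Delta,\quad
\gamma = \Delta - \beta = 2\Delta - 2 + \alpha,
```

from which, for example, the Rushbrooke identity follows immediately: alpha + 2 beta + gamma = alpha + 2(2 - alpha - Delta) + (2 Delta - 2 + alpha) = 2.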
And then the eigenvalues of this transformation would determine how many relevant quantities you should have. Now, the physics-- the entire physics of the process-- then comes into play here. The experiments tell us that you can, let's say, take superfluid, and it has this phase transition. You change the pressure of it, it still has that phase transition-- slightly different temperature, but it's the same phase transition. You can add some impurities to it, as you did in one of the problems. You still have the phase transition. So basically, the existence of a phase transition as a function of one parameter that is temperature-like is pretty robust. And if we think about that in the language of fixed points, it means that along the symmetric direction, there should be only one relevant eigenvalue. So this construction that Kadanoff proposed is nice and fine, but one has to demonstrate that, indeed, this infinite number of parameters can be boiled down to a fixed point, and that the fixed point has only one relevant direction. So the next step in this progression was Wilson, who did a perturbative version of this procedure. So the idea was that we can certainly solve beta H0. And beta H0 is really a bunch of Gaussian independent modes, as long as we look at things in Fourier space. So in Fourier space, we have a bunch of modes that exist over some Brillouin zone. And as long as we are looking at some set of wavelengths, and no fluctuation shorter than that wavelength has been allowed, there is a maximum q here. And the procedure of averaging and increasing this minimum wavelength then corresponds to integrating out modes that are sitting outside lambda over b, and keeping modes, which we call m tilde, that live within 0 to lambda over b. And if we integrate out these modes-- so if I rewrite this integration as an integration over Fourier modes, do this decomposition, et cetera-- what I find after I integrate is the following. So step 1, I do a coarse graining.
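As a concrete, exactly solvable illustration of an RG map mu' = R(mu) of the kind Kadanoff envisioned, decimating every other spin of the 1D Ising chain gives the recursion K' = (1/2) ln cosh(2K). Iterating it (a toy example of my own, not from this lecture) shows the coupling flows to the K = 0 fixed point from any finite start, i.e. no finite-temperature transition in d = 1:

```python
import math

def decimate(K):
    """One decimation RG step (b = 2) for the 1D Ising chain: K' = (1/2) ln cosh(2K)."""
    return 0.5 * math.log(math.cosh(2 * K))

K = 2.0                   # start deep in the strong-coupling (low-temperature) regime
traj = [K]
for _ in range(10):
    K = decimate(K)
    traj.append(K)
print([round(k, 4) for k in traj])   # monotonically flows toward the K = 0 fixed point
```

The only fixed points are K = 0 and K = infinity, and the flow away from K = infinity is what the linearized analysis around a fixed point quantifies in general.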
I find a Hamiltonian that governs the coarse-grained modes. Well, if I integrate out the sigma modes and treat them as Gaussians, then there will be a contribution to the free energy, trivially, from those modes, presumably proportional to volume. Since these modes and those modes don't couple at the Gaussian level, at the Gaussian level we also have beta H0 acting on the modes that I have kept. And the hard part is, of course, the interaction between these types of modes, which is governed by all of these non-linearities that we have over here. And we can formally write that as minus the log of the average of e to the minus u-- u depending on m tilde and sigma-- where I integrate out the sigma modes with the Gaussian weight. So that's formally correct. This is some complicated function of m tilde after I get rid of the sigma variables. But presumably, if I were to expand and write this in powers of m tilde and powers of gradient, it would reproduce the original series, because I said that the original series includes everything that could possibly be generated. This is presumably, after I do all of these things, still consistent with symmetries, and so will generate those kinds of terms. But of course, to evaluate it, we have to do perturbation theory. And so we can start expanding this in powers of u. The first term would be the average of u, assuming that u is a small quantity, then minus 1/2 times the average of u squared minus the square of the average of u, and so forth. And of course, this RG has two other steps. After I have performed this step, I have to do a rescaling, which in Fourier space means I blow up q-- so q I will replace with b inverse q prime-- and a renormalization, which means that in Fourier space I replace m with z m prime. So after I do these procedures, what do I find? I find that, to whatever order I go, I start with some original Hamiltonian that includes all terms consistent with symmetries, and I generate a new log of probability. It's not really a Hamiltonian.
It's a kind of effective free energy-- the log of the probability of these configurations. So now I should be able to read off this transformation of how I go from mu to mu prime. So let's go through this list and do it. So t prime is something that in Fourier space went with one integration over q, so I got b to the minus d. There are two factors of m, so it's a z squared type of contribution. And at the 0th order from here, I have my t. And then, when I did the average, from the average of u I got a contribution that was proportional to u. There was a degeneracy of 4, there were two kinds of diagrams that were contributing to it, and then I had the integral from lambda over b to lambda of d d k over 2 pi to the d of 1 over t plus K k squared. And just to remind you of the kind of diagrams that were contributing to this: one of them was something like this, and the other one was something like this. The one that had a closed loop gave me the factor of n, and the other gave me what was eventually the 8 that I have observed here. So this is what we found at order of u. We went on and calculated at order u squared. And the u squared will give me another contribution. There is a coefficient out here that also similarly involves integrals. The integrals will depend on t. They will depend on K. They will depend on this lambda that I'm integrating over, et cetera. So there is some function here. And we argued last time that we don't need to evaluate it, but let's write it down and make sure that it doesn't contribute. But this is only looking at the effect of this u. And I know that I have all of these other terms. So presumably, I will get something that will be of the order of, let's say, v squared, u v. I will certainly get something that is of the order of u6.
If I think of u6 as something that has six legs associated with it, I can certainly join two of these legs with two of those legs and have something with two legs left over, just as I did in getting from the u4 m to the fourth term to m squared. I certainly can do that. So there are all kinds of higher order terms here that, in principle, we have to keep track of. Now, the next term in the series is the K. Compared to the t-terms, it had two additional factors of gradient. When we do everything, it turns out that it will be b to the minus d minus 2, because of the two gradients that became q squared. It's still second order in m, so I would get this. And then I start with K. Now, the interesting and important thing is that when we do the calculation at order of u, we don't get any correction to K. The only diagrams that could have contributed to K were diagrams of this variety. But for diagrams of this variety, we saw that when I performed those integrals, the result just doesn't depend on q. It only corrected the constant, q-independent term. But that structure will not be preserved. If I go to order of u squared, there will be some kind of a correction that is order of u squared. And we had, in the table of 6 by 6 things, a diagram that gives contributions such as this-- that, in fact, looks something like this. So basically, this is a four-point vertex and a four-point vertex; I join them; I make these kinds of calculations. Now, once you do that, you'll find that the difference between this diagram and, let's say, that diagram is that the momentum that goes in here, q, will have to just go through here, and there is no influence on it from the momentum that I'm integrating. Whereas, if you look at this diagram, you will find that it is possible to have a momentum that goes in here and gets a contribution over here.
And so the calculation, if I were to do it at higher order, will have in the denominator a product of two of these factors, but one of them will explicitly depend on q. And if I expand in powers of q, I will get a correction that will appear here. It will not change our life, as we will see, but it's good to know that it is there. And there will be higher-order corrections here, too. AUDIENCE: Question. PROFESSOR: Yes. AUDIENCE: Are the functions a1 and a2 just labeled as such to make our lives easier, or because they don't have any sort of universality in them? PROFESSOR: Both. They don't carry, at this order in the expansion, any information that we will need, but I would have to show that to you explicitly. So right now, I keep them as placeholders. If, at the end of the day, we find that our answers depend on these quantities, then we have to go back and calculate them. But ultimately, the reason that we won't need them is what I described last time: we will calculate exponents only to lowest order in 4 minus d, which is epsilon, and all of these u's at the fixed point will be of the order of epsilon. So both of these terms are of the order of epsilon squared and will be ignorable at the level of epsilon. But so far I haven't talked about epsilon, so I may as well keep them. And similarly, l prime would be something that goes with q to the fourth if I were to Fourier transform it. So this would be b to the minus d minus 4. It is still z squared. It will be proportional to l. It will have exactly these kinds of corrections also. And I can keep going with the list of all second-order terms. Now, then we got to u prime. u prime was fourth order-- the fourth power of m-- so it carried z to the fourth. It involved three integrations in q space, so it gave me b to the minus 3d. And then, to the lowest order, when I did the expansion from u, one of the terms that I got was the original potential evaluated for m tilde rather than the original m.
So I will always get this term. And then I noticed that when I go and calculate things at second order-- and we explicitly did that-- we got a term that was minus 4 u squared times n plus 8, times the integral from lambda over b to lambda of d d k over 2 pi to the d of 1 over t plus K k squared-- the whole thing squared, since it was a squared propagator that was appearing over here-- and presumably these series also continue. And again, reminding you that these came from diagrams: some of them were like this-- there was a loop that gave us the factor of n-- and then there were things like this one or this one, and those gave out the 8 that appears here. Now, if I had included this term that is proportional to v, presumably I would have gotten corrections that are of order uv. If I go to higher orders, I will certainly get things that are of the order of u6; I will certainly get things that are of the order of u cubed, and so forth. So there is a whole bunch of corrections that, in principle, if I am supposed to include everything and keep track of everything, I should include. Now, v itself-- v prime-- has two additional derivatives with respect to u, so it will be b to the minus 3d minus 2. It's a z to the fourth type of term. It is v, and then it will certainly get corrections at, say, order of uv and so forth. And what else did I write down in the series? I can write as many as we like. u6 prime: this is something that goes with 6 powers of z and will have 5 integrations in q, so it will give me b to the minus 5d. And again, presumably I will have u6, minus corrections of the order of something like u squared, v, and all kinds of things. All right. Yes. AUDIENCE: So in the calculation of terms like k prime and l prime, you have factors like b to the minus d minus 2. Is it minus 2 or plus 2? PROFESSOR: OK. So these came from Fourier transforming this entity. When I Fourier transform, I get an integral d d q of t plus K q squared plus L q to the fourth, et cetera.
And my task is that whenever I see q, I replace it with p inverse q prime. So this would be b to the minus d. This would be b to the minus 2. This would be b to the minus 4. So that's how it comes. AUDIENCE: OK. Thank you. PROFESSOR: Anything else? OK. So then we had to choose what this factor of z is. And we said, let's choose it such that k prime is the same as k. But k prime over k we can see is z squared b to the minus d minus 2. If I divide through by this k, then I will get 1, and then something here which is order of u squared. Now, we will justify later why u in order to be small so that I can make a construction that is perturbative in u will be of the order of epsilon. But in any case, if I want to in some sense keep the lowest order in u, at this order I am justified to get rid of this term. And when I do that to this order, I will find that z is b to the 1 plus d over 2. And probably in principle, corrections that will be of the order of this epsilon to the squared. So I do that choice. Secondly, I'll make my b to be infinitesimal. And so that means that mu prime at scale b, the set of parameters-- each parameter would be basically mu plus a small shift d mu by dl. And then I can recast these jumps by factors of b that I have up there to flow equations. And so what do I get? For the first one, we got dt by dl. And I had chosen z squared to be b to the minus d minus 2. So compared to the original one, it's just two more factors of b. Two more factors of b will give me 2t. And then I will have to deal with that integration evaluated when b is very small, which means that I have to just evaluate it on the shell. So I will have 4u n plus 2 kd lambda to the d divided by t plus k lambda squared plus higher-orders terms in this propagator. And then I will have, presumably, some a1-looking quantity, but evaluated on the shell that depends on u squared, and then I will have higher-order terms. Yes. 
AUDIENCE: I'm kind of curious on why we choose k equals k prime instead of the constant in front of any of the other gradient terms. Why is k equals k prime better than l equals l prime or-- PROFESSOR: OK. We discussed this in the context of the Gaussian model. So what we saw for the Gaussian model is that if I choose l prime to be l, then I will have k prime being b squared k and t prime will be b to the fourth t. So I will have two relevant directions. So I want to have, in some sense, the minimal number of relevant directions, guided by the experimental fact that you do whatever you like and you see the phase transition, except that you have to change one parameter. There could be something else. There could very well-- somebody comes to me later and describes some kind of a phase transition that requires two relevant directions. And the physics of it may guide me to make the other choice. But for the problem that I'm telling you right now, the physics guides me to make this choice. There is no equation for k because we already set its change to be 0. Let's write the equation for u. So for u, z to the fourth becomes b to the 4 plus 2d. And combined with the b to the minus 3d, that becomes 4 minus d times u. And then the next-order term becomes minus 4 u squared n plus 8 kd lambda to the d over t plus k lambda squared and so forth, squared, when I evaluate that integral on the shell. And then I will have higher-order terms. So this is where this idea of making an expansion in dimensions comes into play. Because we want to have these sets of equations somehow under control, we need to have a small parameter in which we are making an expansion. And ultimately, we will be looking at the fixed point. And the fixed point occurs at a u star that is of the order of 4 minus d. Otherwise, there is no small control parameter. So the suggestion, which actually goes to Fisher, was to organize the expansion as a power series in this quantity epsilon. And eventually then, ask what the properties of these series are as a function of epsilon.
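Written out, the two recursion relations assembled on the board so far read (schematically, with K_d lambda^d the surface factor from the momentum-shell integration):

```latex
\frac{dt}{d\ell}=2t+\frac{4u(n+2)\,K_d\lambda^d}{t+K\lambda^2}+\mathcal{O}(u^2),
\qquad
\frac{du}{d\ell}=(4-d)\,u-\frac{4u^2(n+8)\,K_d\lambda^d}{\left(t+K\lambda^2\right)^2}+\mathcal{O}(u^3).
```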
So then I have all those others. I forgot, actually, to write dl by dl. Well, compared to k, it has two additional factors of gradients, which means that it will start with minus 2l. And then we said that it will get corrections that are of the order of u squared, and uv, and such things. dv by dl. I mean, compared to u, it has two more gradients in the construction. So its dimension will be minus 2 plus epsilon. And then we'll get corrections of the order of, presumably, u squared, uv, [INAUDIBLE]. And then, what else did I write? I wrote something about d u6 by dl. For d u6 by dl, I have to substitute for z, b to the 1 plus d over 2, take the sixth power, and subtract 5d. Rewrite d as 4 minus epsilon. Once you do that, you will find that it becomes minus 2 plus 2 epsilon. Let me just make sure that I am not saying something wrong. Yeah, u6 plus order of uv and so forth. So there is this whole set of parameters that are being changed as a function of changing the rescaling by a factor of b that is 1 plus an infinitesimal. So this is the flow of parameters in this space. So then to confirm the ideas of Kadanoff, we have to find the fixed point. And there is clearly a fixed point when all of these parameters are 0. If they are 0, nothing changes. And I'm back to the Gaussian model, which is described by just a gradient of m squared type of theory. So this is the fixed point that corresponds to t star, u star, l star, v star, all of the things that I can think of, being equal to 0. It's a perfectly good fixed point of the transformation. It doesn't suit us because it actually has still two relevant directions. It's obvious that if I make a small change in u, then in dimensions less than 4, u is a relevant direction and t is a relevant direction. Two relevant directions does not describe the physics that I want. But there is fortunately another fixed point, the one that we call the O n fixed point because it explicitly depends on the parameter n.
And what I need to do is to set this equal to 0. And if I set that equal to 0, what do I get? I get u star just by manipulating this. One power of u drops out, so u star is proportional to epsilon. The coefficient has a factor of 1 over 4 n plus 8. Basically, the inverse of this. And then also, the inverse of all of that. So I have t star plus k lambda squared and so forth, squared, divided by kd lambda to the d. Then, what I need to do is to set the other equation to 0. You can see that this is a term that is order of epsilon squared now, whereas this is a term that is order of epsilon. So for calculating the position of the fixed point, I don't need this parameter. And what do I get? I will get that t star is minus 2 n plus 2 kd lambda to the d divided by t star plus k lambda squared and so forth, times u star. u star is epsilon over 4 n plus 8, times t star plus k lambda squared and so forth, squared, divided by kd lambda to the d. Again, I'm calculating everything correctly to order of epsilon. So since t star is order of epsilon, I can drop it over here. So my u star is, in fact, epsilon divided by 4 n plus 8, times this combination k lambda squared and so forth, squared, divided by kd lambda to the d. And doing the same thing up here, my t star is epsilon n plus 2 divided by 2 n plus 8. It has an overall minus sign. The kd parts cancel. And one of these factors cancels, so I will get k lambda squared squared. Sorry, no square here. And both of these will get corrections that are order of epsilon squared that I haven't calculated. Now, let's make sure that it was justified for me to focus on these two parameters and look at everything else as being not important before. Well, look at these equations. This equation says that if I had a term that was order of u squared evaluated at the fixed point, it would be epsilon squared. So l star would be of the order of epsilon squared. You can check that v star would be of the order of epsilon squared.
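As a numerical sanity check, here is a small sketch (not from the lecture) that solves the two fixed-point conditions by simple iteration, with the cutoff-dependent constants K, lambda, and K_d lambda^d all set to 1 for illustration, and compares against the leading-order formulas just derived:

```python
# Solve dt/dl = 0 and du/dl = 0 iteratively, with K = lambda = K_d*lambda^d = 1
# (an illustrative simplification, not the lecture's literal cutoff constants).
def on_fixed_point(n, eps, iters=100):
    t, u = 0.0, 0.0
    for _ in range(iters):
        u = eps * (t + 1.0) ** 2 / (4 * (n + 8))   # from du/dl = 0
        t = -2 * u * (n + 2) / (t + 1.0)           # from dt/dl = 0
    return t, u

n, eps = 1, 0.01
t_star, u_star = on_fixed_point(n, eps)
# Leading-order predictions from the lecture:
u_pred = eps / (4 * (n + 8))             # u* = eps / (4(n+8))
t_pred = -eps * (n + 2) / (2 * (n + 8))  # t* = -eps (n+2) / (2(n+8))
assert abs(u_star - u_pred) < eps**2
assert abs(t_star - t_pred) < eps**2
```

The agreement is to order epsilon squared, exactly as claimed: the leading-order formulas pick up corrections only at the next order.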
A lot of those things will be of the order of epsilon squared. And actually, if you look at it carefully, you'll find that things like u6 will be even worse. They would start at order of epsilon cubed and so forth. So quite systematically in this small parameter that Fisher introduced, we see that what has happened is that we have a huge set of parameters, these mu's. But we can focus on the projection onto the parameter space t and u. And in that parameter space, we certainly always have the Gaussian fixed point. But as long as I am in dimensions less than 4, the Gaussian fixed point is not only relevant in the t-direction with an eigenvalue of 2, but it also is relevant in another direction. There is an eigen-direction that is slightly shifted with respect to t equals 0. It's not just the u-axis. Along that direction, it moves away. Here you have an eigenvalue of 2. Here you have an eigenvalue of epsilon. So that's the Gaussian fixed point. But now we found another fixed point, which is occurring for some positive u star and some negative t star. This is the O n fixed point. Just by continuity, you would expect that if things are going into here, it probably makes sense that it should be going like here and this should be a negative eigenvalue. But one can explicitly check that. So basically, the procedure to check that is to do what I told you. I have to construct a linearized matrix that relates delta mu going away from this fixed point to what happens under rescaling. So basically, under rescaling I will find that if I set my mu to be mu star plus a small change delta mu, then it will be moving away. And I can look at how, let's say, delta t changes, how delta u changes, how delta l changes, the whole list of parameters that I have over here. The linearized matrix will relate them to the vector that corresponds to delta t, delta u, and so forth.
So I have to go back to these recursion relations, make small changes in all of the parameters, linearize the result, construct that matrix, and then evaluate the eigenvalues of that matrix. Again, consistency to the order that I have done things. And for example, one of the things that we saw last time is that there will be an element here that corresponds to the change in u if I make a change in delta t. There is such a contribution. If I make a change delta t with respect to the fixed point, I will get a derivative from here. But that derivative multiplies u squared. Evaluated at the fixed point means that I will get a term down here that is order of epsilon squared. And then the second element here, what happens if I make a change in u? Well, I will get a epsilon here. And then I get a subtraction from here. And this subtraction we evaluated last time and it turned out to be epsilon minus 2 epsilon. So the relevance that we had over here became an irrelevance that I wanted. So there is some matrix element in this corner. But since this is 0, as we discussed if I look at this 2 by 2 block, it doesn't affect this eigenvalue. Since I did not evaluate this eigenvalue last time, I'll do it now. So in order to calculate the yt, what I need to do is to see what happens if I change t to t plus delta t. So I have to take a derivative with respect to t. From the first one, I will get 2. From the second term, I will get minus 4 u n plus 2 kd lambda to the d-- what is in the denominator squared. So t plus k lambda squared and so forth squared. But I have to evaluate this at the fixed point. So I put a u star here. I put a u star here. Since u star is already order of epsilon, this order of epsilon term I can ignore. And so what I have here is 2 minus 4 n plus 2 epsi-- OK. Now, let's put u star. u star I have up here. It is epsilon divided by 4 n plus 8. I have k lambda squared and so forth squared. kd lambda to the d. So I substituted the u star. I had the n plus 2. 
So now I have the kd lambda to the d. And I have this whole thing squared. And you see that all of these things cancel out. And the answer is simply 2 minus n plus 2 epsilon divided by n plus 8. And somehow, I feel that I made a factor of-- no, I think that's fine. Double check. Yep. All right. So let's see what happened. We have identified two fixed points, Gaussian and in dimensions less than 4, the O n. Associated with this are a number of operators that tell me-- or eigen-directions that tell me if I go away from the fixed point, whether I would go back or I would go away. And so for the Gaussian, these just have the names of the parameters that we have to set non-zero. So their names are things like t, u, l, v, u6, and so forth. And the value of these exponents we can actually get without doing anything because what is happening here is just dimensional analysis. So if I simply replace m by something, m prime, gradients or integrations with some power of distance b, I can very easily figure out what the dimensions of these quantities are. They are 2, epsilon, minus 2, minus 2 plus epsilon, minus 2 plus 2 epsilon, and so forth. So this is simple dimensional analysis. And in some sense, these correspond to their dimensions of the theory of the variables that you have. Problem with it as a description of what I see in experiments is the presence of two relevant directions. Now, we found a new fixed point that is under control to order of epsilon. And what we find is that this exponent for what was analogous to t shifted to be 2 minus n plus 2 over n plus 8 epsilon. While the one that was epsilon shifted by minus 2 epsilon and became minus epsilon. What I see is a pattern that essentially all that can happen, since I'm doing a perturbation in epsilon, is that these quantities can at most change by order of epsilon. So this was minus 2 becomes minus 2 plus order of epsilon. This becomes minus 2 plus order of epsilon. It will not necessarily be minus 2 plus epsilon. 
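The table being assembled here can be summarized as (to the order computed in the lecture):

```latex
\begin{array}{lll}
\text{direction} & \text{Gaussian} & O(n)\ \text{fixed point} \\[2pt]
y_t & 2 & 2-\dfrac{n+2}{n+8}\,\epsilon+\mathcal{O}(\epsilon^2) \\[4pt]
y_u & \epsilon & -\epsilon+\mathcal{O}(\epsilon^2) \\[2pt]
y_l & -2 & -2+\mathcal{O}(\epsilon) \\[2pt]
y_v & -2+\epsilon & -2+\mathcal{O}(\epsilon) \\[2pt]
y_h & 1+\dfrac{d}{2} & 1+\dfrac{d}{2}+\mathcal{O}(\epsilon^2)
\end{array}
```

The key point: at the O(n) fixed point only t (and h) remain relevant for small epsilon, which is the physics we wanted.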
It could be minus 2 minus 7 epsilon plus 11 epsilon. Maybe even epsilon squared, I don't know. But the point is that clearly, even if I put all the infinity of parameters, as long as I am in 3.999 dimension, at this fixed point I only have one relevant direction. So it does describe the physics that I want, at least in this perturbative sense of the epsilon expansion. And so I have my yt. Actually, in order to get all of the exponents, I really need two. I need yt and maybe yh. But yh is very simple. If I were to add to this magnetic field term, then in Fourier representation it just goes and sits over here at q equals to 0. And when I do all of my rescalings, et cetera, the only thing that happens to it is that it just picks the factor of z. And we've shown that z is b to the 1 plus d over 2 plus order of epsilon squared. And so essentially, we also have our yh. So I can even add it to this table. There is a yh. There is a magnetic field. The corresponding yh is 1 plus d over 2 for the Gaussian. It is 1 plus d over 2 plus order of epsilon squared for the O n model. So now we have everything that we need. We can compute things in principle. We find, first of all, that if I look at the divergence of the correlation length, essentially we saw that under rescaling yt tells us that if our magnetic field is 0, how I get thrown away from the fixed point over here. There is a relevant direction out here that we've discovered whose eigenvalue here is no longer 2. It is 2 minus this formula that we've calculated. And presumably again, if I go and look at my set of parameters, what I have is that in this infinite dimensional space, as I reduce temperature, I will be going from, say, one point here, then lower temperature would be here, lower temperature would be here. So there would be a trajectory as a function of shifting temperature, which at some point that trajectory hits the basin of attraction of this O n fixed point that we found. 
And then being away from here will have a projection along this axis. And we can relate that to the divergence of, say, the correlation length or the free energy. Our nu was 1 over yt. It is the inverse of this object. So if I divide, I have 1/2 times 1 minus n plus 2 over 2 n plus 8 epsilon, raised to the minus 1 power. And to be consistent, I should really only expand this to the order of epsilon. So I have 1/2 plus 1/4 n plus 2 over n plus 8 epsilon. So what does it tell me? Well, it tells me that at the Gaussian fixed point, the correlation length exponent was 1/2. We already saw that. We see that when we go to this O n model, the correlation length exponent becomes larger than 1/2. I guess that agrees with our table. And I guess we can try to estimate what values we would get if we were to put n equals to 1, n equals to 2, et cetera. So this is n equals to 1. If I put epsilon equals to 1, what do I get for nu? I will get 1/2 plus 1/4 of 3/9. So that's 1/12. So that would give me something like 0.58. All right. Not bad for a low-order expansion coming from 4 to something that's in three dimensions. What happens if I go to n equals to 2? OK, so the correction is 4/10 divided by 4. So it's 0.1. So I would get 0.6. What happens if I put n equals to 3? I will get 5 divided by 44. And I believe that gives me something like 0.61. So it gets worse when I go to larger values of n, but it does capture a trend. Experimentally, we see that nu becomes larger as you go from a 1 to a 2 or 3-component order parameter. That trend is already captured by this low-order expansion. Once you have nu, you can, for example, calculate alpha. Alpha is 2 minus d nu. So you do 2 minus-- your d is 4 minus epsilon. Your nu is this 1/2 times 1 plus 1/2 n plus 2 over n plus 8 epsilon. And you do the algebra and I'll write the answer. It is 4 minus n epsilon divided by 2 n plus 8. OK, and let me check.
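A quick sketch (illustrative, not from the lecture) evaluating nu = 1/2 + (n+2) epsilon / (4(n+8)) at epsilon = 1 reproduces the quoted estimates:

```python
# First-order epsilon-expansion result for the correlation length exponent.
def nu(n, eps=1.0):
    return 0.5 + (n + 2) / (4 * (n + 8)) * eps

assert abs(nu(1) - 7 / 12) < 1e-12   # 1/2 + 1/12, about 0.58
assert abs(nu(2) - 0.6) < 1e-12      # 1/2 + 0.1
assert round(nu(3), 2) == 0.61       # 1/2 + 5/44, about 0.61
```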
So if I now substitute epsilon equals to 1 for these different values of n, what I get for alpha are 0.17, 0.11, 0.06. I don't know, maybe I have a factor of 2 missing, or whatever. But these numbers, I think, are correct. So you can see that in reality alpha is positive for the liquid-gas system, n equals to 1. It is more or less 0-- this is the logarithmic lambda point for superfluids. And then it becomes negative, clearly, for magnets. The formula that we have predicts all of these numbers to be positive, but it gets the right trend, that as you go to larger values of n, the value of the exponent alpha calculated at this order in the epsilon expansion becomes lower. So that trend is captured. So at this stage, I guess I would say that the problem that I posed is solved in the same sense that I would say we have solved for the energy levels of the helium atom. Certainly, you can sort of ignore the interaction between electrons and calculate hydrogenic energies. And then you can do perturbation in the strength of the interaction and get corrections to that. So essentially, you know the trends and you know everything. And we have been able to sort of find the physical structure that would give us a route to calculating what these exponents are. We see that the exponents are really a function of dimensionality and the symmetry of the order parameter. All of the trends are captured, but the numerical values, not surprisingly, at this low order have not been captured very well. So presumably, you would need to do the same thing that you would do for the helium atom. You could do higher and higher order calculations. You could do simulations. You could do all kinds of other things. But the conceptual foundation is basically what we have laid out here. OK, there are many things that remain to be answered. One of them is-- well, how do you know that there isn't a fixed point somewhere else? You calculated things perturbatively.
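Similarly for alpha; this sketch (again illustrative) also checks that 2 minus d times nu reproduces the closed form (4 minus n) epsilon / (2(n+8)) to first order in epsilon. The formula itself gives roughly 0.17, 0.10, and 0.05, close to the numbers quoted in the lecture:

```python
# alpha from the epsilon expansion, and its consistency with alpha = 2 - d*nu.
def alpha(n, eps=1.0):
    return (4 - n) * eps / (2 * (n + 8))

def nu(n, eps):
    return 0.5 * (1 + (n + 2) / (2 * (n + 8)) * eps)

eps = 0.01
for n in (1, 2, 3):
    # 2 - d*nu, with d = 4 - eps, agrees with the closed form up to O(eps^2)
    assert abs((2 - (4 - eps) * nu(n, eps)) - alpha(n, eps)) < eps**2

assert round(alpha(1), 2) == 0.17   # at eps = 1: 3/18, about 0.17
```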
The answer is that once you do higher-order calculations, et cetera, you find that your results converge, more or less, better and better to the results of simulations or experiments, et cetera. So there is no evidence from whatever we know that there is need for something that I would call a strong coupling, non-perturbative fixed point. It's not a proof. We can't prove that there isn't such a thing. But there is apparently no need for such a thing to discuss what is observed experimentally. Yes. AUDIENCE: You told us last time that in order for the mu fixed point to make sense, we must have epsilon very small. PROFESSOR: Yes. AUDIENCE: But now we're putting epsilon back to 1. PROFESSOR: OK. So when people inevitably ask me this question, I give them the following two functions of epsilon. One of them is e to the epsilon over 100 and the other is e to the 100 epsilon. Do I know if I put epsilon of 1 a priori whether or not putting epsilon equals to 1 is a good thing for the expansion or not? I don't. And so I don't know whether it is bad and I don't know whether it is good, unless I calculate many more terms in the series and discuss what the convergence of the series is. AUDIENCE: Epsilon is 4 minus d is supposed to be an integer. We just-- PROFESSOR: Oh, you're worried about its integerness as opposed to treating it as a continuum? OK. AUDIENCE: It's OK when you first assume it [INAUDIBLE]. At the end of the day, it must be an integer. PROFESSOR: OK. So the example somebody was asking me also last time that I have in mind is you've learned n factorial to be the product of 1, 2, 3, 4. And you know it for whatever integer that you would like, you do the multiplication. But we've also established that n factorial is an integral 0 to infinity dx x to the n e to the minus x. And so the question is now, can I talk about 4.111 factorial or not? Can I expand 4.11 factorial close to what I have 4 assuming the form of this so-called gamma function? 
So the gamma function is a function of n that at integer values falls on whatever we know, but it has a perfect analytic continuation, and I can in principle evaluate 4 factorial by expanding around 3 with the derivatives of the gamma function evaluated at 3. AUDIENCE: But these kinds of integers don't have anything to do with dimensionality. PROFESSOR: OK, where did our dimensionality come from? Our dimensionality appears in our expressions because we have to do integrals of this form. And what do we do? We replace this with a surface area-- which actually involves the factorial, by the way-- and then we have k to the d minus 1 dk. So these integrals are functions of dimension that have exactly the same properties as the gamma function and the n factorial. They're perfectly well expandable. And they do have singularities. Actually, it turns out the gamma functions also have singularities, at minus 1 and things like that. Our functions have singularities at two dimensions and so forth. But the issue of convergence is very important. So let's say that there are some powerful field theories. And in order to do calculations at higher orders, you need to go and do field theory. And you calculate the exponent gamma. And I will write the gamma exponent for the case of n equals to 1. And the series for that is 1 plus-- if we sort of go and do all of our calculations to lowest order, gamma is 2 nu. So it will simply be twice what we have over here. And the first correction is, indeed, 0.167 times epsilon. The next one is 0.077 epsilon squared. The next one is minus-- problematic-- 0.049 epsilon cubed. Next one, 0.180 epsilon to the fourth. Next one-- and I think this is as far as people have calculated things-- epsilon to the fifth. Then, let's put epsilon equals to 1 and see what we get at the various orders. So clearly, I start with 1. At the next order I will get 1.167. At the next order, I will get 1.244. It's getting there, huh? And at the next order I will get 1.195. Then I will get 1.375.
And then I will get 0.96. [LAUGHTER] So this is the signature of what is called an asymptotic series, something that as you evaluate more terms gets closer to the expected result, but then starts to move away and oscillate. Yet, there are tricks. And if you know your tricks, you can put epsilon equals to 1 in that series and be clever enough to get 1.2385 plus or minus 0.0025. The trick is called Borel summation. So one can show that if you go to high orders in this series, asymptotically the p-th term in the series will scale as p factorial, times something like a to the power of p, with some coefficient in front. So if I write the general term in this series as f sub p epsilon to the p, my statement is that the magnitude of f sub p asymptotically, for p much larger than 1, going to infinity, has this form. So clearly, because of this p factorial, this is growing too rapidly. But what you can do is you can rewrite this series, which is sum over p f of p epsilon to the p, using this integral that I had over here for p factorial. So I multiply and divide by p factorial. So it becomes a sum over p f of p epsilon to the p, integral 0 to infinity dx x to the p e to the minus x, divided by p factorial. And the fp over p factorial gets rid of this factor. So then you can recast this as the integral from 0 to infinity dx e to the minus x, and what you do is you sum the series f of p divided by p factorial, epsilon x raised to the power of p. And this is called the Borel function corresponding to this series. And as long as the terms in your series only diverge this badly, people can make sense of this Borel function. And then you perform the integration, and then you come up with this number. So that's one thing to note. The other thing to note is I said that what I want for my perturbation theory to make sense is for this u star to be small.
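The partial sums can be reproduced in a few lines. Note the epsilon-to-the-fifth coefficient is not stated in the lecture, so here it is inferred (about minus 0.415) from the quoted jump from 1.375 to 0.96:

```python
# Partial sums of the epsilon expansion for gamma (n = 1) at epsilon = 1.
# The last coefficient is inferred from the quoted partial sums, not stated.
coeffs = [1.0, 0.167, 0.077, -0.049, 0.180, -0.415]
partial = []
s = 0.0
for c in coeffs:
    s += c  # epsilon = 1, so each term is just the coefficient
    partial.append(round(s, 3))
# The sums approach ~1.24, then drift away and oscillate: the signature of
# an asymptotic series that needs Borel resummation.
assert partial == [1.0, 1.167, 1.244, 1.195, 1.375, 0.96]
```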
And I said that the knob that we have is for epsilon to be small. But there is, if you look at that expression, another knob. I can make n go to infinity. So if n becomes very large-- so that also can make the thing small. So there is an alternative expansion. Rather than going with epsilon going to 0, you go to what is called a spherical model. That is, an infinite number of components, and then do an expansion in 1 over n. And so then basically what you are interested in is things that are happening as a function of d and n. And you have-- above 4, you know that you are in the Gaussian world. At n goes to infinity, you have these O n types of models. And you find that these models actually only make sense in dimensions that are larger than 2. So you can then perturbatively come either from here or you can come from here, and you try to get to the exponents you are interested in over here or over here. So basically, that's the story. And for this work, I think, as I said, Wilson did this perturbative RG. Michael Fisher was the person who focused it into an epsilon expansion. And in 1982, Wilson got the Nobel Prize for the work. Potentially, it could have been also awarded to Fisher and Kadanoff for their contributions to this whole story. So that's the end of this part of the course. And now that we have established this background, we will try to get the exponents and the statistical behavior by a number of other perspectives. So basically, this was a perspective and a route that gave an answer. And hopefully, we'll be able to complement it with other ways of looking at the story. |
MIT_8334_Statistical_Mechanics_II_Spring_2014 | 22_Continuous_Spins_at_Low_Temperatures_Part_3.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, let's start. So, we've moved onto the two-dimensional xy model. This is a system where, let's say on each site of a square lattice you put a unit vector that has two components and hence, can be described by an angle theta. So basically, at each site, you have an angle theta for that site. And there is a tendency for neighboring spins to be aligned, and the partition function can be written as a sum over all configurations, which is equivalent to integrating over all of these angles, with a weight that wants to make neighboring spins parallel to each other. So we have a sum over nearest neighbors. And the dot product of the two spins amounts to looking at the cosine of theta i minus theta j. And so, we have this factor over here. Now, if we go to the limit where k is large, then the cosine will tend to keep the angles close to each other. And we are tempted to expand this around the configurations where everybody's parallel. That gives an overall constant-- e to the NK, with a factor of 2 for the number of bonds. And then, expanding the cosine to the next order, you may want to replace this-- let's call this k0-- with a factor of k, which is proportional to k0 up to factors of the lattice spacing, times an integral of gradient of theta squared. So basically, the difference between the angles, in the continuum version, I want to replace with a term that penalizes the gradient. OK. Now the reason I put these quotes around the gradient is something that we noticed last time, which is that in principle, theta is defined up to a multiple of 2 pi.
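In symbols, the setup just described is (with 2N bonds on a square lattice of N sites, and K proportional to K0 up to lattice-spacing factors):

```latex
Z=\prod_i\int_0^{2\pi}\!d\theta_i\,
\exp\Big[K_0\sum_{\langle ij\rangle}\cos(\theta_i-\theta_j)\Big]
\;\approx\; e^{2NK_0}\int\mathcal{D}\theta(\mathbf{x})\,
\exp\Big[-\frac{K}{2}\int d^2x\,\big(\nabla\theta\big)^2\Big],
\qquad K\propto K_0.
```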
So that if I were to take a circuit along the lattice that comes back to itself, and all along this circuit integrate this gradient of theta-- so basically, gradient of theta would be a vector, and I integrate it along a circuit-- then by the time I have come back and closed the circuit to where I started, the answer may not come back to 0. It may be any integer multiple of 2 pi. All right. So how do we account for this? The way we account for this is that we note that this gradient of theta I can decompose into two parts. One, where I just write it as a gradient of some regular function. And the characteristic of a gradient is that once you go over a closed loop and you integrate, you essentially are evaluating this field phi at the beginning and the end. And for any regular single-valued phi, this would come back to zero. And to take care of the fact that the result does not have to come to zero if I integrate this gradient of theta, I introduce another field, u, that takes care of these topological defects. OK? So that, really, I have to include both configurations in order to correctly capture the original model that had these angles. OK, so what can this u be? We already looked at what u is for the case of one topological defect. And the idea here was that maybe I had a configuration where around a particular center, let's say all of the spins were flowing out, or some other such configuration, such that when I go around a circle of large radius r from this center and integrate this field u, just like I did over there, the answer is going to be, let's say, 2 pi n. So there's this u, and I integrate it along this circle. And the answer is going to be 2 pi n. Well, clearly, the magnitude of u times 2 pi r, the circumference of the circle, is going to be 2 pi times some integer-- could be plus or minus 1, plus or minus 2. And so, the magnitude of u is n over r. The direction of u is orthogonal to the direction of r. And how can I show that?
Well, one way I can show that is I can say that it is z hat crossed with r hat, where z hat is the vector that comes out of the plane, and r hat is the unit vector in this direction. u is clearly orthogonal to r. The direction of the gradient of this angle is orthogonal to r. It is in the plane, so it's orthogonal to this. And this I can also write as z hat crossed with the gradient of log of r, with some cut-off. Because the gradient of log of r will give me, essentially, 1 over r in the direction of r hat. And this is like the potential that I would have for a charge in two dimensions, except that I have rotated it by 90 degrees. And this I can also write as minus the curl of z hat log r over a, with a factor of n. And essentially, what you can see is that the gradient of theta for a field that has this topological defect has a part that can be written as the gradient of some potential, and a part that can be written as a curl to keep track of these vortices, if you like. If you were to think of this gradient of theta like the flow field that you would have in two dimensions, it has a potential part, and it has a part that is due to circulation and vortices, which is what we have over here. OK. So this is, however, only for one topological defect. What happens if I have many such defects? What I can do is, rather than having just one of them, I could have another topological defect here, another one here, another one there. There should be a combination of these things. And what I can do in order to get the corresponding u is to superimpose solutions that correspond to single ones. As you can see, this is very much like the potential that I would have for a charge at the origin, and then taking the derivative to create the field. And you know that as long as things are linear, you can superimpose solutions for different charges. You could just add up the electric fields.
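A small numerical sketch (illustrative, not from the lecture) of the statement that the line integral of u = (n/r) theta-hat around any circle enclosing the defect returns 2 pi n:

```python
import math

# Circulation of the single-vortex field u = n * (z-hat x r-hat) / r around a
# circle of given radius: a discretized line integral that should give 2*pi*n.
def circulation(n, radius, steps=10000):
    total = 0.0
    dt = 2 * math.pi / steps
    for k in range(steps):
        t = dt * k
        x, y = radius * math.cos(t), radius * math.sin(t)
        r2 = x * x + y * y
        ux, uy = -n * y / r2, n * x / r2          # u = (n/r) theta-hat
        # tangent line element dl = (-sin t, cos t) * radius * dt
        total += (ux * -math.sin(t) + uy * math.cos(t)) * radius * dt
    return total

assert abs(circulation(1, 2.0) - 2 * math.pi) < 1e-6
assert abs(circulation(-2, 0.5) + 4 * math.pi) < 1e-6   # independent of radius
```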
So what I'm claiming is that I can write u as minus curl of z hat, times some potential psi of r, where psi of r is essentially the generalization of this log. I can write it as a sum over all topological defects. And I will have the n of that topological defect times log of r minus ri divided by a, where ri are the locations of these. So there could be a vortex here at r1 with charge n1, another topological defect here at r2 with charge n2, and so forth. And I can construct a potential that basically looks at the log of r minus ri for each individual one, and then do this. OK? I will sometimes write this in a slightly different fashion. Recall that we had the Coulomb potential, which was related to log by just a factor of 1 over 2 pi. So the correct version of defining the Coulomb potential is this. So this I can write as the Coulomb potential, provided that I multiply by 2 pi ni. And I sometimes will call that qi. So essentially, qi, which is 2 pi ni, is the charge of the topological defect. It can be plus or minus 2 pi. And then, the potential is constructed by a superposition of those charges multiplied with the appropriate Coulomb potential. OK? All right. So I can construct a cost for creating a configuration now. Previously, I had this integral gradient of theta squared in the continuum. And my gradient of theta squared has now a part that is the gradient of a regular, well-behaved potential, and a part that is this field u, which is minus-- oops. Then I don't need the ni's because I put the ni as part of the psi. Curl of z hat psi of r. So phi is a regular function. Psi with the curl will give me the contribution of the topological defects; it involves both the charges and the positions of these topological defects. OK? And this whole thing has to be squared, of course. This is my gradient squared. And if I expand this, I will have three terms. I have a gradient of this phi squared.
I have a term, which is minus 2 gradient of phi dot producted with curl of z hat psi. And I have a term that is curl of z hat psi squared. OK? Again, if you think of this as a vector, this is a vector whose components are dx phi and dy phi, whereas this is a vector whose components are, let's say, dy psi minus dx psi. Because of the curl operation-- the x and y components-- one of them gets a minus sign. Maybe I got the minus wrong, but it's essentially that structure. Now you can see that if I were to do the integration here, there is a dx phi, dy psi. I can do that integration by parts and have, let's say, phi dx dy psi. And then, I can do the same integration by parts here. And I will have phi minus dx dy psi. So if I do integration by parts, this will disappear. Another way of seeing that is that the gradient will act on the curl. And the gradient of the curl of a vector is 0, or otherwise, the curl will act on the gradient with [INAUDIBLE]. So basically, this term does not contribute. And the contribution of this part and the part from topological defects are decoupled from each other. So essentially, the Gaussian type of stuff that we calculated before is here. On top of that, there is this part that is due to these topological defects. Again, this vector is this squared. You can see that if I square it, I will get dy psi squared plus dx psi squared. So to all intents and purposes, this thing is the same thing as the gradient of psi squared. Essentially, gradient of psi and curl of psi are the same vector, just rotated by 90 degrees. Integrating the square of one over the whole space is the same as integrating the square of the other. OK? So now, let's calculate this contribution and what it is. Integral d2 x gradient of psi squared with K over 2 out front-- actually, you already know that. Because psi, we see, is the potential due to a bunch of charges.
So this is essentially the electric field due to this combination of charges integrated over the entire space. It's the electrostatic energy. But let's go through that step by step. Let's do the integration by parts. So this becomes minus k over 2, integral d2 x psi-- the gradient acting on this will give me Laplacian of psi. Of course, whenever I do integration by parts, I have to worry about boundary terms. And essentially, if you think of what you will be seeing at the boundary, far, far away from where all of these charges are, let's say. Essentially, you will see the electric field due to the combination of all of those charges. So for a single one, I will have a large electric field that will go as 1 over r. And we saw that integrating that will give me the log. So that was not particularly nice. So similarly, what these boundary terms would amount to would give you some kind of a logarithmic energy that depends on the net charge that you have enclosed. And you can get rid of it by setting the net charge to be zero. So essentially, any configuration in which the sum total of our topological charges is non-zero will get a huge energy cost as we go to large distances from the self-energy, if you like, of creating this huge monopole. So we are going to use this condition and focus only on configurations that are topologically charge neutral. OK. Now our psi is what I have over here. It is sum over i qi times this Coulomb interaction-- r minus ri. And therefore, Laplacian of psi-- essentially taking the Laplacian of this expression-- is the sum over j qj. Laplacian of this is the delta function. So basically, that was the condition for the Coulomb potential. Or alternatively, you take two derivatives of the log, and you will generate the delta function. OK? So, what you see is that you generate the following. You will get a minus k over 2 sum over pairs i and j, qi, qj. And then I have the integral over x or r-- they're basically the same thing.
Maybe I should have written this as x. And the delta function ensures that x is set to be ri. So I will get the Coulomb interaction between ri and rj. So basically, what you have is that these topological defects that are characterized by these integers n, or by the charges 2 pi n, have exactly this logarithmic Coulomb interaction in two dimensions. And as I said, this thing is none other than the electrostatic energy. The electrostatic energy you can write either as an integral of the electric field squared, or you can write as the interaction among the charges that give rise to that electric field. OK? So, what I can do is I can write this as follows. First of all, I can maybe re-cast it in terms of the n's. So I will have 2 pi ni, 2 pi nj. So I will get minus 4pi squared k. There's a factor of one-half. But this is a sum over i and j-- so every pair is now counted twice. So I get rid of that factor of one-half by essentially counting each pair only once. So I have the Coulomb interaction between ri and rj, which is this 1 over 2 pi log of ri minus rj with some cut-off. And then, there's the term that corresponds to i equals j. So I will have a minus, let's say, 4pi squared k sum over i-- and I forgot here to put ni and nj-- I will have ni squared, the Coulomb interaction at zero, ri equals [INAUDIBLE]. Now clearly, this expression does not make sense. What it is trying to tell me is that there is a cost to creating one of these topological charges. And all of this theory-- again, in order to make sense, we should remember to put some kind of a short distance cut-off a. All right? And basically, replacing this original discrete lattice with a continuum will only work as long as I keep in mind that I cannot regard things at the level of lattice spacing, and replace it by that formula, as we saw, for example, here.
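The pairwise energy just derived can be tabulated directly. Below is a sketch (my own variable names, units with beta absorbed and a = 1, core energies omitted) of beta E = minus 4 pi^2 K sum over i less than j of ni nj C(ri minus rj), with the two-dimensional Coulomb potential C(r) = log(|r|/a) / (2 pi); a plus/minus pair then costs 2 pi K log(d/a) to pull apart to distance d.

```python
import numpy as np
from itertools import combinations

def coulomb_gas_energy(positions, charges, K, a=1.0):
    """beta*E = -4*pi^2*K * sum_{i<j} n_i*n_j * C(r_ij), with the
    two-dimensional Coulomb potential C(r) = log(|r|/a) / (2*pi).
    Core (i == j) terms are left out, as in the text."""
    E = 0.0
    for i, j in combinations(range(len(charges)), 2):
        r_ij = np.linalg.norm(np.subtract(positions[i], positions[j]))
        E += (-4.0 * np.pi**2 * K * charges[i] * charges[j]
              * np.log(r_ij / a) / (2.0 * np.pi))
    return E

K = 1.0
for d in (2.0, 4.0, 8.0):
    E = coulomb_gas_energy([(0.0, 0.0), (d, 0.0)], [1, -1], K)
    print(d, E, 2.0 * np.pi * K * np.log(d))  # last two columns agree
```

The logarithmic growth of the pair energy with separation is what makes the bound-pair versus free-plasma question nontrivial later on.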
If I want to draw a topological defect, I would need right at the center to do something like this-- where replacing the cosines with the gradient squared kind of doesn't make sense. So basically, what this theory is telling me is that once you get to a very small distance, you have to keep track of the existence of some underlying lattice and the corresponding things. And what this is really describing for you is the core energy of creating a defect that has charge ni. What do I mean by that is that over here, I can calculate what the partition function is for one defect. This we already did last time around. And for that, I can integrate out this energy that I have for the distortions. It's an integral of n over r squared. And this integration gave me this factor of e to the minus pi k log of r over a-- actually, that was 2 pi k. Actually, let's do this correctly once. I should have done it earlier, and I forgot. So you have one defect. And we saw that for one defect, the field at the distance r has n over r in magnitude. And then, the net energy cost for one of these defects-- if I say that I believe this formula starting from a distance a-- is k over 2 integral from a, let's say all the way up to the size of my system, of 2 pi r dr from a shell at radius r, magnitude of this u squared. So I have n squared over r squared. But then, I have to worry about all of the actual things that I have up to the distance a. So on top of this, there is a core energy for creating this object that certainly explicitly depends on where I set this parameter a. OK? This part is easy. It simply gives me pi k n squared. And then I have the integral of 1 over r, which gives me log of L over a. So if I want to imagine what the partition function of this is-- one defect in a system of size L-- I would say that z of one defect is the Boltzmann weight corresponding to creating this entity. So I have e to the minus pi k n squared log of L over a.
And then I have the core energy that corresponds to this. And then, as we discussed, I can place this anywhere in the system if I'm calculating the partition function. So there's an integration over the position of this that is implicit. And so, that's going to give me the square of the size of my system, except that I am unsure as to where I have placed things up to this cut-off a. So really, the number of distinct positions that I have scales like L over a squared. So the whole thing, we can see, scales like L over a to the power 2 minus pi k n squared. And then, there is this factor of e to the minus this core energy evaluated at the distance a that I will call y. Because again, in some sense, there's some arbitrariness in where I choose a. So this y would be a function of a, and would depend on that choice. But the most important thing is that if I have a huge system, whether or not this partition function, as a function of the size of the system, goes to infinity or goes to 0 is controlled by this exponent 2 minus pi k. Let's say we focus on the simplest of topological defects corresponding to n equal to plus or minus 1. You expect that there is some potentially critical value of k, which is 2 over pi, that distinguishes the two types of behavior. OK? But this picture is nice, but certainly incomplete. Because who said that there's any legitimacy in calculating the partition function that corresponds to just a single topological defect? If I integrate over all the configurations of my angle field, I should really be doing something that is analogous to this and calculating a partition function that corresponds to many defects. And actually, what I calculated over here was in some sense the configuration of spins, given that there is a topological defect, that has the lowest energy. Once I start with this configuration-- let's say, with everybody radiating out-- I can start to distort them a little bit, which amounts to adding this gradient of phi to that.
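Before adding the many-defect fluctuations, the single-defect estimate can be summarized numerically. A sketch (illustrative numbers, not from the lecture) of Z_1 proportional to y times (L/a) to the power 2 minus pi K n squared, showing the two behaviors on either side of K = 2/pi:

```python
import numpy as np

def z_one_defect(L_over_a, K, n=1, y=0.1):
    """Single-defect partition function, up to constants:
    (number of placements) * (Boltzmann weight)
    = y * (L/a)**(2 - pi*K*n**2)."""
    return y * L_over_a ** (2.0 - np.pi * K * n**2)

K_c = 2.0 / np.pi  # the critical coupling quoted in the lecture
for K in (0.5, 0.8):
    vals = [z_one_defect(L, K) for L in (1e2, 1e4, 1e6)]
    trend = "grows" if vals[-1] > vals[0] else "shrinks"
    side = "<" if K < K_c else ">"
    print(f"K = {K} ({side} 2/pi): Z_1 {trend} with system size")
</n```

For K below 2/pi the entropy of placement wins and defects proliferate; above it the logarithmic energy cost wins and isolated defects are suppressed.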
So really, the partition function that I want to calculate and wrote down at the beginning-- if I want to calculate correctly, I have to include both these fluctuations and these fluctuations corresponding to an arbitrary set of these topological defects. And what we see is that actually, the partition functions and the energy costs of the two components really separate out. And what we are trying to calculate is the contribution that is due to the topological defects. And what we see is that once I tell you where the topological defects are located, the partition function for them has an energy component that is this Coulomb interaction among the defects. But there is a part that really is a remnant of this core energy that we were calculating before. So when I was sort of following my nose here, I had forgotten a little bit about the short distance cut-off. And then, when I encountered this C of zero, it told me that I have to think about the limit when two things come close to each other. And I know that that limit is constrained by my original lattice, and more importantly, by the place where I am willing to do self-averaging and replace this sum with a gradient. OK, so basically, this is the explanation of this term. So the only thing that we have established so far is that this partition function that I wrote down at the beginning gets decomposed into a part that we have calculated before, which was the Gaussian term, and is, really, the contribution due to spin waves. So this is when we just consider these [INAUDIBLE] modes-- we said that essentially you can have an energy cost that is the gradient squared. So this is the part that corresponds to integral d phi e to the minus k over 2 integral d2 x gradient of phi squared, where phi is a well-behaved, ordinary function. And what we find is that the actual partition function also has the contribution from the topological defects. And that I will indicate by ZQ. And Q stands for Coulomb gas.
Because this partition function Z sub Q is like I'm trying to calculate this system of degrees of freedom that are characterized by charges n that can be anywhere in this two-dimensional space. And the interaction between them is governed by the Coulomb interaction in two dimensions. So to calculate this, I have to sum over all configurations of charges. The number of these charges could be zero, could be two, could be four, could be six, could be any number. But I say even numbers because I want to maintain the constraint of neutrality. Sum over ni should be zero. So I want to do that constrained sum. So I only want to look at neutral configurations. Once I have specified-- let's say that I have eight charges-- four plus and four minus-- well, there is a term that is going to come from here, and I kind of said that the exponential of this term I'm going to call y. So I have essentially y raised to the power of the number of charges. Let's call this sum over i ni squared. And I'm actually just going to constrain ni to be minus plus 1. I'm going to only look at these primary charges. So the sum over i of ni squared is just the total number of charges, irrespective of whether they are plus or minus. It basically is replacing this. And then, I have to integrate over the positions of these charges. Let's call this total number n. So I have to integrate, i from 1 to 2n, d2 xi, the position of where this charge is, and then the interaction, which is exponential of minus 4 pi squared k sum over i less than j of ni nj, the Coulomb interaction between locations, let's say, xi and xj. Actually, I want to also emphasize that throughout, I have this cut-off. So when I was integrating over one, I said that the number of positions that I had was not L squared, but L over a squared to make it dimensionless. I will similarly make these interactions dimensionless as I divide by a squared. And so basically, this is the more interesting thing that we want to calculate.
Also, again, remember I wrote this a squared down here, also to emphasize that within this expression, the minimal separation that I'm going to allow between any pair of charges is of the order of a. I have integrated out or moved into some continuum description any configuration in which the topological charges are at less than distance a. OK? Yes? AUDIENCE: Essentially, when we were during [INAUDIBLE] it was canonical potential, [INAUDIBLE], canonical ensemble. And this is more like grand canonical ensemble? PROFESSOR: Yes. So, as far as the original two-dimensional xy model is concerned, I'm calculating a canonical partition function for these spin or angle degrees of freedom. And I find that that integration over spin angle degrees of freedom can be decomposed into a Gaussian part and a part that, as you correctly point out, corresponds to a grand canonical system of charges. So the number of charges that are going to appear in the system I have not specified; rather, it is determined implicitly by how strong this parameter is. AUDIENCE: [INAUDIBLE] of canonical potential? PROFESSOR: y plays the role of e to the beta mu. The quantity that in 8.333 we were writing as z-- e to the beta mu, small z. OK? So, we thought we were solving the xy model. We ended up, indeed, with this grand canonical system, which is currently parametrized by two things. One is this k, which is the strength of the potential. The other is this y. Of course, since this system originally came from an xy model that had only one parameter, I expect this y to also be related to k. But just as an expression, we can certainly regard it as a system that is parametrized by two things-- the k and the y. For the case of the xy model, there will be some additional constraint between the two. But more generally, we can look at this system with its two parameters. And essentially, we will try to make an expansion in y.
You'll say that, OK, presumably, I know what is going to happen when y is very, very small. Because then, in the system I will create only a few charges. If I create many charges, I'm going to be penalized by more and more factors of y. So maybe to leading order, the system would be free of charge. And then, there would be a few pairs that would appear here and there. In fact, there should be a small density of them, no matter how small I make y. There will be a very small density of these things that will appear. And presumably, these things will always appear close to each other. So I will have lots and lots of these pairs-- well, not lots and lots of these pairs-- a density of them that is controlled by how big y is. And as I make y larger-- so this is y becoming larger-- then presumably, I will generate more and more of these pairs. And once I have more and more of these pairs, they could, in principle, get into each other's way. And when they get into each other's way, then it's not clear who is paired with whom. And at some point, I should trade my picture of having a gas of pairs of these objects for a plasma of charges, plus and minus, that are moving all over the place. So as I tune this parameter y, I expect my system to go from a low density phase of atoms of plus-minus bound to each other to a high density phase where I have a plasma of plus and minuses moving all over the place. Yes? AUDIENCE: So, y is related to the core energy. PROFESSOR: Yes. AUDIENCE: And core energy is defined through [INAUDIBLE] direction at zero separation-- PROFESSOR: Well, no. Because the Coulomb description is only valid at large separations. When I get to short distances, who knows what's going on? So there is some underlying microscopic picture that determines what the core energy is. Very roughly, yes, you would expect it to have a form that is of e to the minus k with some coefficient that comes from adding all of those interactions here. Yes?
AUDIENCE: Just based on the sign convention, you're saying if you increase or decrease y, that it will go from a low density-- PROFESSOR: OK, so y is the exponential of something. y equal to zero means I will not create any of these things. y approaching 1-- I will create a lot of them. There's no cost at y equal to one. There's no core energy. I can create them as I want. AUDIENCE: So this would be like y equals e to the minus epsilon c. Is that right? PROFESSOR: Yeah. Didn't I have that? You see, in the exponential it is with the minus. OK. But in any case, that is the expectation. Right? So I expect that when I calculate, I create one of these defects. There is an energy cost which is mostly from outside. And then, there's an additional piece on the inside. So the exponential of that additional piece would be a number that is less than 1. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. I mean, the original model has some particular form. And actually, the interactions of the original model, I can make more complicated. I can add the full spin interaction, for example. It doesn't affect the overall form much, just modifies what the effective k is, and what the core energy is, independently. OK? All right. But the key point is that this system potentially has a phase transition as you change the parameter y. And another way of looking at this transition is that what is happening here, in different languages, you can either call it, say, an insulator or a dielectric. And what is happening here, in different languages, you can either call, say, a metal or, as I said, maybe a plasma. The point is that here you have free charges. Here you have bound pairs of charges. And they respond differently to, let's say, an external electromagnetic field. So once we have this picture, let's kind of expand our view. Forget about the xy model. Think of a system of charges. And notice that in this low-density phase, it behaves like a dielectric in the sense that there are no free charges.
And here, there will be lots of mobile charges. And it behaves like a metal. What do I mean by that? Well, here, if I, let's say, bring in an external electric field, or maybe if I put a huge charge, what is going to happen is that opposite charges will accumulate. Or there will be, essentially, opposite charges screening the field. So that once you go inside, the fact that you have an external electric field or a charge is completely screened. You won't see it. Whereas here, what is going to happen is that if you put in an electric field, it will penetrate into the system, although it will be weakened a little bit by the re-orientation of these charges. Now, if you put a plus charge, the effect of that plus charge would be felt throughout, although weakened a little bit. Because again, some of these dipoles will re-orient in response. OK? So, this low-density phase we can actually try to parametrize in terms of a weakening of the interactions through a dielectric constant epsilon. And so, what I'm going to try to calculate for you is to imagine that I'm in the limit of low density or small y and calculate what the weakening is, what the dielectric function is, perturbatively in y. Yes? AUDIENCE: If you were talking about the real electric charges and the way to act on that [INAUDIBLE] real electric field or charge. But if we are talking about topological charges, what would be the kind of conjugate force to that? PROFESSOR: OK. It's not going to be easy. I would have to do something about, say, re-orienting all of the spins on the boundaries, et cetera. So let's forget about that. The point is that mathematically, the problem is reduced to this system. And I can much more easily do the mathematics if I change my perspective and think about this picture. OK? And that's the thing you have to do in theoretical physics. You basically take advantage of mappings of one model to another model in order to refine your intuition using some other picture. So that's what we are going to do.
So completely different picture from the original spin models-- imagine that you have indeed a box of this material. And this box of material has, because of your y's, some combination of these plus and minus charges in it. And then, what I do is that I bring in externally a uniform electric field in this direction. And I expect that once inside the material, the electric field will be reduced to a smaller value that I will call E prime, because of the dielectric function. Now, if you ever calculated dielectric functions, that's exactly what I'm going to do now. It's a simple process. What you do, for example, is you do the analog of Gauss' theorem. Let's imagine that we draw a circuit such as this that is partly on the inside, and partly on the outside. So I can calculate what the flux of the electric field is through this circuit, the analog of the Gaussian pillbox. And so, what I have is that what is going on is E. If I call this distance to be L, the flux integrated through the entire thing is E minus E prime times L. So this is the integral of the divergence of the electric field. And by Gauss' theorem, this has to be the charge enclosed inside. OK. Now, why should there be any charge enclosed inside when you have a bunch of plus and minuses? I mean, there will be some pluses and minuses out here, as I have indicated. There will be some pluses and minuses that are inside. But the net of these would be zero. So the only place that you get a net charge is those dipoles that happen to be sitting right at the boundary. And then, I have to count how many of them are inside. And some of them will have the plus inside. And some of them will have the minus inside. And then, I have to calculate the net. The thing is that my dipoles do not have a fixed size. The size of these plus/minus molecules, r, can be variable itself. OK? So there will be some that are tightly bound to each other. There may be some that are further apart, et cetera.
So let's look at pairs that are at a distance r and ask how many of them hit this boundary, so that one of them would be inside, one of them will be outside. OK? So, that number has to be proportional to essentially this area. What is that area? On one side, it is L. On the other side, it is r. But if the dipole is oriented at an angle theta, it is, in fact, r cosine theta. OK? So that's the number. Now, what I will have here would be the charge 2 pi. So this is qi. Actually, it could be plus or minus. The reason that there's going to be more plus as opposed to minus is because the dipole gets oriented by the electric field. So I will have a term here that is e to the E prime times qi r-- so that's 2 pi r-- times cosine of theta. So we can see that, depending on cosine of theta being positive or negative, this number will be positive or negative. And that's going to be modified by this number also. And of course, the strength of this whole thing is set by this parameter k. And also, how likely it is for me to have created a dipole of size r is controlled by precisely this factor. A dipole is something that has two cores. So it is something that will appear at order of y squared. And there is the energy, according to this formula, of separating two things. And so you can see that essentially, n of r-- maybe I will write it separately over here-- is y squared times e to the minus 4 pi squared k, and from here, I have log of r divided by a, and then there's a factor of 1 over 2 pi because the Coulomb potential is this. So, this is going to be y squared a over r to the power of 2 pi k. The further you try to separate these things, the more cost you have to pay. OK. So if you were trying to calculate the contribution of, say, polarizable atoms or dipoles to the dielectric function of a solid, you would be doing exactly the same calculation.
The only difference is that the size of your dipole would be set by the size of your molecule and ultimately related to its polarizability. And rather than having this Coulomb interaction, you would have some dissociation energy or something else, or the density itself would come over here. So, the only final step is that I have to regard my system as having a composition of these things of different sizes. So I have to do an integral over r, as well as orientation. So I have to do an integral over d theta. Of course, the r integration will go from a through, essentially, the size of the system or infinity. I forgot one other thing, which is that when I'm calculating how many places I can put this, again, I have been calculating things per unit area of a squared. So I would have to divide all of these places where r and L appear by corresponding factors of a. OK? So, the last step of the calculation is you expand this quantity. It is 1 plus, for small values of the electric field, 2 pi r E prime cosine of theta times k, plus higher order terms. And then, you can do the various integrations. First of all, the integration against 1 will disappear because you are integrating over all values of cosine of theta. The integral of cosine of theta gives you zero. Essentially, it says that if there was no electric field, there was no reason for there to be an additional net charge on one side or the other. So the first term that will be non-zero is the average of cosine theta squared, which will give you a factor of one-half. And so, what you will get is that E minus E prime times L is-- well, there's going to be a factor of L. The integral of d theta cosine of theta squared-- the integral of cosine of theta squared is going to give you 2 pi, which is the integration, times one-half. So this is the integral d theta cosine squared theta will give you this. So we did this. We have two factors of y. y is our expansion parameter. We are at the low density limit.
We've calculated things assuming that, essentially, I have to look at one of these dipoles. In principle, I can imagine that there will be multiple dipoles. And you can see that ultimately, therefore, potentially, I have order of y to the fourth that I haven't calculated. OK, so we got rid of the y squared. We have a factor of E prime from the expansion here. This factor is bothering me a little bit-- let me check. No, that's correct. OK, so I have the factor of k. I have a factor of 2 pi here that came from the charge. I have another 2 pi here-- so I have 4pi squared. I think I got everything except the integration over a to infinity dr. There is this r dr, which is from the two dimensional integration. There was another r here, and another r here. So this becomes r to the three. From here, I have minus 2 pi k. And then, I have the corresponding factors of a to the power of 2 pi k minus 4. OK. So you can see that the L's cancel. And what I get is that E-- once I take the E prime to the other side-- becomes E prime times 1 plus-- I have 4pi cubed k y squared-- again, y squared is my small expansion parameter. And then, I have the integral from a to infinity, dr, r to the power of 3 minus 2 pi k, a to the power of 2 pi k minus 4, and then corrections of order y to the fourth. OK. So basically, you see that the internal electric field is smaller than the external electric field by this factor, which takes into account the re-orientation of the dipoles in order to screen the electric field. And it is proportional, in some sense, to the density of these dipoles. And the twist is that the dipoles that we have can have a range of sizes that we have to integrate [INAUDIBLE]. So typically, you would write E prime to be E over epsilon. And so this is the inverse of your epsilon. And essentially, this is a reduction in everything that has to do with electric interactions, because of the screening by the other charges. I can write it in the following fashion.
I can say that there is an effective k-- let's call it k effective-- which is different from the original k that I have. It is reduced by a factor of epsilon. So we were worried when we were doing the nonlinear sigma model that for any [INAUDIBLE], we saw that the parameter k was not getting modified because of the interactions among the spin modes, and that's correct. But really, at high temperatures, it should disappear. We saw that the correlations had to go away from power law form to exponential form. And so, we needed some mechanism for reducing the coupling constant. And what we find here is that these topological defects and their screening provide the right mechanism. So the effective k that I have is going to be reduced from the original k by the inverse of this. Since I'm doing an expansion in y, it is simply k times 1 minus 4pi cubed k y squared, integral a to infinity, dr, r to the 3 minus 2 pi k, a to the 2 pi k minus 4, plus order of y to the fourth. OK. Now actually, in the lecture notes that I have given you, I calculate this formula in an entirely different way. What I do is I assume that I have two topological defects-- so there I sort of maintain the picture of topological defects. And the interaction between them is this logarithmic interaction that has coefficient k. But then, we say that this [INAUDIBLE] interaction is modified because I can create pairs of topological defects, such as this, that will partially screen the interaction. And in the notes, we calculate what the effect of those pairs at lowest order is on the interaction that you have between them. And you find that the effect is to modify the coefficient of the logarithm, which is k, to a reduced k. And that reduced k is given exactly by this form. So the same thing you can get different ways. Yes? AUDIENCE: What if k is too small-- PROFESSOR: A-ha. Good. Because I framed the entire thing as if I'm doing a perturbation theory for you, in y being a small parameter. OK?
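In units where a = 1, the integral in this expression can be done in closed form when it converges: the integral from 1 to infinity of x to the 3 minus 2 pi K is 1 over (2 pi K minus 4), which requires K greater than 2 over pi. A hedged sketch (the variable names and sample values are mine, not the lecture's) of evaluating the screened coupling:

```python
import numpy as np

def k_effective(K, y):
    """Perturbative K_eff = K * (1 - 4*pi^3*K*y^2 * I), with
    I = Int_1^inf dx x**(3 - 2*pi*K) = 1/(2*pi*K - 4), valid for K > 2/pi.
    For K <= 2/pi the integral diverges and the y-expansion is singular."""
    if 2.0 * np.pi * K - 4.0 <= 0.0:
        raise ValueError("K <= 2/pi: integral diverges, "
                         "perturbation theory breaks down")
    integral = 1.0 / (2.0 * np.pi * K - 4.0)
    return K * (1.0 - 4.0 * np.pi**3 * K * y**2 * integral)

print(k_effective(0.8, 0.02))   # a bit below 0.8: screening reduces K
# k_effective(0.6, 0.02) raises, since 0.6 < 2/pi ~ 0.6366
```

The divergence for K below 2 over pi is exactly the point the audience question raises next.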
But now, we see that no matter how small y is, if k is in fact less than 2 over pi-- so this has dimensions of r to the 4 minus 2 pi k. So if k is less than 2 over pi-- which incidentally is something that we saw earlier-- if k is less than that, this integral diverges. So I thought I was controlling my expansion by making y arbitrarily small, but what we see is that no matter how small I make y, if k becomes too small, the perturbation theory blows up on me. So this is yet another example of a singular perturbation theory, which is what we had encountered when we were doing the Landau-Ginzburg model. We thought that our coefficient of phi to the fourth, u, was a small parameter. You are making an expansion naively in powers of u. And then we found an expression in which the coefficient-- the thing that was multiplying u at the critical point-- was blowing up on us. And so the perturbation theory inherently became singular, despite your thinking that you had a small parameter. So we are going to use the same trick that we used for the case of the Landau-Ginzburg model-- that is, deal with singular perturbations by renormalization group. So what we see is that the origin of the problem is the divergence that we get over here when we try to integrate all the way to infinity or the size of the system. So what we do instead is we say, OK, let's not integrate all the way. Let's replace the short distance cut-off that we had with something that is larger-- ba-- and rather than integrating all of a to infinity, we integrate only over short distance fluctuations between a and ba. This is our usual [INAUDIBLE]. So what we therefore get is that the k effective is k times 1 minus 4 pi cubed k y squared, integral from a to ba, dr, r to the 3 minus 2 pi k, a to the 2 pi k minus 4. And then, I have to still deal with 4 pi cubed k y squared, integral from ba to infinity, dr, r to the 3 minus 2 pi k, a to the 2 pi k minus 4, plus order of y to the fourth. OK? All right.
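The way the divergence sets in can be made explicit from the closed form of the finite-cutoff integral. In this sketch (parameter values hypothetical), the integral saturates as the upper cutoff R grows when k is above 2 over pi, but grows without bound when k is below it:

```python
import math

def shell_integral(K, a, R):
    # Closed form of the integral of r^(3-2*pi*K) * a^(2*pi*K-4) from a to R:
    # (R^p - a^p)/p * a^(-p), with p = 4 - 2*pi*K.
    p = 4 - 2*math.pi*K
    return (R**p - a**p) / p * a**(-p)

a = 1.0
K_big, K_small = 1.0, 0.5     # hypothetical values on either side of 2/pi ~ 0.637
vals_big   = [shell_integral(K_big,   a, R) for R in (1e2, 1e4, 1e6)]
vals_small = [shell_integral(K_small, a, R) for R in (1e2, 1e4, 1e6)]
```

For K_big the three values are essentially identical, while for K_small they grow like R to the 4 minus pi, which is the singular behavior that defeats the naive expansion in y.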
So, you can see that the effect of integrating this much is to modify the coupling to a new value which depends on b, which is just k minus 4 pi cubed k squared y squared, integral from a to ba, dr, r to the 3 minus 2 pi k, a to the 2 pi k minus 4. OK? And then, I can rewrite the expression for k effective to be this k tilde, and then whatever is left, which is 4 pi cubed k squared y squared, integral ba to infinity, dr, r to the 3 minus 2 pi k, a to the 2 pi k minus 4, order of y to the fourth. You see that k has been shifted through this transformation by an amount that is order of y squared. So at order of y squared in this new expression, I can replace all the k's that are appearing with k tilde and it would still be correct to this order. Now I compare this expression and the original expression that I had. And I see that they are pretty much the same expression, except that in this one, the cut-off is ba. So I do step two of RG. I define my r prime to be r over b, so that my new cut-off will be back to a. So then, this whole thing becomes: k effective is k tilde minus 4 pi cubed k tilde squared y squared-- because of the transformation that I did over here, I will get a factor of b to the 4 minus 2 pi k tilde-- integral from a to infinity, dr prime, r prime to the 3 minus 2 pi k tilde, a to the 2 pi k tilde minus 4, plus order of y to the fourth. So we see that the same effective interaction can be obtained from two theories that have exactly the same cut-off, a, except that in one case, I had k and y. In the new case, I have this k tilde, or k prime at scale b, and I have to replace where I had y with y times this additional factor. So the two theories are equivalent provided that I say that the new interaction at scale b is the old interaction minus 4 pi cubed k squared y squared times this integral, which is easy to perform. It is just a power law. It is b to the 4 minus 2 pi k, minus 1, divided by 4 minus 2 pi k, plus order of y to the fourth.
And my y prime is y-- from here I see-- times b to the power of 2 minus pi k. AUDIENCE: [INAUDIBLE] PROFESSOR: This k squared? AUDIENCE: Oh, I see. Sorry. PROFESSOR: All right. So, our theory is described in terms of two parameters-- this y and this k, or let's say its inverse-- k inverse, which is more like temperature. And what we will show next time is that these recursion relations, when I draw them here, will give me two types of behavior. One set of behavior parameterizes the low temperature dilute limit, and that corresponds to flows in which y goes to zero. So that when you look at the system at larger and larger length scales, essentially it becomes more and more depleted of these excitations. So, once you have integrated out sufficiently, essentially you don't see any excitations. And then there's another phase, in which, as you do this removal of short distance fluctuations, you tend to flow to high temperatures and large densities. And so, that corresponds to the other kind of phase. Now the beauty of this whole thing is that these recursion relations are exact and allow us to exactly determine the behavior of this phase transition in two dimensions. And that's actually one of the other triumphs of renormalization group-- to elucidate exactly the critical behavior of this transition, as we will discuss next time.
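Taking b close to 1 (b equals e to the dl), the recursion relations above become the differential flow dy/dl equals (2 minus pi k) y and dk/dl equals minus 4 pi cubed k squared y squared. A rough Euler integration of these lowest-order equations shows the two kinds of flow; the starting points are hypothetical values on either side of k equals 2 over pi:

```python
import math

def kt_flow(K0, y0, dl=1e-3, steps=20000):
    # Euler-integrate the lowest-order flow equations:
    #   dK/dl = -4*pi^3 * K^2 * y^2,   dy/dl = (2 - pi*K) * y
    K, y = K0, y0
    for _ in range(steps):
        dK = -4*math.pi**3 * K**2 * y**2 * dl
        dy = (2 - math.pi*K) * y * dl
        K, y = K + dK, y + dy
        if y > 1.0:        # flow has left the dilute regime; stop
            break
    return K, y

# Hypothetical starting points on either side of the transition:
K_low, y_low = kt_flow(K0=0.8, y0=0.01)    # K0 > 2/pi: y flows toward zero
K_high, y_high = kt_flow(K0=0.5, y0=0.01)  # K0 < 2/pi: y grows
```

In the first case the fugacity dies away and k barely moves (the dilute, power-law phase); in the second, y grows until the expansion itself breaks down, which is the high-temperature phase.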
MIT_8334_Statistical_Mechanics_II_Spring_2014 | 8_The_Scaling_Hypothesis_Part_3.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Hey. Let's start. So a few weeks ago we started with writing a partition function for a statistical field that was going to capture the behavior of a variety of systems undergoing critical phase transitions. And this was obtained by integrating over configurations of this statistical field a weight that we wrote down on the basis of a form of locality. And terms that were consistent with that were of the form m squared, m to the fourth, let's say m to the sixth, various types of gradient terms. And in principle, allowing for a symmetry-breaking field that was more in the form of h dot m. And again, we always emphasized that in writing these statistical fields, we have to do averaging. We have to get rid of a lot of short wavelength fluctuations. And essentially, the field m of x, although I write it as a continuum, has an implicit short scale below which it does not fluctuate. OK, so we tried to evaluate this by saddle point, and we didn't succeed. So we went phenomenologically and tried to describe things on the basis of scaling theory, and ultimately there is this renormalization group procedure that we would like to apply to something like this. Now, there is a part of this that is actually pretty easy to solve. And that's when we ignore anything that is higher than second order in m. Because once we ignore them, we have essentially a generalized Gaussian integral. We can do Gaussian integrals. So what we are going to do in this lecture is focus on understanding a lot about the behavior of the Gaussian version of the theory.
Which is certainly a diminished version, because it doesn't have lots of essential things. And then we will gradually put back all of those things that we have not considered at the Gaussian level. In particular, we'll try to deal with them with a version of a perturbation theory. We'll see that standard perturbation theory has some limitations that we will eventually resolve by using this renormalization procedure. OK. So what happens if I do that? Why do I say that that theory is now solvable? And the key to that is, of course, to go into Fourier representation. Which, because the theory that I wrote down has this inherent translational symmetry, Fourier representation decouples the various m's that are currently connected to their neighborhood by these gradients and higher orders. So let's introduce an m of q, which is the Fourier transform of m of x. Let's see. m of x. And these are all vectors. And I should really use a different symbol, such as m [INAUDIBLE], to indicate the Fourier components of this field m of x. But since in the context of renormalization group we had defined a coarse grained field that was m tilde, I don't want to do that. I hope that the argument of the function is sufficient indicator of whether we are in real space or in momentum space. Initially, I'll try to put a tail on the m to indicate that I'm doing Fourier space, but I suspect that very soon I'll forget about the tail. So keep that in mind. So if I-- oops. OK. m of q. So if I go back and write what this m of x is, it is an integral d d q over 2 pi to the d, e to the minus iq dot x, m of q. Now, I also want to at some stage-- since it would be cleaner to have this weight in terms of a product of q's-- remind you that this could have been obtained, if I hadn't gone to the continuum version-- if I had a finite system-- as a sum over q. And the sum over q would be basically over things whose q values are separated by multiples of 2 pi over the size of the system. And e to the minus iq dot x.
This m with the q's that are now discretized. But let's remember that the density of states has a factor of 1 over V. So if I use this definition, I really should put the 1 over V here when I go to the discrete version. And I emphasize this because previously, we had done Fourier decomposition where I had used the square root of V as a normalization. It really doesn't matter which normalization you use at the end as long as you are consistent. We'll see the advantages of this normalization shortly. AUDIENCE: Is there any particular reason for using the different sign in the exponential? PROFESSOR: Actually, no. I'm not sure even whether I used iqx here or minus iqx here. It's just a matter of which one you want to stick with consistently. At the end of the day, the phase will not be that important. So even if we mistake one form or the other, it doesn't make any difference. So if I do that, then again, to sort of be more precise, I have to think about what to do with gradients. Gradients, I can imagine, are the limit of something like m at x plus a, minus m at x, divided by a, if this is a gradient in the x direction. And I have to take the limit as a goes to 0. So when I'm thinking about this kind of functional integral, keeping in mind that I have a shortest length scale, maybe one way to do it is to imagine that I discretize my system over here into spacings of size a. And then I have a variable on each site, and then I integrate over every place, subject to this replacement for the gradient. Again, what you do precisely does not matter here. If you remember, in the first lecture when we were thinking about the lattice system, and then using these kinds of couplings between springs that were connecting nearest neighbors, what ended up happening was that when I Fourier transformed, I had things like cosine. And then when I expanded the cosine close to q equals 0, I generated a series that had q squared, q to the fourth, et cetera.
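The claim that the nearest-neighbor (cosine) coupling generates the whole q squared, q to the fourth, ... series can be checked symbolically. A small sketch (sympy assumed available) expands the standard lattice kernel 2(1 minus cos(qa))/a squared:

```python
import sympy as sp

q, a = sp.symbols('q a', positive=True)

# Fourier transform of the discrete gradient-squared coupling between
# nearest neighbors gives 2*(1 - cos(q*a))/a^2 per bond.
kernel = 2*(1 - sp.cos(q*a))/a**2

# Expand around q = 0: only even powers of q survive.
series = sp.series(kernel, q, 0, 7).removeO().expand()
# series == q**2 - a**2*q**4/12 + a**4*q**6/360
```

So the leading term is exactly the continuum q squared, and the higher gradient terms come in with increasing powers of the lattice spacing a, which is why they are less and less important at long wavelengths.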
So essentially, any discretized version corresponds to an expansion like this, with sufficient [INAUDIBLE] powers of q involved. So at the end of the day, when you go through this process, you find that you can write the partition function, after the change of variables from m of x to m of q, as doing a whole bunch of integrals over different q's. So, essentially you would have-- actually, maybe I will explicitly put the product over q outside to emphasize that essentially, for each q I would have to do independent integrals. Of course, for each q mode I have, since I've gone to this representation of a vector that is n-dimensional, I have to do n integrals on m tilde of q. And if I had chosen the square root of V type of normalization, the Jacobian of the transformation from here to here would have been 1, because it's kind of a symmetric way of writing things. Because I chose this way of doing things, I will have a factor of V to the n over 2 in the denominator here. But again, it's just being pedantic, because at the end of the day, we don't care about these factors. We are interested in things like the singular part of the partition function as it depends on these coordinates. This really just gives you an overall constant. Of course, how many of these constants you have would depend basically on how you have discretized the problem. But it is a constant independent of t and h, not something that we have to worry about. Now what happens to these Gaussian factors? Essentially, I have put the product over q outside. So when I transform, this integral over x of m squared goes over to an integral over q of m of q squared, which then I can write as a product over those contributions. And what you will get is t plus, from here, you will get a K q squared, potentially an L q to the fourth and all kinds of higher order terms that I have included, multiplying this n component vector m of q squared.
Again, reminding you this means m of q dotted with m of minus q, which is the same thing as m star of q, if you go through these procedures over here. There is a 2. And this factor of the V actually will come up over here. So previously, I had used the normalization square root of V, and I didn't have this factor of 1 over V. Now that I have put it there, I will have that factor. Yes? AUDIENCE: m of minus q is star of q only if it is a real field, right? If m is real. PROFESSOR: Yes. And we are dealing with the field m of q of this. AUDIENCE: And in the case of superfluidity? PROFESSOR: In the case of superfluidity? So let's see. So we would have a psi of q, integral d d x, e to the i q dot x, psi of x. If I Fourier transform this, I will get a psi star of q, integral d d x, e to the minus i q dot x, psi star of x. So what you are saying is that in the case where psi of x is a complex number-- I have psi 1 plus i psi 2-- here I would have psi 1 minus i psi 2. So here I would have to make the statement that the real part and the imaginary part come, when you Fourier transform, with an additional minus. But let's remember that something like this that we are interested in is psi 1 squared plus psi 2 squared. So ultimately that minus sign does not make any difference. But it's good to sort of think of all of these issues. And in particular, we are used to thinking of Gaussians, where I would have a scalar and then I would have x squared. When I have this complex number and I have psi of q, psi of minus q, then I have a real part squared plus an imaginary part squared. And you have to think about whether or not you have changed the number of degrees of freedom. If you basically integrate over all q's, you may have problems. You may have at some point to think about the fact that psi of q and psi of minus q star are the same thing. Maybe you have to integrate over just the positive values. But then at each q you will have two different variables, which are the real part and the imaginary part.
So you have to think about all of those doublings and halvings that are involved in this statement. And in the notes, I have a writeup about that, where you can go and precisely check where the factors of one half and two go. But ultimately, it looks as if you're dealing with a simple scalar quantity. So I did not give you that detail explicitly, but you can go and check it-- it is an important issue. The other term that we have: one advantage of this normalization is that h multiplies the integral of m of x, which is clearly this m with a tail for q equals 0. So that's [INAUDIBLE] h dotted with this m [INAUDIBLE]. Yes? AUDIENCE: This is assuming a uniform field? PROFESSOR: Yes, that's right. So we are thinking about the physics problem where we added the uniform field. So if you are for some physical reason interested in a field that varies with position, then this would be h of q, m of minus q. Actually, one reason ultimately to choose this normalization is that clearly what appears here is a sum over q. If I go over to my integral over q, then the factor of 1 over V disappears. So that's one reason-- since mostly after this, going through the details, we'll be dealing with the continuum version-- I prefer this normalization. And we can now do the Gaussian integrals. Basically, there's an overall factor of 1 over V to the n over 2. Then, for each q mode, each one of these Gaussian integrals will give me a factor of root 2 pi times the variance. So I will get 2 pi. The variance is V divided by t plus K q squared plus L q to the fourth, and so forth. Square root, but there are n components, so I will get something like this. And then the term that corresponds to q equals 0 does not have only this part. It will give a contribution even for q equals 0 that is like this, but you also have a term that shifts the center of integration away from m equals 0 because of the presence of the field.
So you will get a term that is exponential of-- essentially, completing the square will give you V divided by 2t times h squared. Now, clearly the thing that I'm interested in is log of Z as a function of t and h. I'm interested in the t and h dependence. So there is a bunch of things that are constants that I don't really care about. And then there is, from here, a minus 1/2-- actually, minus n over 2-- sum over q, log of t plus K q squared and so forth. And plus here, I have V h squared over 2t. So I can define something that is like the free energy, from log of Z divided by the volume. And you can see that once I replace this sum over q with an integral, I will get a factor of volume that takes care of the division. So there's some other constant, and then I have plus n over 2, integral d d q divided by 2 pi to the d, log of t plus K q squared, and so forth, minus h squared divided by 2t. Now, again, the question is what's the range of q's that I have to integrate over, given that I'm making things that are coarse grained. Now, if I were to really discretize my system and, say, put it on a lattice and plot this, then the allowed values of q would live in the Brillouin zone. The Brillouin zone, say, in the different directions in q, would be something of width 2 pi over a-- it would be centered at 0, going from minus pi over a to plus pi over a. Yes? AUDIENCE: The V would disappear, right? PROFESSOR: The V would disappear because I divided by it. So in principle, if I had done the discretization on a cube and plotted this, I would have been integrating over q confined to a cube like this. But maybe I chose some other lattice, like a diamond lattice, et cetera. Then the shape of this thing would change. But what's the meaning of doing the whole thing on a lattice anyway? The thing that I want to do is to make sure that I have done some averaging in order to remove short wavelength fluctuations.
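The statement that log Z reduces to a sum of independent log contributions, one per Fourier mode, can be verified directly on a small discrete example: on a one-dimensional ring, the quadratic form t plus K times the lattice Laplacian is diagonalized by Fourier modes with eigenvalues t plus 2K(1 minus cos q). The parameters below are hypothetical:

```python
import numpy as np

# Gaussian model on a ring of N sites: energy (1/2) * m^T A m with
# A = t*I + K*(discrete Laplacian).
N, t, K = 64, 0.3, 1.0
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = t + 2*K
    A[i, (i+1) % N] -= K
    A[i, (i-1) % N] -= K

# log Z = const - (1/2)*log det A; compare det A computed directly
# against the product over Fourier-mode eigenvalues t + 2K*(1 - cos q).
sign, logdet = np.linalg.slogdet(A)
q = 2*np.pi*np.arange(N)/N
logdet_fourier = np.sum(np.log(t + 2*K*(1 - np.cos(q))))
```

The two log-determinants agree to machine precision, which is exactly the "product over q of independent Gaussian integrals" structure used in the lecture.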
So a much more natural way to do that averaging and removing of short wavelength fluctuations is to say that my field has only Fourier components that go from 0 to some maximum value of lambda, which is the inverse of some averaging length. And if you are worried about the difference in integration between doing things on this nice sphere that has nice symmetry and maybe doing it on a cube, then the difference is essentially the bit of integration that you would have to do over here. But the function that you are integrating has no singularities for large values of q. You are interested in the singularities of the function when t goes to 0. And then the log has singularities when its argument goes to 0. So I should be interested, as far as singularities are concerned, only in the vicinity of this point anyway. What I do out there-- whether I replace the sphere with the cube, et cetera-- will add some other non-singular term over here, which I don't really care about. Actually, if I do that, these non-singular terms here could actually be functions of t. But they would be perfectly regular functions of t-- like constant plus alpha t plus beta t squared, et cetera-- that have no singularities. So if I'm interested in singularities, I am going to be focused on that. Now actually, we encountered this integral before when we were looking at corrections to the saddle-point approximation. And if you remember, what we did then was to take, let's say, the heat capacity C at h equals 0 by taking two derivatives of this free energy with respect to t. And then we ended up with an integral. There's a minus sign here: n over 2, integral d d q over 2 pi to the d. Taking two derivatives of the log: the first derivative will give me 1 over the argument, the second derivative will give me 1 over the argument squared, with a sign that takes care of the minus sign. Now, this is the kind of integral where, after I have focused on the singular part, I can do the integration over a sphere.
Now, when I integrate over a sphere, I may be concerned about what's going on at small values of q. At small values of q, as long as t is around, I have no problem. When t goes to 0, I will have to worry about the singularity that comes from 1 over K q squared, et cetera. So that's really the singularity that I'm interested in. Exactly what happens at large q, I'm not really all that interested in. And in particular, what I can do is I can rescale things. I can call K q squared over t, x squared. So I can essentially make that change over there, so that whenever I see a factor of q, I replace it with t over K to the 1/2, times x. What happens here? I have, first of all, n over 2. I have 1 over 2 pi to the d. Writing this in terms of spherical symmetry, I will have the solid angle in d dimensions. And then I would have q to the d minus 1, dq. Every time I put a factor of q, I can replace it with this. So I would have a t over K to the power of d over 2. And then I have my integral that becomes dx, x to the d minus 1, over 1 plus x squared plus potentially higher order things like this, squared. Now, the upper cut-off for x is in fact square root of K over t times lambda. And we are interested in the limit of when t goes to 0. So that upper limit is essentially going to infinity. Now, whether or not this integral-- if I'm allowed to ignore higher order terms and focus on the first term-- exists really depends on whether the net power of x at large x, d minus 1 minus 4, makes it converge or diverge. And in particular, if I'm allowed to get rid of all those higher order terms-- and basically, the argument for that is that the things that would go with x to the fourth, et cetera, would carry additional factors of t, and hopefully go away as t goes to 0-- then I will get an integral like this. This will exist only if I am in dimensions d that is less than 4. Yes? AUDIENCE: Are you missing the factors of t over t that comes with the denominator? PROFESSOR: Yes. There is a factor of 1 over t here.
So I have to pull out the factor of t, writing this as t times 1 plus K q squared over t, et cetera. So there is a factor of 1 over t. AUDIENCE: t squared. PROFESSOR: And that's a factor of t squared, because there are two powers. So if I'm in dimensions d less than 4, what I can write is that this C singular, as t goes to 0-- the leading behavior-- this integral goes to a constant. So as we discussed, after all of the mistakes that I made, there will be some overall coefficient A. The power of t will be d over 2 minus 2: d over 2 came from the integrations, 1 over t squared came from the denominator. And then if I were to expand all of these other terms that we've ignored-- higher powers of t-- here I will get various series that will correct this. But the leading t dependence in dimensions less than 4 is this thing that we had seen previously. Now I can take this, and you see that in dimensions d less than 4, this is a singular term that is divergent. If I were to ask what kind of free energy gave rise to this, then I would say that the free energy must have had some singular part that was proportional to t to the d over 2, so that when I take two derivatives, I get something like this. Of course, the free energy could also have had a term that was linear in t; I wouldn't have seen it. So there is a singular part. Essentially, if I were to do that integral in dimensions less than four, I will get a leading singularity that is like this. I will get additional terms-- a constant, t, t squared, et cetera-- and singular terms that are subleading to this one. And then, of course, I have a term that is minus h squared over 2t if I were to include this here. So why don't I write the answer as t to the d over 2 times, A minus B, h divided by t to the 1/2 plus d over 4, the whole thing squared. So what I did was essentially I divided and multiplied by t to the d over 2 and put the whole thing in the form of h divided by t to something, squared.
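The claimed t to the d over 2 minus 2 singularity is easy to check numerically in, say, d equals 3, where it predicts C singular growing like t to the minus 1/2. A sketch, with hypothetical K and cutoff, and a simple trapezoid rule:

```python
import numpy as np

K, Lam = 1.0, 1.0   # hypothetical coupling and momentum cutoff

def C_integral(t, n=2_000_001):
    # d = 3 version of the heat-capacity integral: integral of
    # q^2/(t + K*q^2)^2 from 0 to Lam, on a fine uniform grid.
    q = np.linspace(0.0, Lam, n)
    f = q**2/(t + K*q**2)**2
    return (f.sum() - 0.5*(f[0] + f[-1]))*(q[1] - q[0])

# If C ~ t^(d/2 - 2) = t^(-1/2), then sqrt(t)*C should approach a constant
# (pi/(4*K**1.5) for the infinite-cutoff integral) as t -> 0.
ratios = [np.sqrt(t)*C_integral(t) for t in (1e-2, 1e-3, 1e-4)]
```

As t decreases, sqrt(t) times the integral settles toward pi over 4 for these parameters; the approach is slow because of the regular (cutoff-dependent) background terms, which is exactly the constant, t, t squared series mentioned above.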
Why did I do that? It's because we had previously written a singular form for the free energy in the scaling picture that had t to the 2 minus alpha in front, times a function of h over t to the delta. And all I wanted to emphasize is that in this picture, 2 minus alpha is d over 2, and the thing that we call the gap exponent is 1/2 plus d over 4. Of course, I can't use this theory as a description of the phase transition. And the reason for that is that the Gaussian theory exists and is well-defined only as long as t is positive. Because once t becomes negative, then the weight essentially becomes ill-defined. Because if I look at the various weights that I have here-- certainly the weight for q equals 0, it is proportional to minus t over 2V. If the t changes sign, rather than having a Gaussian, I have essentially a weight that is maximized at [INAUDIBLE]. So clearly, again, by issues of stability, the theory for t negative does not describe a stable theory. And that's why m to the fourth and all of those terms will be necessary to describe that side of the phase transition. So if you like, this is a kind of a description of a singularity that exists only in this half of the space. Kind of reminiscent of coming from the disordered side, but I don't want to give it more reality than that. It's a mathematical construct. If we want to venture to make the connection to the actual phase transition, we have to include the m to the fourth. Now, the only reason to go and recap this Gaussian theory is that, since it is solvable, we can try to use it as a toy model to apply the various steps of renormalization group that we had outlined last lecture. And once we understand the steps of renormalization group for this theory, then it gives us an anchoring point when we describe the full theory that has m to the fourth, et cetera-- how to sort of start with the renormalization approach to the theory that we understand, and then do the more complicated one.
So essentially, as I said, it's not really a phase transition that can be described by this theory. It's a singularity. But its value is that it is this fully solvable anchoring point for the full theory that we are describing. So what we want to do is to do an RG for the Gaussian model. So what is the procedure? We have a theory best described in the space of variables q, the Fourier variables, where I have modes that exist between 0-- very long wavelength-- and lambda, which is the inverse of some shortest wavelength that I'm allowing. And so basically, I have a bunch of modes m of q that are defined in this range of q's. The first step of RG was to coarse grain. The idea of coarse graining was to change the scale over which you were doing the averaging from some a to ba. So average from a to ba of fluctuations. So once I do that, at the end of the day I have fluctuations whose minimum wavelength has gone from a to ba. So that means that q max, after I go and do this procedure, is the previous q max that I had, divided by a factor of b. So basically, at the end of the day I want to have, after coarse graining, variables that only exist up to lambda over b, whereas previously, they existed beyond that. So this is very easy at this level. All I have to do is to replace this m tilde of q in terms of two sets. I will call it sigma if q is greater than this lambda over b. That is, everybody that is out here, their q I will call q greater. Everybody that is here, their q I will call q lesser. And all the modes that were here, I will give them a different name. The ones here I will call sigma. The ones here, with q less than lambda over b, will get called m tilde. So I just renamed my variables. So essentially, right here I had integration over all of the modes. I just renamed some of the modes-- the ones that are inside, with q lesser, m tilde; and the ones that are outside, with q greater, sigma. So what do I have to do for my Gaussian theory?
Let's write it, rather than in this form that was discrete, in terms of the continuum. I have to integrate over all configurations of these Fourier modes. So I have these m tilde of q's. And the weight that I have to assign to them when I look at the continuum is exponential of minus integral d d q over 2 pi to the d, t plus K q squared, and so forth, m tilde of q squared. And then I had the one term that was h m of 0. What I have done is to simply rewrite this as two sets of integrations: over the-- whoops. This was m. Let's call it sigma first-- sigma of q greater-- and integrate over m tilde of q lesser. And actually, you can see that the modes here and the modes here don't talk to each other. And that's really the advantage of doing the Gaussian theory. And the thing that allowed me to solve the problem here will also allow me to do the coarse graining there. Once we add things like m to the fourth, then I will have couplings between modes that go across between the two sets. And then the problem becomes difficult. But now that I don't have that, I can actually separately write the integral as two parts. And this is for q lesser. And for each one of them, I essentially have the same weight. The integral over q greater goes between lambda over b and lambda. The integral over m tilde of q lesser is essentially the same thing: exponential of minus integral from 0 to lambda over b, d d q lesser over 2 pi to the d, t plus K q lesser squared, and so forth, m tilde of q lesser squared. And then I have the additional term which sits at 0. It is part of the modes that are assigned with q lesser. OK? Fine. Nothing particularly profound here. In fact, it's very simple. It's just renaming two sets of modes. And the averaging that I have to do-- getting rid of the fluctuations at short wavelength-- here is not at all tricky. Because this is just a bunch of integrations like the ones I had to do over here, but only over things that are sitting close to the edge of this [INAUDIBLE].
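Because the Gaussian modes decouple, the partition function factorizes exactly over the two sets of modes, which one can see on a small discrete example. Everything below (lattice size, parameters, the choice b equals 2) is hypothetical:

```python
import numpy as np

# Coarse graining the 1D Gaussian chain in Fourier space: since the modes
# decouple, log Z splits exactly into a (lambda/b, lambda) shell piece
# (the sigma modes) and a (0, lambda/b) piece (the m-tilde modes).
N, t, K, b = 128, 0.5, 1.0, 2.0
q = 2*np.pi*np.fft.fftfreq(N)          # lattice modes in (-pi, pi]
eig = t + 2*K*(1 - np.cos(q))          # Gaussian eigenvalue of each mode
lam = np.pi                            # cutoff
keep = np.abs(q) < lam/b               # q-lesser: the m-tilde modes
shell = ~keep                          # q-greater: the sigma modes

# Up to t- and h-independent constants:
logZ_all = -0.5*np.sum(np.log(eig))
logZ_split = -0.5*np.sum(np.log(eig[keep])) - 0.5*np.sum(np.log(eig[shell]))
```

The shell piece is the analytic "constant" discussed next; all of the interesting t-dependence near the singularity lives in the kept modes.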
So essentially, the integrations over these modes is doing this integral over here, from lambda over b to lambda, and none of the singularities has anything to do with the range of integration from lambda over b to lambda. So the result of doing all of that is simply just a constant-- well, not quite a constant. It's a function of t that is completely non-singular and has a nice expansion in powers of t. That's the kind of thing I call a non-singular function. The point is that eventually, if you take sufficiently high derivatives of this, the t dependence [INAUDIBLE]. So all of the interesting thing is really in this m tilde of q lesser. And really, the eventual process of renormalization in this picture is something like this: all of the singularities are sitting at the center of this kind of orange-shaped entity. And rather than biting the whole thing, you kind of cut it slowly and slowly from the edge, approaching where all of the exciting things are at the center. For this problem of the Gaussian, it turns out to be trivial to do so. But for the more general problem, it can be interesting, because the procedure is the same. We are interested in what's happening here, but we gradually peel off things that we know don't cause anything difficult for the problem. So then I have to multiply with this, and I have found in some sense a probability for configurations of the coarse grained system, which is simply given by this. But then renormalization group has two other steps. The second step was to say, well, in real space, as we said, the picture that is represented by these coarse grained variables is grainy. If my pixels were previously one by one by one, now my pixels are b by b by b. So I can make my picture have the same resolution as my initial picture if I rescale all of the lengths by a factor of b. In momentum representation, or Fourier representation, it corresponds to rescaling all of the q's by a factor of b.
And clearly, what that serves to achieve is that if I replace q lesser with q prime over b, then the maximum value will go back to lambda as q prime goes from 0 to lambda. So by doing this transformation, I can ensure that the upper cut-off is, in fact, lambda again. Now, there was another thing, which in real space we said that we defined m prime to be m tilde rescaled by some factor zeta. I had to do a change of the contrast. I do have to do the same change of contrast here, except that there the variable that I was dealing with was in x coordinates. What I want to do here is in the q coordinates. So I will call m tilde prime of q prime to be m tilde divided by a factor of z. The difference between the z and the zeta, which is real space versus Fourier space, is just the fact that in going from one to the other, you have to do integrations over space. So dimensionally, there is a factor of b to the d difference between the rescaling of this quantity and that quantity, and if you want to use one or the other, zeta is b to the minus d times z. But since we will be doing everything in Fourier space, we will just use this z factor. So if I do that, what do I find? I find that Z of t and h is the exponential of some non-singular dependence. And then I have to integrate over these new variables, m prime of q prime. Yes? AUDIENCE: In your real space renormalization your m tilde is a function of an x. But in your Fourier space representation your m tilde is a function of q prime? PROFESSOR: I guess I could have written here x prime, also. It doesn't really matter. So what do we have here? We have exponential of minus the integral. The integration for q prime now is going back to 0 to lambda. I have d d q prime divided by 2 pi to the d. Now, you see that every time I have a q lesser, I have to go to q prime by introducing a factor of b inverse. So there will be a total factor of b to the minus d that comes from this integration. And that will multiply t.
That will multiply k by b to the minus d. But then here I have two q's because of the q squared there. Again, doing the same thing, I will get b to the minus d minus 2-- d plus 2, if you like. And then the next one, L, would be L b to the minus d minus 4. And you can see that as I have higher and higher derivatives of q, I get higher and higher powers with negative [INAUDIBLE]. But then I have m tilde that I want to replace with m prime. And that process will give me a factor of z squared. And then I have m prime of q prime squared. There is no integration for this last term. It's just one mode. But each mode I have rescaled by a factor of z. So I will have a term that is z h dot m prime of 0. So what we see is that what we have managed to do is to make the Gaussian integration over here precisely the same thing as the Gaussian integration that I started with. So I can conclude that this function of t and h that I am interested in has a part that is non-singular. But its singular part is the same as the same Z calculated for a bunch of new parameters. And in particular, the new t is b to the minus d z squared times the old t. The new k is b to the minus d minus 2 z squared k. The new L would be b to the minus d minus 4 z squared L, and so forth. And the new h is z h. Yes? AUDIENCE: There should be q prime squared and q prime 4? PROFESSOR: Yes. Yes. This is my day to do a lot of algebraic errors. OK. So what is the change in parameters? So I wrote it over there. So this kind of captures the very simplest type of renormalization. Actually, all I did was a scaling analysis. If I were to change positions by a factor of b and change the magnitude of my field m by a factor z or zeta, this is the kind of result that I will get. Now, how can we make this capture the kind of picture that we have over here in the language of renormalization? We want to be able to change two parameters and reach a fixed point. So we know that to be at the critical point, t and h have to go to 0.
They are the variables that determine essentially whether you are at this self-similar point. So if t and h are set to 0, the next most important term that comes into play is k prime, which is some function of k. And if I want to be at the fixed point, I may want to choose the factor z such that k prime is the same as k. So choose z such that k prime is k. And that tells me immediately that z would be b to the power of 1 plus d over 2. If I choose that particular form of z, then what do I get? I get t prime is z squared b to the minus d times t. So when I do that, I will get b squared t. I get that h prime is just z times h. So it is b to the 1 plus d over 2 times h. These are both directions such that, as b becomes larger than 1, t prime becomes larger than t and h prime becomes larger than h. These are relevant directions. I would associate with them eigenvalues y t, which is 2, and y h, which is 1 plus d over 2. So if I go according to the scaling construction that we had before, f singular of t and h is t to the power d over y t, times some scaling function of h over t to the power y h over y t. This is what we have established before. With these values I will get t to the d over 2, times some scaling function of h over t to the power of 1/2 plus d over 4. We can immediately compare this expression and this expression that we have over here. Yes? AUDIENCE: Wait. What's the reason to choose k as the parameter that maps onto itself and not L? PROFESSOR: OK. I'll come to that. So having gone this far, let's see what L is doing. So if I put here-- you can see that clearly L has b to the minus 2 compared to k. So given the way that we established z, L prime is b to the minus 2 L. If I had a higher derivative, it would be b to a minus larger number, et cetera. So L, and all of these other terms, are irrelevant variables. So essentially, under rescaling, under looking at the system at larger and larger scale, they will go to 0.
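As a quick sanity check on these Gaussian exponents, the numbers above can be tabulated in a few lines (an illustrative sketch of my own, not part of the lecture): with z = b to the 1 plus d over 2 chosen so that k prime = k, the recursions t prime = b to the minus d, z squared t and h prime = z h give y t = 2 and y h = 1 plus d over 2, so the free-energy exponent is d over 2 and the gap exponent is 1/2 plus d over 4.

```python
from fractions import Fraction

def gaussian_exponents(d):
    """Gaussian fixed point: choose z = b^(1+d/2) so that k' = k.

    Then t' = b^(-d) z^2 t = b^2 t        -> y_t = 2
    and  h' = z h          = b^(1+d/2) h  -> y_h = 1 + d/2.
    """
    d = Fraction(d)
    y_t = Fraction(2)
    y_h = 1 + d / 2
    return {
        "y_t": y_t,
        "y_h": y_h,
        "alpha_exponent_d_over_yt": d / y_t,   # f_sing ~ t^(d/2)
        "gap_exponent_yh_over_yt": y_h / y_t,  # argument h / t^(1/2 + d/4)
    }

for d in (2, 3, 4):
    print(d, gaussian_exponents(d))
```

For d = 3 this reproduces y h = 5/2 and the gap exponent 5/4 = 1/2 + 3/4 stated in the lecture.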
And I do get a system that has the same topological structure as what I had established here. Because I have to tune two parameters in order to reach the critical point. Let's say I had chosen something else. If I had chosen z such that L prime equals L, I could do that. Then all of the terms with higher powers of q in this [INAUDIBLE] would all be irrelevant. But then I would have k, t, and h all as relevant variables. So yeah, it could be that there is some physics there. I mean, certainly mathematically I can ask the system what happens if k goes to 0. I kind of ignored the k dependences that I have in all of these expressions, but there are going to be singular dependences on k. So if there is indeed some experimental system in which you have to tune, in addition to temperature, something that has to do with the way that the spins or degrees of freedom are coupled to each other, and that coupling changes sign from being positive to being negative-- you go from one type of behavior to another type of behavior-- maybe this would be a good thing for it. But you can see the kind of structure you would get: if k goes to 0, you go from a structure where things want to be in the same direction to things that want to be anti-parallel. And then clearly you need higher order terms to stabilize things so that your singularity does not go all the way to 0 wavelength, et cetera. So one can actually come up with physical systems that kind of resemble that, where there is some length scale that is spontaneously chosen. But for this very simplest thing that we are doing, this is what is going on. But you could have also asked the other question. So clearly we understand what happens if you choose z so that some term is fixed and everything above it is relevant, everything below it is irrelevant. But why not choose z such that t is fixed? So if z is b to the d over 2, then t prime equals t.
If I choose that, then clearly the coupling k will be irrelevant. So this is actually a reasonable fixed point. It's a fixed point that corresponds to a system where k has gone to 0, which means that the different points don't talk to each other. Remember, when we were discussing the behavior of correlation lengths at fixed points, there were two possibilities-- either the correlation length was infinite or it was 0. So if I choose this, then k prime will eventually go to 0. I go towards a system in which the degrees of freedom are completely decoupled from each other. It is perfectly well-behaved fixed-point behavior that corresponds to 0 correlation length. And you can see that if I go through this formula that I told you over here, zeta in real space would be b to the minus d over 2. And what that means is that if you average independent variables over a block of size b, the scale of fluctuations, because of the central limit theorem, is the square root of the volume. So that's how it scales. So essentially, what's at the end of the story? It's a behavior in which there is only one coefficient left-- forget about h. The eventual weight is just t over 2 m squared at different points. That's the central limit theorem. So through a different route, we have rediscovered, if you like, the central limit theorem. Because if you average lots of uncorrelated variables, you will generate Gaussian weights. So what we are really after in this language is how to generalize the central limit theorem-- how to find the analog of a Gaussian for degrees of freedom that are not uncorrelated but talk to their neighborhood. So the kinds of field theory that we are after are these generalizations of the central limit theorem to the types of field theories that have some element of locality. AUDIENCE: Question. PROFESSOR: Yes. AUDIENCE: So wherever you can define the renormalization you're finding different z's? PROFESSOR: Yes.
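The decoupled fixed point is just the central limit theorem in disguise. A small simulation (my own sketch; the block sizes and sample count are arbitrary choices) checks that block averages of uncorrelated plus/minus-one spins fluctuate on a scale that shrinks like one over the square root of the block volume, which is exactly the zeta = b to the minus d over 2 rescaling found above.

```python
import random
import statistics

random.seed(0)

def block_average_spread(n_sites, n_blocks=4000):
    """Standard deviation of the average of n_sites independent +/-1 spins.

    The central limit theorem predicts roughly 1/sqrt(n_sites),
    i.e. zeta = b^(-d/2) for a block of b^d uncorrelated sites.
    """
    averages = []
    for _ in range(n_blocks):
        s = sum(random.choice((-1, 1)) for _ in range(n_sites))
        averages.append(s / n_sites)
    return statistics.pstdev(averages)

for n in (16, 64, 256):
    print(n, round(block_average_spread(n), 4), round(n ** -0.5, 4))
```

Each printed pair should agree to within a few percent, the residual difference being sampling noise.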
AUDIENCE: We can tune how many parameters we want to be able to-- PROFESSOR: Exactly. Yes. And that's where the physics comes into play. Mathematically, there's a whole set of different fixed points that you can construct by choosing different z's. You have to decide which one of them corresponds to the physical problem that you are working on. AUDIENCE: Yes. So then the fixed point stops being just defined by the nature of the system, but it also depends on how we define renormalization? On mathematical descriptions and-- PROFESSOR: If by how we define renormalization you mean how we choose z, yes, I agree with you. Yes. But again, you have this possibility of looking at the system at different scales. But we have been very agnostic about what that system is. And so you have many ways of doing things. Ultimately, you need some reality to come and choose among these different ways. Yes? AUDIENCE: So you do want to keep k a relevant variable in these problems, right? PROFESSOR: No. I make k to be a fixed variable. AUDIENCE: Oh, exactly. Why don't you add a small amount, like an epsilon, to the power of b at the fixed point z? Plus or minus, doesn't matter. Why the equality assumption exactly? And the smaller one doesn't change anything? All the other variables like L become irrelevant? PROFESSOR: OK. So the point is that it is b raised to some power. So here I had, I don't know, k-- k prime was k. And you say, why not k b to the epsilon? AUDIENCE: Yeah, exactly. PROFESSOR: Now, the thing that I'm interested in is what happens at larger and larger scale. So in principle, I should be able to make b as large as I want. So I don't have the freedom that you mentioned. And you are right in the sense that, OK, what does it mean whether this ratio is larger than or smaller than one?
But the point is that once you have selected some parameter in your system-- L or whatever you have, some value-- you can, by playing around with this, choose a value of b for any epsilon such that you reach that limit. So by doing this, you in a sense have defined a length scale. The length scale would depend on epsilon, and you would have different behaviors, whether you are shorter than that length scale or larger than that length scale. So this has to be done precisely because of this freedom of making b larger, and so on. Now, if you are dealing with a finite system and you can't make your b much larger than something or whatever, then you're perfectly right. Yes? AUDIENCE: Physically, z or zeta should be whatever type of quantity is needed to actually make it look exactly the same-- whatever keeps coming out. PROFESSOR: Exactly, yes. That's right. AUDIENCE: And then we know, because we already know that we have two relevant variables, that z has to look this way for a system that has two relevant variables. PROFESSOR: For the Gaussian one, right. AUDIENCE: Yeah. But then if we had a different kind of system, then actually, just going from the physical perspective, we would need a different z to make things look the same. And that would give us a different number of variables here. PROFESSOR: Yes. That's right. Now, it turns out that practically, in all cases, we are either dealing with a phase that has 0 correlation length-- and then this Gaussian behavior and central limit theorem is what we are dealing with, and the averaging is by 1 over volume-- or we have something that is pretty close to this fixed point [INAUDIBLE] that we have now discovered, which is just the gradient squared. And that has its own scaling according to these powers that I have found here, and I will explain that more deeply. It turns out that at the end of the day, when we look at real phase transitions, all of these exponents will change, but not too much.
So this Gaussian fixed point is actually in some sense rather close to where we want to end up. So that's why it's also an important anchoring point, as I just mentioned. Again, I said that essentially what we did was take the weight that we had originally, and we did a rescaling. So basically, we replace x by-- let me get the directions there. So we replace x by b x prime. If I had started in real space, I would have replaced m-- m after getting rid of some degrees of freedom-- with zeta m prime. Before, I just do that to the weight that I had written before. There was a beta H, which was, as we could write, the integral d d x of t over 2 m squared, u m to the fourth and higher order terms, k over 2 gradient of m squared, L over 2 Laplacian of m squared, and so forth. Just do this replacement of things. What do I get? I get that t prime is b to the d-- whenever I see x, I replace it with b x prime; whenever I see m, I replace it with zeta m prime-- so I get here the zeta squared. u prime would be b to the d zeta to the fourth. k prime would be b to the d minus 2 zeta squared. L prime would be b to the d minus 4 zeta squared, and so forth. Essentially, all I did was replace x with b times x prime and m with zeta m prime. If I do that throughout, you can see how the various factors will change. So I didn't do all of these integrations, et cetera, that I did over here. I just did the dimensional analysis, if you like. And within that dimensional analysis now in real space, if I set k prime to be k, you can see that zeta is b to the 2 minus d over 2. And again, you can see that once I have fixed k, all of the things that have the same power of m but two higher derivatives would get a factor of b to the minus 2, just as we had over here. Again, with this choice, you can check that if I put it back here, I would get b squared. But let's imagine that I have a generalization with other powers of m.
If I have a term that multiplies m to some power p-- with a coefficient u p-- then under this kind of rescaling I will get u p prime is b to the d zeta to the power p, times u p. And with this choice of zeta, what do I get? I will get b to the d, and then I will get, plus, p times 1 minus d over 2, times u p. Which I can define to be b to some power y p times u p. Look here to make sure. So my y p, the dimension of something that multiplies m to some power p, is simply p plus d times 1 minus p over 2. And let's check some things. y 1 would correspond to a magnetic field, something that is proportional to m itself. And if I put p equal to 1, I will get 1 plus d over 2. And that is, indeed, the y h that we had over here: 1 plus d over 2. So this is y h. If I ask what is multiplying m squared, I put p equals 2 here. I will get 2, and then here I would get 1 minus 2 over 2, which is 0. So that's the same thing. This is the thing that we were calling before y t. We didn't include any m cubed term in the theory-- it didn't make sense to us. But we certainly included the u that was multiplying m to the fourth. AUDIENCE: So is the p [INAUDIBLE] in the y p? PROFESSOR: It is p plus d times 1 minus p over 2. I just rewrote it. If I look at y 4, here would be 4, and then I would put 1 minus 4 over 2, which is 1 minus 2, which is minus 1. So I would get 4 minus d. If I look at y 6, I would get 6 minus 2d, and so forth. So if I just do dimensional analysis, and I say that I start with a fixed point that corresponds to gradient of m squared, and everybody else 0, and I ask: if, in the vicinity of the fixed point where k is fixed and everybody else is 0, I put in a little bit of any of these other terms, what happens? And I find that what happens is that certainly the h term, the term that is linear, will be relevant. The term that is m squared is relevant. Whether or not all the other terms in the series-- like m to the fourth, m to the sixth, et cetera-- will be relevant depends on dimension.
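The counting just described can be tabulated directly (a sketch of my own based on the lecture's formula): y p = p + d(1 - p/2), with a coupling relevant when y p is positive, irrelevant when negative, and marginal at zero.

```python
from fractions import Fraction

def y_p(p, d):
    """Gaussian dimension of the coupling multiplying m^p,
    using zeta = b^(1 - d/2) so that k is held fixed:
    u_p' = b^d * zeta^p * u_p = b^(p + d(1 - p/2)) * u_p."""
    p, d = Fraction(p), Fraction(d)
    return p + d * (1 - p / 2)

d = 3
for p in (1, 2, 4, 6):
    y = y_p(p, d)
    status = "relevant" if y > 0 else ("marginal" if y == 0 else "irrelevant")
    print(f"p={p}: y_p = {y} ({status})")
```

In d = 3 this reproduces y 1 = 5/2, y 2 = 2, y 4 = 4 - d = 1 (still relevant, which is the problem flagged next), and y 6 = 6 - 2d = 0.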
So once more we've hit upon this dimension of four. The term m to the fourth that we said is crucial to getting this theory to have some meaning-- and there's no reason for it to be absent-- is, in fact, relevant. In fact, close to three dimensions you would say that that's really the only other term that is relevant. And you'd say, well, it's almost good enough. But almost good enough is not sufficient. If we want to describe a physical theory that has only two relevant directions, we cannot use this fixed point, because this fixed point has three relevant directions in three dimensions. We have to deal with this somehow. So what we will do next is to explicitly include this m to the fourth. In fact, we will include all the other terms, also. But we will see that all the other terms, all the higher powers, are irrelevant in the same sense that all of these higher derivative terms are irrelevant. But that m to the fourth term is something that we really have to take care of. And we will do that.
MIT_8334_Statistical_Mechanics_II_Spring_2014 | 14_Position_Space_Renormalization_Group_Part_2.txt

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. Let's start. So last lecture we started on the topic of doing renormalization in position space. And the idea, let's say, was to look at something like the Ising model, whose partition function is obtained, if you have N sites, by summing over all the 2 to the N configurations of a weight that tends to align binary variables that are next to each other. And this next to each other is indicated by this nearest neighbor symbol, sigma i, sigma j. Potentially, we may want to add a magnetic field-like term that [INAUDIBLE]. The idea of the renormalization group is to obtain a similar Hamiltonian that describes interactions among spins that are further apart. We saw that we could do this easily in the case of the one-dimensional system. Well, let's say you have a line, and you have sites on the line. Each one of them wants to make its neighbor parallel to itself. And what we saw was that I could easily get rid of every other spin and keep one set of spins. If I do that, I get a partition function that operates between the remaining spins. It was very easy to sum over the two values that the spin in between these two could have and conclude that after this step, which corresponds to removing half of the degrees of freedom, I get a new interaction, K prime, which was 1/2 log hyperbolic cosine of 2K, if h was zero. And we saw that that, which is also a prototype of other systems in one dimension, basically is incapable of giving you long-range order or a phase transition at finite temperature, which corresponds to finite K.
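The one-dimensional decimation just recalled, K prime = (1/2) log cosh 2K, can be iterated numerically (a small sketch of my own) to watch the flow: any finite starting K shrinks toward zero, which is the statement that there is no finite-temperature transition in one dimension.

```python
import math

def decimate_1d(K):
    """One decimation step for the 1d Ising chain at h = 0:
    summing out every other spin gives K' = (1/2) * ln(cosh(2K))."""
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 2.0  # a fairly strong initial coupling
flow = [K]
for _ in range(8):
    K = decimate_1d(K)
    flow.append(K)

print([round(k, 4) for k in flow])
# The couplings decrease monotonically toward the K = 0 fixed point.
```

For small K the recursion behaves like K prime of order K squared, so the approach to zero accelerates; only K at infinity (zero temperature) survives as the ordered fixed point.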
So the only place where you could have potentially ordering over large length scales is when K becomes very large, at zero temperature. We saw how the correlation length behaves and diverges as you approach zero temperature in this type of model. Now, the next step would be to look at something that is two-dimensional. And in this context, I described how it would be ideal if, let's say, we start with a square lattice. We have interactions K between neighbors. And we could potentially do the same thing. Let's say remove one sublattice of spins, getting interactions among the other sublattice of spins. That would again correspond to removing half of the spins in the system. But in terms of length scale change, it corresponds to square root of 2: this length compared to the old length is different by a factor of square root of 2. But the thing that I also indicated was that this spin is now coupled to all four of them. And once I remove the spin, I can generate new interactions operating between these spins. In fact, you will generate also a four-spin interaction, so your space of parameters is not closed under this procedure. The same applies to all higher dimensional systems, and so they are not really solvable by this approach unless you start making some approximations. So the particular approximation that I introduce here was done and applied shortly after Kadanoff brought forth this idea of removing degrees of freedom and renormalization, by-- I'll write this once, and hopefully I will not make a mistake-- Niemeijer and van Leeuwen. And it's a kind of cumulant expansion, as I will describe shortly. It's an approximation. And rather than the square lattice, it is applied to the triangular lattice. And that's going to be the hardest part of this class for me, to draw a triangular lattice. Not doing a good job. So basically, we put our spins, sigma i plus minus 1, on the sites of this. And we put an interaction K that operates between neighbors.
And we want to do a renormalization in which we reduce the number of degrees of freedom. What Niemeijer and van Leeuwen suggested was the following. You can group the sites of the triangular lattice into three sublattices. I have indicated them by 1, 2, 3. Basically, there is going to be some selection of sublattice sites on this lattice. What they suggested was to basically define cells-- so this would be one cell, this would be another cell, this would be another cell, this would be another cell over here-- such that every site of the original lattice belongs to one and only one of these cells. OK. So basically, I guess the next one would come over here. All right. So let's call the sites by label i. And let's give the cells labels that I will indicate by Greek. So sites I will indicate by i, j; cells by Greek letters, alpha, beta, et cetera. So that, for example, we can regard this as cell alpha, and this one, this triangle, as forming cell beta. Now, the idea of Niemeijer and van Leeuwen was that to each cell, we are going to assign a new spin that is reflective of the configuration of the site spins. So for this, they proposed the majority rule. Basically, they said that we call the spin for cell alpha the majority of the site spins. So basically, if all three spins are plus or all three of them are minus, you basically go to plus or minus. If two of them are plus and one of them is minus, you would choose the one that is in the majority. So you can see that this also has only two possibilities, plus 1 and minus 1, which would not have been the case if I had tried to take a majority of two sites. It would have worked if I had chosen a majority of three sites on a one-dimensional lattice, clearly. So that's the rule. So you can see that for every configuration that I have originally, I can do this kind of averaging and define a configuration that exists for the cells.
And the idea is that if I weight the initial configurations according to the weight where the nearest neighbor coupling is K, what is the weight that governs these new configurations for the averaged, or majority, cell spins? Now, to do this problem exactly is subject to the same difficulty that I mentioned before. That is, if I somehow do an averaging over the spins in here to get the majority, then I will generate interactions that run over further neighbors, as we will see shortly. So to deal with that, they introduced a kind of uncontrolled break-up of the Hamiltonian that governs the system. That is, they wrote minus beta H, which is the sum over all neighboring site spins, as the sum of a part that we can solve exactly and a part that we will treat perturbatively as a correction. The minus beta H zero is a sum over all cells alpha. What you do is basically you just include the interactions among the site spins within each cell. So I have K times sigma alpha 1 sigma alpha 2, plus sigma alpha 2 sigma alpha 3, plus sigma alpha 3 sigma alpha 1. So basically, these are the interactions within a cell. What have I left out? I have left out the interactions that operate between cells-- so all of these things, which, of course, are of the same strength. But, for lack of better things to do, they said OK, we are going to sum over all, you can see now, neighboring cells. So the things that I left out-- let's say, interactions between this cell alpha and beta-- involve, let's say, the spin number 1 in this labeling of beta times spin number 2 of alpha, and spin number 3 of alpha. Now, of course, what I call 1, 2, or 3 will depend on the relative orientation of the neighboring cells. But the idea is the same: for each pair of neighboring cells, there will be two of these interactions. So again, there's no a priori reason to regard this as a perturbation. Both of them clearly carry the same strength for the bond, this parameter K. OK?
The justification is only solvability. So the partition function that I have to calculate, which is the sum over all spin configurations with the original weight, I can write as a sum over all spin configurations of e to the minus beta H 0 times e to the minus U. And the idea of perturbation is, of course, to treat the part that depends on U as a perturbation, expanding the exponential. Now, solvability relies on the fact that the weight e to the minus beta H 0 describes triplets that are only interacting among themselves. They don't see anybody else. So that's clearly very easily solvable, and we can calculate the partition function Z 0 that describes that. And then we can start to evaluate all of those other terms, once I have pulled out e to the minus beta H zero, summed over all configurations, in Z 0. The series that I will generate are averages of this interaction calculated with the zeroth-order Hamiltonian. So my log of Z-- OK. That's how I would calculate the problem perturbatively. Now I'm going to do something that is slightly different. So what I will do is, rather than do the sum that I have indicated above, I will do a slight variation. I will sum only over configurations-- maybe I should write it in this fashion-- I will sum only over configurations that under averaging give me a particular configuration of cell spins. So, basically, let's say I pick a configuration in which this cell spin is plus, this cell spin is minus, whatever-- some configuration of cell spins. Now, for this cell spin being plus, there are many configurations-- well, not many, but there are four configurations of the site spins that would correspond to this being plus. There are four configurations that would correspond to this. So basically, I specify what configuration of cell spins I want, and I do this sum. So the answer is some kind of a weight that depends on my choice of the cell spins sigma alpha prime. OK. So then I have the same thing over here.
And then, in principle, all of these quantities will become a function of the choice of my configuration. OK. So this is a weight for cell configurations once I average out over all site configurations that are compatible with that. So if I take the log of this, I can think of that as an effective Hamiltonian that operates on these variables. And this is what we have usually been indicating by the primed interactions. And so, if I take the log of that expression, I will get log of Z 0 that is compatible with the choice of cell spins. And then the log of this series-- we've seen this many times-- starts with minus the average of U, as a function of the specified interactions. And then I have the variance, again compatible with the interactions. OK? So now it comes to basically solving this problem. And I pick some particular cell. And I look at what configurations I could have that are compatible with a particular sign of the cell spin, as far as the site spins are concerned. And I will also indicate what the weight is that I have to put for that cell coming from minus beta H 0. One thing that I can certainly do is to have the cell spin plus. And I indicated that I can get that either by having all three spins be plus, or just the majority, which means that one of the three can become minus. So there are these configurations that are consistent, all of them, with the sigma alpha prime, the majority, being plus. And there are four configurations that correspond to minus, which we shall obtain by essentially flipping everything. And the weights are easy to figure out. Basically I have a triplet of spins that are coupled by interaction K. In this case, all three are positive. So I have three e to the K factors, so I will get e to the 3K. Whereas if one of them becomes minus and two remain plus, you can see that there are two unhappy, misaligned bonds. So I will get minus K, minus K, plus K: e to the minus K. It doesn't matter which one of the three it is.
If all three are minuses, then again, the sites are aligned. So I will get e to the 3K. If two are minus and one is plus, two bonds are unhappy, one is happy. I will get e to the minus K, e to the minus K, e to the plus K: e to the minus K overall. So once I have specified what my cell spin is, the contribution to the partition function is obtained by summing over contributions of things that are compatible with that. So what is it if I specify that my cell spin is plus? The contribution to the partition function is e to the 3K plus 3 e to the minus K. It's actually exactly the same thing if I had specified that it is minus. So we see that this factor, irrespective of whether the choice here is that the cell spin is plus or minus, is log of e to the 3K plus 3 e to the minus K, per any one of the cells. And the number of cells I have is 1/3 of the number of sites; with the number of sites indicated by N, this would be N over 3. OK. Now, let's see what this U average is. So U-- I made one sign error. I put both of them as minuses, which means that in the notation I had-- no. That's fine. Minus U 0-- so I put the minus sign here-- is plus K, sum over all pairs of cells that are neighboring each other. For example, like the pair alpha beta I have indicated, but for any other pair of neighboring cells, I have to write an expression such as this. So I have the K. I have sigma beta 1 sigma alpha 2 plus sigma beta 1 sigma alpha 3. So this is the expression that I have for U. I have to take the average of this quantity, basically the average of the sum. I will have two of these averages. Now in my zeroth-order weight, there is no coupling between this cell and any other cell. So what the spin on each cell is on average cares nothing about what the spin is on any other cell, which means that these averages are independent of each other. I can write it in this fashion. So all I need to do is to calculate the average of one of these columns, given that I have specified what the cell spin is.
So let's pick, let's say, sigma alpha 1 averaged in this zeroth-order weight. Now I can see immediately that I will have two possibilities, the top four or the bottom four. The top four correspond to the sigma cell being plus. The bottom four correspond to the sigma cell being minus. So for the top four, essentially I have to look at the average on this column. It is either plus, and then I get a weight e to the 3K, or it is minus with weight e to the minus K, against two pluses each with weight e to the minus K. So once I add two e to the minus K and subtract one e to the minus K, I really get e to the 3K plus e to the minus K. Now, of course, I have to normalize by the weight that I have per cell. And that is these factors summed, e to the 3K plus 3e to the minus K. Whereas if I had specified that the cell spin is minus, and I wanted to calculate the average here, I would be dealing with these numbers. You can see that I will have a minus e to the 3K. I will have one plus and two minuses at e to the minus K, so I will get minus e to the minus K, all divided by e to the 3K plus 3e to the minus K, which is the normalizing weight. So it's just minus the other one. And I can put these two together and write it as e to the 3K plus e to the minus K, divided by e to the 3K plus 3e to the minus K, times sigma alpha prime. So the average of any one of these three site spins is simply proportional to what you said was the cell spin. The constant of proportionality depends on K according to this. So now if I substitute this over here, what do I find? I will find that for minus U at the lowest level, each one of these factors will give me the same thing. So this K becomes 2K. I have a sum over alphas and betas that are neighboring. Each one of these sigmas I will replace by the corresponding average here, at the cost of multiplying by one of these factors. And there are two such factors. So I basically get this. So at this order in the series that I have written, if I forget about all of the higher terms, what has happened?
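This bookkeeping is easy to check by brute force. The following Python sketch is purely illustrative (the helper names and the coupling value K = 0.5 are my own choices, not from the lecture): it enumerates the eight configurations of a three-spin cell, sorts them by the majority-rule cell spin, and verifies both the per-cell weight e^(3K) + 3e^(-K) and the proportionality of the site-spin average to the cell spin.

```python
import itertools
import math

def cell_stats(K):
    """For each majority-rule cell spin, enumerate the compatible
    configurations of a 3-spin cell and return (weight sum, <sigma_1>)."""
    out = {}
    for target in (+1, -1):
        Z = 0.0
        s1_avg = 0.0
        for s in itertools.product((+1, -1), repeat=3):
            # majority rule: keep configurations whose majority sign is `target`
            if (sum(s) > 0) != (target > 0):
                continue
            # intracell weight from the three bonds (1,2), (1,3), (2,3)
            w = math.exp(K * (s[0]*s[1] + s[0]*s[2] + s[1]*s[2]))
            Z += w
            s1_avg += s[0] * w
        out[target] = (Z, s1_avg / Z)
    return out

K = 0.5  # arbitrary illustrative coupling
stats = cell_stats(K)
Zplus, avg = stats[+1]
# closed forms quoted in the lecture
assert abs(Zplus - (math.exp(3*K) + 3*math.exp(-K))) < 1e-12
assert abs(avg - (math.exp(3*K) + math.exp(-K)) /
                 (math.exp(3*K) + 3*math.exp(-K))) < 1e-12
# the average is odd under flipping the cell spin
assert abs(stats[-1][1] + avg) < 1e-12
```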
I see that the weight that governs the cell spins is again something that only couples nearest neighbor cells, with a new interaction that I can call K prime. And this new interaction, K prime, is simply 2K times e to the 3K plus e to the minus K, over e to the 3K plus 3e to the minus K, all squared. So presumably, again, if I think of the axis of possible values of K running all the way from no coupling at 0 to very strong coupling at infinity, this tells me under rescaling where the parameters go. If I start here, do I go back or forth? So let's follow the path that we followed in one dimension. We expect something to correspond to essentially no coupling at all. So we look at the limit where K goes to 0. Then you can see that what is happening here is that K prime is 2K, and then, when K goes to 0, all of these exponential factors become 1. So the numerator is 2, the denominator is 4, and the whole thing is squared. So basically in that limit, the interaction gets halved. So if I have a very [INAUDIBLE] coupling of 1/8, then it becomes 1/16 and then it becomes 1/32. I get pulled towards this. So presumably anything that is here will at long distance look disordered, just like one dimension. But now let's look at the other limit. What happens when K is very large, K goes to infinity? Then K prime is-- well, there's the 2K out front, but that's it. Because e to the 3K is going to dominate over e to the minus K when K is large, and this ratio goes to 1. So we see that if I start with a K of 1,000, then I go to 2,000, to 4,000, and basically I get pulled towards a behavior of infinity. So this is different from one dimension. In one dimension, you were always going to 0. Now we can see that in this two-dimensional model, weak coupling goes to no coupling. Strong enough coupling goes to everybody's falling in line and doing the same thing at large scale. So we can very well guess that there should be some point in between that separates these two types of flows.
And that is going to be the point where I would have KC, or let's call it-- I guess I call it K star in the notes. So let's call it K star. So K star is 2K star times e to the 3K star plus e to the minus K star, over e to the 3K star plus 3e to the minus K star, squared. So we can drop out the K star. You can see that what you have to solve is e to the 3K star plus e to the minus K star, divided by e to the 3K star plus 3e to the minus K star. This ratio is 1 over square root of 2. I can multiply everything by e to the plus K star, so that this becomes e to the 4K star plus 1 over e to the 4K star plus 3. And I have an algebraic equation to solve for e to the 4K star. So I will get root 2 e to the 4K star plus root 2 is e to the 4K star plus 1-- whoops, this was a 3. And I get the value of K star, which is 1/4 log of 3 minus root 2, divided by root 2 minus 1. You put it in the calculator, and it becomes something that is of the order of 0.3356. Yes? AUDIENCE: So, first thing, do we want to name what is the length factor by which we change the characteristic length? PROFESSOR: Absolutely. Yes. So the next-- yeah. AUDIENCE: But we never kind of bothered to do it so far. PROFESSOR: We will need to do that immediately. So just hold on a second. The next thing, I need this b factor. But it's obvious: I have reduced the number of degrees of freedom by a factor of 3, so the length scale must have, in two dimensions, increased by square root of 3. And you can also do the algebra analogous to this to convince yourself that the distance, let's say, from the center of this triangle to the center of that triangle is exactly [INAUDIBLE]. AUDIENCE: Also, when you're writing the cumulant expansion-- PROFESSOR: Yes. AUDIENCE: In all of our previous occasions when we did perturbations, the convergence of the series was kind of reassured because every perturbation was proportional to some scalar number that we claimed to be small, and thus the series would hopefully converge. PROFESSOR: Right.
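The recursion and its fixed point can be checked numerically. This Python sketch (function names and the starting couplings are my own illustrative choices) implements the first-order recursion, verifies the closed-form fixed point e^(4K*) = (3 - sqrt 2)/(sqrt 2 - 1), and shows the two flows: weak coupling renormalizes to zero, strong coupling grows.

```python
import math

def K_prime(K):
    """First-order Niemeijer-van Leeuwen recursion on the triangular lattice."""
    r = (math.exp(3*K) + math.exp(-K)) / (math.exp(3*K) + 3*math.exp(-K))
    return 2.0 * K * r * r

# closed-form fixed point: e^{4 K*} = (3 - sqrt(2)) / (sqrt(2) - 1)
K_star = 0.25 * math.log((3 - math.sqrt(2)) / (math.sqrt(2) - 1))
assert abs(K_prime(K_star) - K_star) < 1e-12     # indeed a fixed point

# weak coupling flows to zero
K = 0.1
for _ in range(20):
    K = K_prime(K)
assert K < 1e-3

# strong coupling grows under iteration (stop before exp overflows)
K = 1.0
for _ in range(8):
    K = K_prime(K)
    if K > 100:
        break
assert K > 100
```

Printing `K_star` gives roughly 0.3356, the value quoted in the lecture.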
AUDIENCE: But in this case, how can you be sure that for the modified interaction and renormalized version, you don't need [INAUDIBLE]? PROFESSOR: Well, let me first slightly correct what you said-- though I think you meant it correctly-- which is that previously we had parameters that we were ensuring were small. That did not guarantee the convergence of the series on the lattice. In this case, we don't even have a parameter that we can make small. So the only thing that we can do, and I will briefly mention that, is to basically see what happens if we include more and more terms in that series, and we compare results and see whether there is some convergence or not. Yes? AUDIENCE: Can you explain again how we got the K prime equation? PROFESSOR: OK. So I said that I have some configuration of the cell spins. Let's say the configuration is plus plus minus plus. Whatever, some configuration. Now there are many configurations of site spins that correspond to that. So the weight of this configuration is obtained by summing over the weights of all configurations of site spins that are compatible with that. And that was a series that we had over here. And K prime, or the interaction, typically we put in the exponent, so I have to take a log of this to see what the interactions are. The log has this series that starts with the average of this interaction. OK? So this was the formula for U. It's over here. And then here, it says I have to take an average of it. An average, given that I have specified what the cell spins are. And I see that that average is really a product of averages of site spins. And I was able to evaluate the average of a site spin, and I found that up to some proportionality constant, it was the cell spin. So if the cell spin is specified to be plus, the average of each one of the site spins tends to be plus. If the cell spin is specified to be minus, since I'm looking at this subset of configurations, the average is likely to be minus.
And that proportionality factor is here. I put that proportionality factor here, and I see that this average is a product of neighboring cell spins, which are weighted by this factor, which is like the original weights that you write, except with a new K. Yes? AUDIENCE: So after renormalization, we get some new kind of lattice, which is not random. It's completely new. Because what you did here is you take out certain cells-- PROFESSOR: Yeah. AUDIENCE: And call them [INAUDIBLE]. PROFESSOR: Right. But what is this new lattice? This new lattice is a triangular lattice that is rotated with respect to the original one. So it's exactly the same lattice as before. It's not a random lattice. AUDIENCE: Yes. But on the initial lattice, you specified that these cells would contribute to-- PROFESSOR: Yes. I separated K and K prime. Yes. K and U, yes. AUDIENCE: OK. So if you want to do a renormalization group again, we'll need to-- PROFESSOR: Yeah. Do this. AUDIENCE: Again [INAUDIBLE]. PROFESSOR: Exactly. Yeah. But we do it once, and we have the recursion relation. And then we stop. AUDIENCE: Yeah. PROFESSOR: OK. Yes? AUDIENCE: Is this possible for other odd number lattices? Will you still preserve the parameter? PROFESSOR: Yes. It's even possible for square lattices with some modification, and that's what you'll have in one of the problems. OK? Fine. But the point is-- OK. So I stopped here. So K star was 0.34, which is the coupling that separates places where you go to uncorrelated spins from places where you go to everything ordered together. It turns out that the triangular lattice is something that one can solve exactly. It's one of the few things. And you'll have the pleasure of solving that also in a problem set. And you will show that KC, the correct value of the coupling, is something like 0.27. So that gives you an idea of how good or bad this approximation is. But the point in any case is that the location of the coupling is not that important.
We have discussed that it is non-universal. The thing that maybe we should be more interested in is what happens if I'm in the vicinity of this: how rapidly do I move away? And actually I have to show that we are moving away, but because of the topology, it's more or less obvious that it should be that way. So what I need to do is evaluate this derivative at K star. OK. Now you can see that K prime is a function of K. So what you need to do is to take derivatives. So there's some algebra involved here. And then, once you have taken the derivative, you have to put in the value of K star. And here, some calculator is necessary. And at the end of the day, the number that you get, I believe, is something like 1.62. Yes. And so that says, since it's larger than 1, that you will be pushed away. But these things have been important to us as indicators of these exponents. In particular, I'm on the subspace that has the symmetry, so I should be calculating yt here. As was pointed out, important to this step is knowing what the value of b is, which we can either look at by the ratio of the lattice constants or by the fact that I have reduced the number of spins by a factor of 3. This has to be root 3 to the power yt. So my yt is log of 1.62 divided by log of root 3. So again you go and look at your calculator, and the answer comes out to be 0.88. Now the exact value of yt for all two-dimensional Ising models is 1. So again, this is an indicator of how good or bad you have done at this order in perturbation theory. OK. Now, answering the question that you had before, suppose I were to go to order of U squared? Now, at order of U squared, I have to take this kind of interaction, which is bilinear-- let's say a pair of spins here-- and multiply two of them, so I will get a pair of spins here and a pair of spins there. As long as they are at distinct locations, when I subtract the average squared, they will cancel out. So the only place where I will get something non-trivial is if I pick one here and one here.
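Rather than pushing the algebra through by hand, the derivative at the fixed point can be taken numerically. The sketch below (again illustrative Python; the finite-difference step h is my own choice) reproduces the numbers quoted here: slope about 1.62 and yt about 0.88 with b equal to root 3.

```python
import math

def K_prime(K):
    """First-order Niemeijer-van Leeuwen recursion (triangular lattice)."""
    r = (math.exp(3*K) + math.exp(-K)) / (math.exp(3*K) + 3*math.exp(-K))
    return 2.0 * K * r * r

K_star = 0.25 * math.log((3 - math.sqrt(2)) / (math.sqrt(2) - 1))

# central-difference derivative dK'/dK at the fixed point
h = 1e-6
slope = (K_prime(K_star + h) - K_prime(K_star - h)) / (2 * h)

b = math.sqrt(3.0)   # 3 sites per cell in d = 2, so lengths rescale by sqrt(3)
y_t = math.log(slope) / math.log(b)

assert abs(slope - 1.62) < 0.01
assert abs(y_t - 0.88) < 0.01   # exact 2D Ising value is 1
```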
And by that kind of reasoning, you can convince yourself that what happens at next order is that in addition to interactions between neighbors, you will generate interactions between things that are two apart and things that are three apart-- so basically next nearest neighbors and next next nearest neighbors. So even if you start with a form such as this, you will generate next nearest neighbor and next next nearest neighbor interactions. Let's call them K, L, M. So to be consistent, you have to go back to the original model, put in the three interactions, and construct recursion relations from the three parameters, K, L, M, to the new three parameters. It's more or less following this procedure, but several pages of algebra, so I won't do it. Niemeijer and van Leeuwen did it, and they calculated the yt at next order by finding the fixed point in this three-dimensional parameter space. It has one relevant direction, and that one relevant direction gave them an eigenvalue that was extremely close to 1. So I don't believe anybody has taken this to next order. You've gotten good enough; might as well stop. I think it's not going to improve and get better, because this is an uncontrolled approximation. So it's likely to be one of those cases where you asymptotically approach the good result and then move away. Now once I have yt, I can naturally calculate exponents. First of all nu, which is 1 over yt. 1 over 0.88 is something like 1.13. And the exact result would be the inverse of 1, which is 1. And I can calculate alpha, which is 2 minus d nu-- here, 2 minus 2 nu. With that value of nu, I will get minus 0.26. Again, the correct result would be 0, corresponding to a logarithmic divergence. So this lowest order gets, let's say, the exponents to 10%, 20%. You would say, OK, what about other exponents, such as beta, gamma, and so forth? Clearly, to get those exponents, I also need to have yh. OK.
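The arithmetic for nu and alpha is a one-liner each; this small check (using the yt value quoted in the lecture as an assumed input) confirms the quoted numbers.

```python
# exponents from the thermal eigenvalue; y_t = 0.88 is the lecture's value
y_t = 0.88
nu = 1 / y_t            # nu = 1 / y_t
alpha = 2 - 2 * nu      # alpha = 2 - d * nu, with d = 2

assert abs(nu - 1.13) < 0.01      # exact 2D Ising: nu = 1
assert abs(alpha - (-0.26)) < 0.02  # exact: alpha = 0 (log divergence)
```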
So to get yh, I will add, as an additional perturbation, a term which is h sum over i sigma i, which is, of course, the same thing as h sum over alpha of sigma alpha 1 plus sigma alpha 2 plus sigma alpha 3. And if I regard this as a perturbation, you can see that in the perturbative scheme, this would go under the transformation to the average of this quantity. And the average of this quantity will give me, for each cell, three times the average of a site spin, which is related to the cell spin through this factor that we calculated: e to the 3K plus e to the minus K, over e to the 3K plus 3e to the minus K, times sigma alpha prime. So we can see that we generate h prime, which is 3h times e to the 3K plus e to the minus K over e to the 3K plus 3e to the minus K. And I can evaluate b to the yh as dh prime by dh evaluated at the fixed point. So I will get essentially 3 times this factor evaluated at the fixed point. But we can see that at the fixed point, this factor is 1 over root 2. So the answer is 3 over root 2. And my yh would be the log of 3 over root 2 divided by log of b, which we said is square root of 3. Put it in the calculator; you get a number that is of the order of 1.4. And the exact yh is 1.875. So again, once you have yh, you can go and calculate, through the exponent scaling relations, all the other exponents that you have, like beta. So not bad, considering how much difficulty you would have if you wanted to go through the epsilon expansion. And in any case, we are at two dimensions, which is far away from four. And getting results at 2 is worse than trying to get results at three dimensions. Now we want to do a procedure that is an approximation even simpler than this. So that was the Niemeijer-van Leeuwen procedure. The next one is due to Kadanoff and Migdal, and it's called bond moving. And again, we have to do an approximation. You can't do things exactly.
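The magnetic eigenvalue follows in one line from b^(yh) = 3/sqrt(2), as this small check confirms (the tolerance is my own choice).

```python
import math

b = math.sqrt(3.0)
# at the fixed point the site-spin factor equals 1/sqrt(2), so b^{y_h} = 3/sqrt(2)
y_h = math.log(3 / math.sqrt(2)) / math.log(b)

assert abs(y_h - 1.37) < 0.01   # "of the order of 1.4"
y_h_exact = 15 / 8              # exact 2D Ising value, 1.875
assert y_h < y_h_exact
```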
So let's demonstrate that with a square lattice, which is much easier to draw than the triangular lattice. And let's kind of follow the procedure that we had for the one-dimensional case. Let's say we want to do rescaling by a factor of 2. And I want to keep this spin, this spin, this spin, this spin, this spin, and get rid of all of the other spins that I have-- the ones I've circled around-- much as I did for the one-dimensional case. And the problem is that if I'm summing over this spin over here, there are paths that connect that spin to other spins. So by necessity, once I sum over all of these spins, I will generate all kinds of interactions. So the problem is all of these paths that connect things. So maybe-- and this is called bond moving-- maybe I can remove all of these bonds that are going to cause problems. So if I do that, then the only connection between this spin and this spin comes from that side, and between this spin and that spin comes from that side. And if the original interaction was K and I sum over this, I will get K prime, which is what I have over there, 1/2 log cosh 2K, because the only thing that I did was to connect this site to two neighbors, and then effectively, it's the same thing as I was doing for one dimension. So clearly this is a very bad approximation, because I have reproduced the same result as one dimension for the two-dimensional case. And the reason is that I weakened the lattice so drastically-- I removed most of the bonds-- that there isn't that much weight for the lattice to order. What Kadanoff and Migdal suggested was, OK, let's not remove these bonds. Just move them to some place where they don't cause any harm. So I take this bond and I strengthen this bond. I take this bond, strengthen this one. This one goes to this one. Essentially what happens is, you can see, that each one of the remaining bonds has been strengthened by a factor of 2. So I have this because of the strengthening.
So this is about as simple as you can get to construct a potential recursion relation for this square lattice. So this is the way that the parameter K changes, going from 0 to infinity. And we can do the same thing that we did over there. So we can check that for K going to zero, if I look at K prime, it is approximately 1/2 log of hyperbolic cosine of something that is close to 0. So that becomes 1 plus the square of 4K over 2, that is, 1 plus 8K squared; and taking half the log of that, it becomes 4K squared. The factor of 4 does not really matter. If K is very small, like 1 over 100, K squared would be 10 to the minus 4. So basically, you certainly have the expected behavior of becoming disordered if you have a [INAUDIBLE] interaction. If you have a strong enough coupling, however, are we different from what we did for the one-dimensional case? Well, the answer is that in this case, K prime is 1/2 log hyperbolic cosine of 4K, which starts as e to the 4K plus e to the minus 4K, divided by 2. e to the minus 4K I can ignore. So you can see that in this case, I will have 2K. I can even ignore the minus log 2 over 2, which previously was so important, because previously we had a coefficient of 1 here, and now it became 2, which means that if I start with 10,000, it will become 20,000, 40,000, and now you're going in this direction. So again, by necessity almost, I must have a fixed point at some value in between. So I essentially have to solve for K star as K star equals 1/2 log cosh of 4K star. You can recast this as some algebraic equation in terms of e to the 4K star and manipulate it. And after you do your algebra, you will eventually come up with a value of K star, which I believe is 0.3. You can ask how good that is. Well, the square lattice we will solve in class; I said the triangular lattice I will leave for you to solve. KC for the square lattice is something like 0.44. So you are off by about 25%. Of course, again, the quantity that you're interested in is b to the yt. b is 2 in this case.
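A simple bisection reproduces these numbers. The Python sketch below (helper names and brackets are my own choices) finds the Migdal-Kadanoff fixed point of K' = (1/2) log cosh 4K, computes yt from the derivative 2 tanh 4K at the fixed point, and compares with Onsager's exact square-lattice coupling.

```python
import math

def K_prime(K):
    """Migdal-Kadanoff recursion for the square lattice with b = 2:
    move bonds (K -> 2K), then decimate as in one dimension."""
    return 0.5 * math.log(math.cosh(4 * K))

# bisect for the nontrivial fixed point of K' = K
lo, hi = 0.1, 1.0          # K'(lo) < lo and K'(hi) > hi bracket the root
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if K_prime(mid) < mid:
        lo = mid
    else:
        hi = mid
K_star = 0.5 * (lo + hi)

# thermal eigenvalue: b^{y_t} = dK'/dK at K*, and dK'/dK = 2 tanh(4K)
slope = 2 * math.tanh(4 * K_star)
y_t = math.log(slope) / math.log(2.0)

K_exact = 0.5 * math.log(1 + math.sqrt(2))   # Onsager: about 0.4407

assert abs(K_star - 0.305) < 0.01    # "I believe is 0.3"
assert abs(y_t - 0.75) < 0.02        # exact value is 1
assert K_star < K_exact
```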
The length scale has changed by a factor of 2. b to the yt is dK prime by dK evaluated at K star-- again, a combination of doing the algebra of derivatives, evaluating at K star, and then ultimately taking the log to convert it to a yt. And you come up with a value of yt that is around 0.75. And, as I said, the exact yt, which doesn't depend on whether you are dealing with a square lattice or a triangular lattice-- it's only a function of symmetry and dimensionality-- is 1. So you can see that gradually we are simplifying the complexity. Now we could, within this approximation, solve everything within one panel. Now this kind of approximation, again, is not particularly good. But it's a quick and dirty way of getting results. And the advantage of it is that you can do this not only in two dimensions, but in higher dimensions as well. So let's say that you had a cubic lattice and you were doing rescaling by a factor of 2, which means that you want to keep the spins at the corners of cells of size 2 by 2 by 2, and get rid of the interactions among all of the spins that you are not interested in. And the way that you do that is precisely as before. You move these interactions and strengthen the bonds that you have over here. Now whereas the enhancement factor for the square lattice was 2, it turns out that the enhancement factor in three dimensions would be 4. You essentially have to take one from here, one from there, one from there. So 1 plus 3 becomes 4. And you can convince yourself that if I had done this on a d-dimensional hypercubic lattice, what I would have gotten is again the one-dimensional recursion relation, except for this enhancement factor, which is 2 to the power of d minus 1 in d dimensions. Actually, I could even do that for rescaling not by a factor of 2, but by a factor of b.
And hopefully, you can convince yourself that the enhancement factor will become b to the d minus 1, multiplying the coupling. And essentially that factor is a cross section that you have to move: the cross-sectional area that you encounter grows as the size to the power of d minus 1. And a kind of obvious consequence of that is that if I go to the limit of K going to infinity, you can see that K prime would go like b to the d minus 1 times K. Essentially, if you were to have some kind of a system, and you make pluses on one side, minuses on the other side, to break it, then the number of bonds that you would have to break would grow like the cross-sectional area. So that's where that comes from. It turns out that, again, this approach is exact, as we've seen, for one dimension. As we go to higher dimensions, it becomes worse and worse. So I showed you how bad it was in two dimensions. If I calculate the fixed point and the exponents in three dimensions and compare to our best numerical results, they are off by, I don't know, 40%, 50%, whereas it was 25% over there. So it gradually gets worse and worse. And so one approach that people have tried, which, again, doesn't seem to be very rigorous, is to convert this into an expansion in dimensionality around 1. So it's roughly correct close to one dimension. But as opposed to the previous epsilon expansion, there doesn't seem to be a controlled way to do this. I showed you how to do this for Ising models. Actually, you can do this for any spin model. So let's imagine that we have some kind of a model in one dimension. At each site, we have some variable s i-- I will not specify what it is or how many values it takes. But it interacts only with its neighboring sites. And so presumably, there is some interaction that depends on the two sites. There may be multiple couplings implicit in this if I try to write it in terms of the dot product of spins or things like that.
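The d-dimensional bond-moving recursion is easy to iterate. This illustrative sketch (the starting coupling and brackets are my own choices) uses K' = (1/2) log cosh(2 b^(d-1) K) with b = 2: in d = 1 the coupling always decays, while d = 2 and d = 3 each have a finite unstable fixed point.

```python
import math

def K_prime(K, d, b=2):
    """Migdal-Kadanoff recursion in d dimensions: bond moving multiplies
    the coupling by b^(d-1); decimation then acts as in one dimension."""
    return 0.5 * math.log(math.cosh(2 * b**(d - 1) * K))

def fixed_point(d):
    """Bisect for the nontrivial fixed point of K' = K in d dimensions."""
    lo, hi = 1e-3, 2.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if K_prime(mid, d) < mid:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# d = 1: the coupling always shrinks; no nontrivial fixed point
K = 0.5
for _ in range(50):
    K = K_prime(K, d=1)
assert K < 1e-6

# d = 2 and d = 3 have finite fixed points
assert abs(fixed_point(2) - 0.3047) < 1e-3
assert fixed_point(3) < fixed_point(2)   # ordering is easier in higher d
```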
So if I were to calculate the partition function in one dimension-- I already mentioned this last time-- I have to do a sum over what each spin is of e to the K of si, si plus 1, with a product over subsequent sites. And if I regard this as a matrix, which is generally called a transfer matrix, you can see that this multiplication involving the sum over all of the spins is equivalent to matrix multiplication. And, in particular, if I have periodic boundary conditions in which the last spin couples to the first spin, I would have trace of T to the power of N, where T is essentially this, e to the K of s, s prime. Now, clearly I can write this as trace of T squared to the power of N over 2. Right. And this I can regard as the partition function of a system that has half as many spins. So I have performed the renormalization group like what we were doing in one dimension. I have T prime is T squared. And in general, you can see that I can write this as trace of T to the b, to the power N over b. So the result of renormalization by a factor of b in one dimension is simply to take the matrix that you have and raise it to the b-th power. OK. And so I could parametrize my T by a set of interactions K, like we do for the Ising model, raise it to the power of b, and I would generate a matrix that I could then parametrize by K prime, and I would have the relationship between K prime and K in one dimension. So this is d equals 1. And the way to generalize this Migdal-Kadanoff RG to a very general system is simply to enhance the couplings. So basically, what I would write down is that T prime, which, after rescaling by a factor of b, is a function of a set of parameters that I will call K prime, is obtained by taking the matrix that I have for one set of couplings and raising it to the power of b. This is the exact one-dimensional result. And if I want to construct this approximation in d dimensions, I will just do this with the couplings enhanced by the factor b to the d minus 1.
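In one dimension, squaring the transfer matrix really does reproduce the decimation recursion. This small Python check (the coupling K = 0.9 is an arbitrary illustrative value) squares the 2 x 2 Ising transfer matrix and reads off K' from the ratio of its diagonal to off-diagonal elements, recovering K' = (1/2) log cosh 2K.

```python
import math

K = 0.9
T = [[math.exp(K), math.exp(-K)],
     [math.exp(-K), math.exp(K)]]

# square the transfer matrix: decimation by b = 2 in one dimension
T2 = [[sum(T[i][k] * T[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]

# T2 is again of Ising form g * exp(K' s s'); read off K' from the ratio
K_new = 0.5 * math.log(T2[0][0] / T2[0][1])

assert abs(K_new - 0.5 * math.log(math.cosh(2 * K))) < 1e-12
```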
So for a while, before people had sufficiently powerful computers to simulate things easily, this was a good way to estimate locations of phase boundaries, critical exponents, et cetera, for essentially complicated problems that could have a whole set of parameters in here. Nowadays, as I said, you probably can do things much more easily by computer simulations. So I guess I still have another 10 minutes. I probably don't want to start on the topic of the next lecture. But maybe what I'll do is expand a little bit on something that I mentioned very rapidly last lecture, which is that in this one-dimensional model where I solve the problem by transfer matrix, what I have is that the partition function is the trace of some matrix raised to the N-th power. And if I diagonalize the matrix, what I will get is the sum over all eigenvalues raised to the N-th power. Now note that we expect phase transitions to occur not for any finite system, but only in the limit where there are many degrees of freedom. And, actually, if I have a sum such as this, in the limit of a very large number of degrees of freedom, this becomes lambda max to the power of N. Now in order to get any one of this series of eigenvalues, what should I do? I should take the matrix whose elements are e to the strength of the interactions-- what did I write? e to the K of s and s prime. So there is a matrix. All of its elements are Boltzmann weights. They are all positive. And find the eigenvalues of this matrix. Now for the case of the Ising model without a magnetic field, the matrix is 2 x 2. It's e to the K, corresponding to the diagonal terms where the spins are parallel, and e to the minus K when the spins are antiparallel. And clearly you can see that the eigenvalues, corresponding to 1, 1 or 1, minus 1 as eigenvectors, are e to the K plus e to the minus K and e to the K minus e to the minus K-- that is, twice the hyperbolic cosine and twice the hyperbolic sine of K.
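These eigenvalues, and the dominance of the largest one, can be verified directly. In this illustrative sketch (K = 0.7 and N = 200 are arbitrary choices), the exact log of the periodic-chain partition function is compared with the largest-eigenvalue approximation.

```python
import math

K = 0.7
# zero-field 1D Ising transfer matrix
T = [[math.exp(K), math.exp(-K)],
     [math.exp(-K), math.exp(K)]]

# eigenvectors (1, 1) and (1, -1) give the two eigenvalues
lam_plus = T[0][0] + T[0][1]     # = 2 cosh K
lam_minus = T[0][0] - T[0][1]    # = 2 sinh K
assert abs(lam_plus - 2 * math.cosh(K)) < 1e-12
assert abs(lam_minus - 2 * math.sinh(K)) < 1e-12

# Z = tr T^N = lam_plus^N + lam_minus^N for the periodic chain
N = 200
logZ = N * math.log(lam_plus) + math.log(1 + (lam_minus / lam_plus)**N)

# for large N, Z is dominated by the largest eigenvalue
assert abs(logZ / N - math.log(lam_plus)) < 1e-6
```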
You can see that to get this, all I had to do was to diagonalize a matrix that corresponded to one bond, if you like. And just as in here, there is no reason to expect that these eigenvalues, which depend on this set of parameters, should be non-analytic functions. There is no reason for non-analyticity as long as you are dealing with a single bond. We expect non-analyticities only in the limit of large N. So if each one of these is an analytic function of K, the only way, scanning the K axis, that I could encounter a non-analyticity is if two of these eigenvalues cross, because we have seen that as a potential mechanism for phase transitions. We discussed this in 8.333: that if we have a sum of contributions, each one of them exponentially large in N, and two of these contributions cross, then your partition function will jump from one hill to another hill. And you will have a discontinuity, let's say, in derivatives or whatever. So the potential mechanism that I could have is that, as a function of changing one of my parameters or a bunch of these parameters, the ordering of these eigenvalues changes-- let's say lambda 0 is the largest one, lambda 1 is the next one, lambda 2, et cetera. So each one of them is going its own way. If the largest one suddenly gets crossed by something else, then you will basically abandon one eigenvalue for another, and you will have a mechanism for a phase transition. So what I told you was that there is a theorem that for a matrix where all of the elements are positive, this will never happen. The largest eigenvalue will remain non-degenerate. And there is some analog of this that you've probably seen in quantum mechanics: that if you have a potential, the ground state is non-degenerate. The ground state is always a function that is positive everywhere. And the next excitation would have to have a node, go from plus to minus. And somehow you cannot have the eigenvalues cross.
So in a similar sense, it turns out the eigenvector corresponding to the largest eigenvalue of these matrices will have all of its elements positive, and the largest eigenvalue cannot become degenerate. And so you are guaranteed that this will not happen. Now the second part of this story that I briefly mentioned was that you can repeat this for two dimensions, for three dimensions, higher-dimensional things. So one thing that you could do is, rather than solving the Ising model on a line, you can solve it on a ladder, or a ladder that has two rungs. So solving the Ising model on this structure is not very difficult, because you can say that there are eight possible values that this can take. And so I can construct a matrix that is 8 x 8 that tells me how I go from the choice of eight possibilities here to the eight possibilities there. And I will have an 8 x 8 matrix that has these properties and will satisfy this. It will be true if I go to a strip of 4; it will be a 16 x 16 matrix. No problem. I can keep going. And it would say, well, the two-dimensional-- and also three-dimensional or higher-dimensional-- models should not have a phase transition. Well, it turns out that all of this relies on having a finite matrix. And what Onsager showed was that, indeed, for any finite strip, you would have a situation such as this-- actually, more accurately, a situation such as this, where two eigenvalues approach but never cross. And one can show that the gap between them will scale as something like 1 over this length. And so in the limit where you go to a large enough system, you have the possibility of a singularity emerging when two eigenvalues touch each other. So this scenario is very well known and studied in two dimensions. In higher dimensions, we actually don't really know what happens. OK. Any questions? AUDIENCE: So it appears that there are or there are not phase transitions [INAUDIBLE]?
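The non-degeneracy on a finite strip can be seen numerically. The following Python sketch is my own illustrative construction (a symmetrized transfer matrix for an open-boundary Ising strip, with the top two eigenvalues obtained by power iteration plus deflation); for any coupling, including ones above the bulk critical value, the leading eigenvalue stays strictly separated from the second, as the theorem for positive matrices guarantees.

```python
import math
import random

def strip_T(L, K):
    """Symmetrized transfer matrix for an Ising strip of width L:
    T[a][b] = exp(K * (rung bonds between columns a, b
                       + half the vertical bonds of each column))."""
    n = 1 << L
    spins = lambda c: [1 if (c >> i) & 1 else -1 for i in range(L)]
    intra = lambda s: sum(s[i] * s[i + 1] for i in range(L - 1))
    T = [[0.0] * n for _ in range(n)]
    for a in range(n):
        sa = spins(a)
        for b in range(n):
            sb = spins(b)
            E = sum(sa[i] * sb[i] for i in range(L))
            E += 0.5 * (intra(sa) + intra(sb))
            T[a][b] = math.exp(K * E)
    return T

def top_two(T, iters=3000):
    """Largest two eigenvalues of a symmetric matrix by power iteration,
    deflating the leading eigenvector for the second one."""
    n = len(T)
    matvec = lambda v: [sum(T[i][j] * v[j] for j in range(n)) for i in range(n)]
    def normalize(v):
        s = math.sqrt(sum(x * x for x in v))
        return [x / s for x in v]
    v = normalize([1.0] * n)
    for _ in range(iters):
        v = normalize(matvec(v))
    Tv = matvec(v)
    lam1 = sum(v[i] * Tv[i] for i in range(n))
    random.seed(0)
    w = normalize([random.random() - 0.5 for _ in range(n)])
    for _ in range(iters):
        w = matvec(w)
        d = sum(w[i] * v[i] for i in range(n))
        w = normalize([w[i] - d * v[i] for i in range(n)])  # project out top mode
    Tw = matvec(w)
    lam2 = sum(w[i] * Tw[i] for i in range(n))
    return lam1, lam2

for K in (0.2, 0.44, 0.8):        # below, near, and above the bulk K_c
    lam1, lam2 = top_two(strip_T(3, K))
    assert lam1 > abs(lam2)       # leading eigenvalue is never degenerate
```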
PROFESSOR: Well, we showed-- we were discussing phase transitions for the triangular lattice, for the square lattice. I even told you what the critical coupling is. AUDIENCE: But it seems to me that the conclusion of this part is that there aren't. PROFESSOR: As long as you have a finite strip, no. But if you have an infinite strip, you do. So what I've shown you here is the following. If I have an L x N system in which you keep L finite and send N to infinity, you won't see a singularity. But if I have an N x N system, and I send N to infinity, I will encounter a singularity in the limit of N going to infinity. Again, very roughly, one can also develop a physical picture of what's going on. So let's imagine that you have a system that is finite in one direction-- it can be two, can be three rows, whatever; some finite size-- but in the other direction, you basically can go as large as you like. Now presumably this two-dimensional model has a phase transition if it were infinite by infinite. And on approaching that phase transition, there would be a correlation length that would diverge with this exponent nu. So let's say I am sufficiently far away from the phase transition that the correlation length is something like this. So this patch of spins knows about each other. If I go closer to the transition, it will grow bigger and bigger. At some point, it will fit the size of the system, and then it cannot grow any further. So beyond that, what you will see is essentially that there is one patch here, one patch here, one patch here. And you are back to a one-dimensional system. So what happens is that your correlation length starts to grow as if you were in two dimensions or three dimensions. Once it hits the size of the system, then it has to saturate. It cannot grow any bigger. And then this block becomes independent of the next block.
So essentially, you would say that you effectively have a one-dimensional system, where the number of blocks is of the order of N over L. So what we are going to do starting from the next lecture is to develop again a more systematic approach, which is a series expansion about either low temperatures or, more usefully, about high temperatures. And then we will take that high temperature expansion and gradually go in the direction of solving these two-dimensional Ising models exactly. And so we will see where some of these results that I told you-- the exact value of Kc, the exact value of yT-- come from.
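Before moving on, the finite-strip argument can be checked directly. The sketch below is my own illustration, not part of the lecture: it builds the 8 x 8 transfer matrix for an Ising strip of width 3 and verifies the Perron-Frobenius facts quoted above-- all entries of the matrix are positive, the leading eigenvector has strictly positive components, and the top two eigenvalues are separated by a finite gap, so the free energy of any finite strip is analytic.

```python
import numpy as np
from itertools import product

def strip_transfer_matrix(L, K):
    """Symmetrized transfer matrix for an Ising model on a strip of
    width L (open boundary across the strip), with coupling K = beta*J."""
    configs = list(product([-1, 1], repeat=L))
    n = len(configs)
    T = np.zeros((n, n))
    for i, s in enumerate(configs):
        for j, t in enumerate(configs):
            # bonds within each column, shared half-and-half so T is symmetric
            e_col = 0.5 * sum(s[k] * s[k + 1] for k in range(L - 1)) \
                  + 0.5 * sum(t[k] * t[k + 1] for k in range(L - 1))
            # bonds connecting the two adjacent columns
            e_row = sum(s[k] * t[k] for k in range(L))
            T[i, j] = np.exp(K * (e_col + e_row))
    return T

T = strip_transfer_matrix(3, 0.4)            # the 8 x 8 case from the lecture
lam, vec = np.linalg.eigh(T)                 # eigenvalues in ascending order
gap = lam[-1] - lam[-2]                      # finite for any finite strip
v = vec[:, -1] * np.sign(vec[:, -1].sum())   # leading (Perron) eigenvector
```

Since every entry of T is strictly positive, the Perron-Frobenius theorem guarantees `gap > 0` and a componentwise-positive `v`; only in the limit of infinite strip width can the gap close and a singularity appear.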
MIT 8.334 Statistical Mechanics II, Spring 2014. Lecture 23: Continuous Spins at Low Temperatures, Part 4.

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, let's start. So we've been looking at the xy model in two dimensions. It's a collection of unit spins located on the sites of a lattice in two dimensions. Since they are unit vectors, each one of them is characterized by an angle theta i. And the partition function would be obtained by integrating over all of the angles. And the weight, we said, had the form e to the K cosine of theta i minus theta j. So there's a coupling that corresponds to the dot product of neighboring spins, which can be written as this cosine form. If we go to low temperatures where K is large, then this is roughly an integral over all configurations of the continuous field, theta of x, with a weight that, for the appropriate choice of the lattice spacing, is e to the minus K/2 times the integral of gradient of theta squared. Now if we look at this weight and ask what is happening as a function of changing temperature-- or the inverse of this parameter-- so that we are close to zero temperature, if we just work with this Gaussian, the conclusion would be that if we look at the correlation between two spins that are located at a distance r from each other, just from this Gaussian weight, we found that these correlations decay as something like 1/r-- or rather, r in units of the lattice spacing-- to the power of an exponent eta that was related to this K. So the conclusion was that if that is the appropriate weight, we have power-law decay of correlations. There is no long range order, but correlations decay very weakly.
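Written out, the Gaussian spin-wave result being referred to is the standard one; with lattice cutoff a, the angle fluctuations grow logarithmically and exponentiate to a power law:

```latex
\left\langle \vec{s}(\mathbf{r})\cdot\vec{s}(0)\right\rangle
 = \left\langle \cos\!\left(\theta(\mathbf{r})-\theta(0)\right)\right\rangle
 = e^{-\frac{1}{2}\left\langle\left(\theta(\mathbf{r})-\theta(0)\right)^{2}\right\rangle}
 \simeq \left(\frac{a}{r}\right)^{\eta},
 \qquad \eta=\frac{1}{2\pi K}.
```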
On the other hand, if we start with the full cosine and don't do this low temperature expansion-- just go and do the typical high temperature expansion-- from the high temperature expansion we conclude that these correlations decay exponentially, which are two totally different forms. And presumably, there should be some kind of a critical value of K or temperature that separates a low temperature form with power-law decay and a high temperature form with exponential decay of correlations. Now there was no sign of such a Kc when we tried to do a low temperature expansion like the non-linear sigma model in this particular case of the xy model, which corresponds to n equals 2. Again, we said that the reason for that is that the only higher order terms that I can write down in this theory close to low temperature-- gradient of theta to the fourth, sixth, et cetera-- are explicitly irrelevant. And so, unlike n equals 3, et cetera, they cannot cause any change. So as far as that theory was concerned, all of these couplings corresponded to fixed points. Then we said that there is a twist that is not taken into account when I make that transformation in the first line, as pointed out by Kosterlitz and Thouless. What you have are topological defects that are left out. And an example of such a defect would be a configuration of spins that are kind of radiating out from some particular point, such that when I complete a circuit going around, the value of the angle changes by 2 pi. For that to happen, the gradient of the angle at a distance r should fall off as 1 over r. So if we put in one of these defects and calculate what the partition function is for one defect, z of one defect, we would say, OK, what I have to do is to calculate the energy cost of this distortion. If I use that theory that I have over there, I have K/2. I have an integral over all space which, because of the symmetry here, I can write as 2 pi r dr. And then I have the gradient squared.
And as I said, the gradient for a topological defect such as this goes as 1/r. Its square goes like 1 over r squared. So this is an integral that is logarithmically divergent. It's an integral of 1/r. It's logarithmically divergent both at large distances-- say of the order of the size of the system-- and it has difficulty at short distances. But at short distances, we know that there is some lattice structure, and this approximation will break down when I get to some multiple of the lattice spacing-- so let's call that a. And the additional energy that comes from all of the interactions at scales smaller than this core size a I have to calculate separately, and I'm going to call it beta epsilon, which depends on the distance a that I choose for the core. This is just an energy term that goes into the Boltzmann factor. But this defect can be placed any place in a lattice that has size L, up to this factor of the core size a. So the number of places goes like L/a squared, the area that I'm looking at. And so we see that this expression goes like L/a-- I have the power 2 here-- while from this logarithmic interaction, I will get a factor of e to the minus pi K log of L/a, which I can combine with this. And I define the exponential of the core energy to be some parameter y. That clearly depends on the choice of what I call my core. So if I look at just that expression by itself, I would say that the chance of seeing one defect or no defects as L/a becomes large in the thermodynamic limit depends on whether the exponent 2 minus pi K is positive or negative. So we are led from this expression to conclude that something interesting should happen at some Kc for one defect-- or at its inverse, which is like a temperature, pi/2. So maybe around pi/2 here there should be something interesting that happens.
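Collecting the pieces of the one-defect estimate just described:

```latex
\beta E_{1}=\frac{K}{2}\int_{a}^{L} 2\pi r\,dr\;\frac{1}{r^{2}}
          =\pi K \ln\frac{L}{a},
\qquad
Z_{1}\simeq \left(\frac{L}{a}\right)^{2} y\; e^{-\pi K\ln(L/a)}
     = y\left(\frac{L}{a}\right)^{2-\pi K},
```

so isolated defects proliferate for K < 2/pi, that is, for K inverse greater than pi/2.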
In fact, if the theory were one of independent vortices, I would predict that there would be no vortices up to here, and then whole bunches of vortices would appear later on. But the point is that these are not vortices that don't interact with each other. There are interactions between them. And if I look at a situation with many vortices, what I need to do is to calculate the partition function for the defects, for the vortices, and that has a resemblance to the Coulomb system, so we call it z sub q. And this z sub q is obtained by summing over all numbers of these defects that can appear in the system. So let's do a sum over n, starting from 0 to however many. And if I have a situation in which there are n vortices in the system, I clearly have to pay a cost of y per core for each one of them. So there's a factor of y raised to the power of n. These vortices can then be placed anywhere on the lattice; in the same way that I had this factor of L/a for a single vortex, I will have the ability to put each one of these vortices at some point on the lattice. And then we found that when you do the calculation, the distortions that are caused by the individual vortices simply add up on top of each other. And when we added them up, superimposed the gradients of theta that correspond to the different vortices, and calculated the integral of gradient of theta squared, what we found was that there was an interaction between them that I could write as 4 pi squared K, sum over distinct pairs, i less than j, of qi qj, times the Coulomb interaction between positions xi and xj. And actually, the form of this came about as follows. Basically, the charges of these topological defects are multiples of 2 pi, but these qi's I have written are minus or plus 1. So the two pis are absorbed into the charges. The actual charges being 2 pi is what makes this 4 pi squared. K is the strength of the interaction.
Furthermore, we have to require the system to be overall neutral because, otherwise, there would be a large energy for creating a monopole in a large system. And again, just as a matter of notation, our C of x is 1 over 2 pi times the log of the displacement in units of this a, because we can't allow these things to come very close to each other. So our task was to calculate properties encoded in this partition function, which is, in some sense, a grand canonical system of charges that can appear and disappear. And our expectation is that at low temperatures, essentially all I have are a few dipoles that are kind of small. As I go to higher temperature, the two monopoles making up a dipole can fluctuate and go further from each other. And eventually, at some point, they will all be mixed up together, and the picture should be that of a mixture of plus and minus charges in a plasma. Yes? AUDIENCE: [INAUDIBLE] if we have an external field, would this also be fixed [INAUDIBLE], like an edge or something external? PROFESSOR: It is very hard to imagine what that external field has to be in the language of the xy model, because what you can do is put a field that, let's say, rotates the spins on one side-- let's say to point down-- and spins on the other side to point up. But then what happens is that the angles would adjust themselves so that at 0 temperature, you would have a configuration that would go from plus to minus, and all topological charges would be on top of that base configuration. So that kind of field certainly does not have any effect that I can ascribe over here. If I change my picture completely and say forget about the xy model, think about this as a system of point charges, then I can certainly, like I did last time, put an electric field on the system and see what happens. Yes? AUDIENCE: [INAUDIBLE] saying that you can create even bigger defects, where q would be [INAUDIBLE] plus minus 1, plus minus a bigger integer? But that's [?
discounted ?] as a higher order effect. PROFESSOR: Yes. So intrinsically, we could go beyond that. We would have one fugacity for the creation of cores of single charge, another for cores of double charge, et cetera. You expect the core energies for those double charges to be much larger-- and hence their fugacities much smaller-- because the configuration is going to be more distorted at the core. And in some sense, you can imagine that we are including something similar to that, because we can create two single charges that are close to each other. Yes? AUDIENCE: So why is a Coulomb state [INAUDIBLE]? Is that high order? PROFESSOR: Well, you would expect that if y is a small parameter, you would like to create as few things as possible. The reason you create any is because you have an entropy gain. So I would say energetically, even creating a pair is unfavorable. But the pair has lots of places that it can go, so because of the gain in entropy, the system is willing to accept that. If I create a quadrupole, you say, well, I break the quadrupole into two dipoles and then I have much more entropy. So that's why it's not-- we can have that term, but it is going to have much less weight. So I won't repeat the calculation, but last time we indeed asked what happens if we have, let's say, some kind of an electric field. And because of the presence of the electric field, dipoles are going to be aligned. And the effect of that is to reduce the effective strength of all of the Coulomb interactions. We found that the effective strength was reduced from K by an amount that was related to the likelihood of creating a dipole of size r. And that was clearly proportional to y squared e to the minus 4 pi squared K times the log of r/a from there. That's the probability to create a dipole of this size. And then I have to, in principle, integrate over all dipole sizes.
But writing this as an orientationally independent result is not correct, because in the presence of an electric field, there is more likelihood to be oriented in one direction as opposed to the other direction. So this factor of e to the cosine theta, et cetera, that we expanded, first of all gave us an average of cosine of theta squared-- so there was a factor of 1/2 here; rather than a full rotation, I was doing an average of cosine squared to get that factor. Expanding that actually gave me a factor of 4 pi squared K, because of the Coulomb term that I had up there. And then I had essentially the polarizability of one of these objects, which goes like r squared-- again, coming from expanding this factor that we had in the exponent. Actually, I calculated everything in units of a, so I should really write this accordingly. And this was correct to order of y squared. And in principle, one can imagine that there are configurations of four charges-- quadrupole-like things, et cetera-- that further modify this. And this is a result that I have to-- oops, this was 1 minus. It was an overall factor of K. This was the correction term that we calculated. And then for the size of these dipoles, we have to integrate from a to the size of the system or, if you like, infinity. And although we were attempting to make an expansion in powers of y, what we see is that because this factor gives me a power of r, the r has to be integrated against these three factors of r dr. Whether or not this integral is dominated by its upper cut-off, and hence divergent, depends on the value of K; this is related to the same divergence that we had for a single vortex. So this perturbation theory is, in principle, not valid-- no matter how small I try to make y-- as long as my K inverse is greater than pi/2. So what we decided to do was not to do this entire integration that gives us infinity, but rather to recast this as a renormalization group in which the core size is changed from a to ba.
Now one way to see the effect of it-- last time, I did this slightly differently-- is to ensure that the result for the partition function of one charge is unmodified. If I simply do this change, the weight should not change for one defect. And so clearly, you can see that there's a change in the power of b that I need to compensate by changing the core energy by a factor of b to the power of 2 minus pi K. So the statement that I have for z1 is that, in order for z1 to be left invariant, I have to rescale the core energies by this factor. And then over here, I essentially just integrate from a up to ba-- just get rid of those short-distance interactions. And so this becomes minus 4 pi cubed K squared y squared-- as we're looking at the dipole contribution-- times the integral from a to ba of dr r cubed divided by a to the fourth, times a/r to the power of 2 pi K. It means that I probably made a mistake somewhere-- yeah, this has a 2 pi. I forgot the 2 pi from the definition of the log. So these are the recursion relations. So basically, the same results at large scale for the Coulomb gas can be obtained either from the theory that is parametrized by y and the original K, or, after going through this removal of short distance degrees of freedom, from a theory in which y is modified by this factor and K is modified by this factor. And as usual, we can change these recursion relations into flow equations by choosing a value of b that is very close to 1, and then essentially converting these things to y evaluated at a scale slightly larger than 1, and from that, constructing dy by dl. And dy by dl simply becomes 2 minus pi K, times y. And I can do the same thing here. This is K plus dK by dl. The K part cancels. And what I will get is that dK by dl is minus 4 pi cubed K squared y squared. And actually, all I need to do is evaluate this on the shell where r equals a, and you can see that the integral essentially gives me 1-- it gives you a delta l, which basically goes over here. So these are order of y squared.
Actually, it is better to cast the results not in terms of K, but in terms of K inverse, which is kind of like a temperature variable. And then what we get is that d by dl of K inverse is essentially minus 1 over K squared times dK by dl. So the minus K squared cancels, and it simply becomes 4 pi cubed y squared, plus order of y to the fourth. And dy by dl is 2 minus pi K, times y, plus higher orders in y. So these are the equations that describe the changing parameters under rescaling for this Coulomb gas. And so we can plot them. Essentially, we have two parameters-- y, and we have K inverse. And what we see is that the change in K inverse is always positive. So the flow should always be to the right. For y, whether y increases or decreases depends on whether K is below or above this critical value of 2/pi that we keep encountering. And in particular, what we will find is that there is a trajectory that goes into this point. And if you are to the left of that trajectory, y is getting smaller, K inverse is getting larger. And so you go like this. Eventually, you land on a point down here where y has gone to 0. And if y has gone to 0, then K inverse does not change. So you have a structure where you have a line of fixed points-- any point over here is a fixed point, and it is also a stable fixed point. It is true that points that are over here, if you are exactly at y equals 0, are fixed points. But as soon as you have a little bit of y, then they start flowing away. And essentially, the general pattern of flow is something like this. So I go back to my original xy model, and I'm at some value at low temperatures-- that means I'm down here-- but presumably, there's a finite cost for creating the core, so I may be over here. And when I go to slightly higher temperature, K inverse becomes larger. But the core energy, in units of kT, typically becomes smaller at higher temperatures, because everything is scaled by 1/kT.
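As a numerical illustration-- my own, with arbitrarily chosen starting points-- one can integrate these flow equations directly and watch the two behaviors: on the low temperature side, the fugacity y flows to zero while K inverse saturates below pi/2; on the high temperature side, y runs away.

```python
import numpy as np

def kt_flow(kinv0, y0, dl=1e-3, lmax=40.0):
    """Euler-integrate the Kosterlitz-Thouless flow equations
       d(K^-1)/dl = 4 pi^3 y^2 ,   dy/dl = (2 - pi K) y ,
    stopping early if the fugacity runs away past 1."""
    kinv, y = kinv0, y0
    for _ in range(int(lmax / dl)):
        dkinv = 4 * np.pi**3 * y**2
        dy = (2 - np.pi / kinv) * y
        kinv += dkinv * dl
        y += dy * dl
        if y > 1.0:              # vortex proliferation: high-temperature phase
            break
    return kinv, y

# low-temperature start: y is irrelevant and flows onto the fixed line y = 0
kinv_lo, y_lo = kt_flow(kinv0=1.2, y0=0.02)
# high-temperature start (K inverse > pi/2): y is relevant and blows up
kinv_hi, y_hi = kt_flow(kinv0=1.7, y0=0.02)
```

A plain Euler step is enough here because the flow is smooth and slow; the qualitative distinction between the two basins does not depend on the integrator.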
So as I go to higher and higher temperatures, my xy model presumably goes through some trajectory. The trajectory of the changing xy model as temperature is modified has nothing to do with RG-- this is basically the xy model on increasing T. And what is happening in the xy model on increasing T is that at low temperatures, I'm at some point here, which, if I look at larger and larger scales, eventually goes to a place where the effective core cost for creating vortices is so large that they are not created at all. So then I'm back to that theory that has no vortices and simply gradient squared, and I expect that correlations will be given by this power law type of form. However, at some point, I am in this region. And when I'm in this region, I find that maybe even initially the core energy goes down-- or y goes down-- but eventually, I end up going to a regime where both y and the effective temperature-- the inverse of K-- are large. So essentially anywhere here, eventually at large scales, I will see that I will be creating vortices pretty much at ease. And at sufficiently large scales, my picture should be that of a plasma in which the plus and minus charges are moving around. And so then there is this transition line that separates the two regimes. So let's find the behavior of that. And clearly, what I need to do is to focus in the vicinity of this fixed point that controls the transition. That is, anything that undergoes the transition eventually comes and flows to the vicinity of this point. So what we can do is construct, if you like, a two dimensional blow-up of that. And what I'm going to do is to introduce a variable x, which is K inverse minus 2/pi-- essentially, how far I have gone from this point in this direction-- and y I can use as is. And so what we see is that my K inverse is 2/pi, my critical value. AUDIENCE: I think that should be a pi/2. PROFESSOR: K inverse-- this is pi/2. Thank you.
Which means that this has to be pi/2, and this has to be pi/2. And this I can write as pi/2 times 1 plus 2x/pi. So that to lowest order in x, K is 2/pi times the inverse of this factor, which is 1 minus 2x/pi, plus order of x squared-- I'm expanding for small x. I put that value in here, and I find that my dy by dl is now 2 minus pi times what I have over there. Pi times 2/pi gives 2, which cancels; and minus pi times 2/pi times minus 2x/pi gives plus 4x/pi. Multiplied by y, this is simply 4/pi times xy. Now the point is that typically we are used to expanding in the vicinity of the important fixed point. And in all the cases that we had seen so far, once we did that expansion, we ended up with a linear behavior-- dy by dl equals something times y. Here we see that the vicinity of this point clearly has a quadratic type of behavior. And this quadratic behavior leads to some unusual and interesting critical behavior that we are going to explore. So let's stick with this a little bit longer. We can see that if I look at d by dl of y squared, it is going to be 2y times dy by dl-- so I have to multiply this by 2y, and I will get 8/pi times xy squared. Why did I do that? It's because you will recognize this xy squared shortly. Let's go and do dx by dl. dx by dl is simply dK inverse by dl, so that is 4 pi cubed y squared. And now you can see that if I do d by dl of x squared, I will have 2x dx by dl, so I will have 8 pi cubed xy squared. So now we can recognize that these two quantities, up to some factor of pi to the fourth, are really the same thing. So from here, we conclude that d by dl of x squared minus pi to the fourth y squared-- essentially, once I do that, I will get 0. So as I go along these trajectories, x and y are changing, but the combination x squared minus pi to the fourth y squared is not changing.
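The conservation of x squared minus pi to the fourth y squared along the linearized flow can also be checked numerically. This little sketch-- again mine, not the lecture's-- integrates dx/dl = 4 pi^3 y^2, dy/dl = (4/pi) x y from an arbitrary starting point in the low temperature region and confirms that the hyperbola constant stays put.

```python
import numpy as np

def linearized_kt_flow(x0, y0, dl=1e-4, lmax=2.0):
    """Euler-integrate the flow linearized about the KT fixed point:
       dx/dl = 4 pi^3 y^2 ,   dy/dl = (4/pi) x y ."""
    x, y = x0, y0
    for _ in range(int(lmax / dl)):
        dx = 4 * np.pi**3 * y * y
        dy = (4 / np.pi) * x * y
        x += dx * dl
        y += dy * dl
    return x, y

x0, y0 = -0.3, 0.02                    # a point in the low-temperature region
x1, y1 = linearized_kt_flow(x0, y0)
c0 = x0**2 - np.pi**4 * y0**2          # hyperbola constant at the start
c1 = x1**2 - np.pi**4 * y1**2          # ... and after flowing
```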
So all of the trajectories that I have drawn-- at least sufficiently close to this point around which I am expanding-- correspond to lines along which x squared minus pi to the fourth y squared is some constant I'll call c. And that constant must match whatever you started with. So if the trajectory starts at the combination x0 and y0-- your original values-- I can figure out what my x0 squared minus pi to the fourth y0 squared is. And that's going to stay constant along the entire trajectory. So these trajectories are, in fact, portions of hyperbolas, and this is the equation that you would have for a hyperbola in x and y. Now clearly there are two types of hyperbolas-- the ones that go like this and the ones that go like that. In fact, this one and this one are pretty much the same thing. And what distinguishes this pattern versus that pattern is whether this constant c is positive or negative, because you can see that out here, ultimately you end up at a point where y has gone to 0. So whether x is positive or negative doesn't matter; this combination will be positive. So throughout here, what I have is that c is positive, whereas what I have up here is c that is negative. Presumably, there is another trajectory here, and down here, c is again positive. And again, you can verify that over here c is negative, because over here you cross the line where x is 0, but you have some value of y. So if I were to blow up that region, both as a function of x and y-- well, first of all, I will have a particular set of trajectories, the ones that end up at this important fixed point, which correspond clearly to c equal to 0. So c equal to 0 will give me two straight lines. So presumably there is this straight line, and then there is another straight line that goes out there. And then I have this bunch of trajectories that are these hyperbolas that end up over here. I can have hyperbolas that will be going out.
And then I will have hyperbolas that are like this. These are all in the high temperature phase, so let's draw them like this. So one thing that you immediately see is the location of the transition, which is given by this critical line when c equals 0. So statement number one that we can get is that the transition line corresponds to c equals 0. So solving for x as a function of y, I will get that x critical is either minus or plus pi squared y. Clearly from the figure, the solution that I want is the one that corresponds to minus. My x was K inverse minus pi/2. This is Kc inverse minus pi/2. And so what I see is that Kc inverse-- the correct transition temperature-- is, in fact, lower than the value of pi/2 that we had deduced assuming that there is only a single vortex in the entire system, by an amount that to lowest order is related to the core energy, or core fugacity. And presumably, there are higher order terms that I haven't calculated. So this number that we had calculated by looking at a single defect-- we can see that in the presence of multiple defects, it starts to get lower. And this is precisely correct in the limit where y is a small quantity. So that's the transition line. We can look to the left or to the right. Let us just look at the low temperature phase. So for the low T phase, we expect c to be negative. And I can, for example, make that explicit by writing it as-- OK, so c was x0 squared minus pi to the fourth y0 squared-- what the starting parameters of the system dictate. If you are in the low temperature phase such that c is negative, it means that you are at temperatures that are smaller than Tc. So let's see-- T minus Tc, let's write it as Tc minus T, which would be positive. But this has to be negative, so let me just introduce some parameter b-- it's not the same b as here, just some coefficient-- that has to be squared. So I know that as I hit Tc, this c goes to 0.
If I'm slightly away from Tc along the trajectory that I have indicated over here-- right here I'm at 0, so right here I'm slightly negative. And there's no reason why the value that I calculate from x0 squared minus pi to the fourth y0 squared should not be an analytical function. So I have expanded that analytical function, knowing that at Tc it is equal to 0. There will be higher order terms for sure, but this is the lowest order term that I would have in that expansion. And as we said, this is preserved all along the trajectory that ends on this point. So along that trajectory, this is the same as x squared minus pi to the fourth y squared, which means that I can write y squared as 1 over pi to the fourth times x squared plus b squared Tc minus T. So if I want to solve for this curve, that's what I will have for some value of this quantity. And what I do is look at what that implies for dx by dl. dx by dl is 4 pi cubed y squared. I substitute the y squared that I have over there. I will get 4/pi times x squared plus b squared Tc minus T. So under rescaling, this tells me what is happening to x. And in particular, what I can do is to integrate this equation. I have dx divided by x squared plus b squared Tc minus T equals 4/pi dl-- just rearranging this differential equation. And this I can certainly integrate out to l. This you should recognize as the differential form of the inverse tangent, up to a factor of 1 over b square root of Tc minus T. So I integrate this. And on the other side, I have 4/pi l. So eventually, I know that-- wait, is that what I wanted to do? I will need this later on, so we'll use it later. What I needed to get is the eventual fate of this differential equation. Eventually, we see that this differential equation arrives at a point that I will call x infinity.
When it arrives at x infinity, this is 0 and y is 0, so I immediately know-- I didn't need to do any of that calculation-- that this expression has to be 0, so x infinity is minus b times the square root of Tc minus T. So let me figure out what I did with the signs that is incorrect. AUDIENCE: [INAUDIBLE] temperature [INAUDIBLE] positive [INAUDIBLE]. PROFESSOR: In the low temperature phase, I had indeed stated that c has to be positive, which means that this coefficient had better be positive, which means that I would have a minus sign here. And then x would be plus or minus b times the square root of Tc minus T. The plus solution is somewhere out here, which I'm not interested in. The solution that I'm interested in corresponds to this value. You say, well, what is important about that? You see that various properties of this low temperature phase are characterized by this power law, as opposed to exponential, behavior. The power law is determined by the value of K where the description in terms of this gradient squared theory is correct. Now out here, the description is not correct, because I still have the topological defects. But if I look at sufficiently large distances, I see that the topological defects have disappeared. But by the time the topological defects have disappeared, I don't have the original value of K; I have a slightly different value of K. So presumably, the properties are going to be described by the value that this K inverse takes at large scales. And so what I expect is the effective behavior of this K inverse-- actually, the effective behavior of K-- as a function of whatever the temperature of the system is. We expect that in the original xy model, or any system that is described by this behavior, there is a critical temperature, Tc, such that at higher temperatures, correlations are decaying exponentially. So essentially, the effective value of K has gone to 0. There is no stiffness parameter.
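With the sign fixed as just discussed, the low temperature trajectory can be summarized compactly (my restatement of the equations on the board):

```latex
c = x_{0}^{2}-\pi^{4}y_{0}^{2} = b^{2}\,(T_{c}-T) > 0,
\qquad
y^{2}=\frac{x^{2}-b^{2}(T_{c}-T)}{\pi^{4}},
\qquad
\frac{dx}{dl}=\frac{4}{\pi}\left[x^{2}-b^{2}(T_{c}-T)\right],
```

whose attracting endpoint is x at infinity equal to minus b root Tc minus T, with y equal to 0.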
So basically at high temperatures, you should be over here. What I see is that the effective value of K, however, is meaningful all the way down to 2/pi. So there is a value here at 2/pi, which corresponds to the largest temperature, or the smallest K, that is acceptable. Now what I see is that on approaching the transition, the value of K-- I have it up there-- is 2/pi, this limiting value that we have over here, and then there is a correction that is 4 over pi squared times x. And presumably here, I have to put in x infinity. And what I have for x infinity is something like that, so I will get 2/pi plus 4b over pi squared times the square root of Tc minus T. So the prediction is that the effective value of K comes to its limiting value of 2/pi with a square root singularity. So we can replace the theory that describes anything in this universality class in the low temperature phase by an effective value of K. If we then ask how that effective value of K changes as a function of temperature, the prediction is that, well, at very low temperature, it's presumably inversely related to temperature. It will come down, but at some point it will change its behavior, come with a square root singularity to a number that is 2/pi, and then jump to 0. Now you are justified in saying, well, this is all very obscure. Is there any way to see this? And the answer is that people have experimentally verified this, and I'll tell you how. So a system that belongs to this universality class-- and we've mentioned it all the way through the class-- is the superfluid. We've said that the superfluid transition is characterized by a quantum order parameter that has a magnitude, but then it has a phase theta. And roughly, we would say that the phase theta should be described by this kind of theory at low temperatures. So if we want basically a two dimensional system, what we need to do is to look at superfluid films.
And this is something that Bishop and Reppy did in 1978, where they constructed the analog of the Andronikashvili experiment that we mentioned in 8.333, applied to a film. So let me remind you what the Andronikashvili experiment was. Basically, you have a torsional oscillator. This torsional oscillator was connected to a vat that had helium in it. So basically, this thing was oscillating, and the frequency of oscillations was related to some kind of an effective torsion constant k divided by the huge mass which is contained within the cylinder. So you can probe this classically-- you would say there's some kind of a density here, and you can calculate what the mass is if you know what this density is and what the omega is. Now what he noticed was that if this thing was filled with liquid helium and you went below the Tc of helium, then suddenly this frequency changed. And the reason was that the mass that was rotating along with this whole thing had changed, because the part that was superfluid was sitting still, and the normal part was the part that was oscillating. So the mass that was oscillating was reduced, and the frequency would go up. And from the change in frequency, he could figure out the change in the density of the part that was oscillating, and hence calculate what the density of the normal part was. So what Bishop and Reppy did was to make this two dimensional. How did they make it two dimensional? Rather than having a container of helium, what they did was they made, if you like, some kind of a toilet paper roll-- they call it a jelly roll-- of Mylar. So it was Mylar that was wrapped into a cylinder. And then the helium was adsorbed between the surfaces of the Mylar. So effectively, it was a two dimensional system in this same setup. So for that two dimensional system, they-- again, with the same method-- measured the change in frequency. They found that if they go to low enough temperatures, suddenly there is a change in frequency.
Of course, the transition temperature that they were seeing in this case was something like 1 degree Kelvin or a fraction of 1 degree Kelvin, whereas when you have the bulk superfluid, it's 2.17 degrees Kelvin-- clearly, because of the dimensionality, the critical temperature changes. But you would say that's not particularly surprising. So they could measure the change in frequency and relate the change in frequency to the density that became superfluid. Now how does the superfluid density tell us anything about this curve? Well, the answer is that everything is going to be weighted by something like e to the minus beta times some energy. The one part of the energy that is associated with oscillations is certainly the kinetic energy. So let's see what we would write down for beta times the kinetic energy of the superfluid, or superfluid film. What I have to do is, beta will give me 1 over kT. The kinetic energy is obtained by integrating mass times velocity squared, or density integrated against velocity squared. It's a two dimensional film, so we integrate as we go along the film. The superfluid velocity can be related to the mass of helium, h bar, and the gradient of this phase of the superfluid order parameter. So you can, for example, write your wave function as psi bar e to the i theta of x, calculate what the current is using the usual formula of h bar over m, psi star grad psi minus psi grad psi star, and you would see that the superfluid velocity is something like this. So this is going to give me rho over kT, h bar over the mass of helium-4 squared, integral of gradient of theta squared, which you can see is identical to the very first line that I wrote down for you. And we can see that k can be interpreted as rho over kT times h bar over m squared. So all of these quantities-- h bar and m, you know. T is the temperature that you're measuring. Rho you get through the change in this frequency. And so then they can plot what [INAUDIBLE] rho is as a function of temperature.
And we see that it's very much related to k. And indeed, they find that the rho that they measure has some kind of behavior such as this. And then they go and change their Mylar, make the films thicker or whatever. They find that the transition temperature changes, so that a different type of film would show a behavior such as this. And thicker films will have a higher critical temperature. They do it for a number of film thicknesses, and they got behaviors such as this, and found that this behavior followed this gray line, which is exactly what is predicted from here. The prediction is that rho at Tc over Tc should be kB m squared over h bar squared times the critical value of k, which we've calculated to be 2/pi. So they could precisely check this 2/pi that we've calculated. They could more or less see this square root approach to this singularity. I'm not sure the data at that point were good enough so that they could say this exponent is precisely 1/2. So this was for the low temperature phase. What can I say about the high temperature phase? So the high temperature phase is where my c is negative. So there I can write x0 squared minus pi to the fourth y0 squared as being a negative number, which I will write as minus b squared times T minus Tc. So I have now T that is greater than Tc, multiplied with some constant, and I get this. And this is the same all along the trajectory. So as I go further and x and y change, they will change in a manner that is consistent with this, which implies that as x changes with l and y changes with l, the two of them will be related by y squared being x squared plus b squared times T minus Tc, divided by pi to the fourth. So this is where I don't really see the endpoint of the trajectory. I just want to see how the trajectory is behaving. So I go back to this equation, dx by dl is 4 pi cubed y squared. Substituting that y squared, I will get 4/pi times x squared plus b squared times T minus Tc.
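The gray line encodes the universal jump, and its numerical value is easy to check from the relation rho/Tc = (2/pi) kB m^2 / h-bar^2 stated above. This is just a units exercise in SI constants, a rough check rather than a substitute for the experimental analysis:

```python
import math

# Universal jump of the 2D superfluid density: at Tc the stiffness
# k = rho hbar^2 / (m^2 kB T) equals 2/pi, so
# rho / Tc = (2/pi) kB m^2 / hbar^2.  SI constants:
kB = 1.380649e-23        # J/K
hbar = 1.054571817e-34   # J s
m_He = 6.6464731e-27     # kg, mass of a helium-4 atom

rho_over_Tc = (2 / math.pi) * kB * m_He**2 / hbar**2   # kg m^-2 K^-1
# in the units usually quoted for films, g cm^-2 K^-1:
print(rho_over_Tc * 1e3 / 1e4)   # ~ 3.5e-9
```

The resulting slope, about 3.5e-9 g per cm^2 per Kelvin, is the kind of number the gray line through the data for different film thicknesses represents.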
And then I rearrange this in a form I can see how to integrate: dx over x squared plus b squared times T minus Tc is 4/pi dl. I integrate the left-hand side. And as I already jumped ahead, it is 1 over b square root of T minus Tc, times the inverse tangent of x divided by b square root of T minus Tc, equal to 4/pi times l. So what do I want to do with this expression? So what I want to do is to see the trajectories that just cross to the high temperature side. So I start with a point that is just slightly to the right of this transition line. Presumably, what is happening is that I will follow the transition trajectory for a long while, then I will start to head out, which means that for this trajectory, if I look at the system over larger and larger scales, initially I find that it becomes harder and harder to create these topological defects. The core energy for them becomes large. The fugacity for them becomes small. But ultimately I manage to break that, and I go to a regime where it becomes easier and easier to create these topological defects. And presumably at some point out here, everything that I have said I have to throw out, because I'm making an expansion assuming that y is small, x is small, et cetera. So presumably, as I integrate, I come to a point where I say, OK, the differential equations break down. But my intuition tells me that I have reached the regime where I can create pretty much plus and minus charges at ease. So I would say that once I have reached that region where x and y have managed to escape the region where they are small, they have become of the order of 1. Maybe you can put them at 1/3, 1/4-- it doesn't matter. Once they have become something that is not infinitesimal, then I can create these charges more or less at will. I will have a system where I have lots of charges that can be created at ease. And my intuition tells me that in that system, I shall have this kind of decay-- exponential decay. So how far did I have to go in order to reach that value of l?
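These recursion relations are easy to integrate numerically. A minimal sketch follows, using dx/dl = 4 pi^3 y^2 and dy/dl = (4/pi) x y, which is the linearized form consistent with the conserved combination x^2 - pi^4 y^2 used above (my assumption about the sign convention: x < 0 in the low-temperature phase); the initial conditions are illustrative.

```python
import numpy as np

def kt_flow(x, y):
    """Lowest-order Kosterlitz-Thouless recursion relations, with x
    the deviation of the inverse stiffness from its critical value and
    y the vortex fugacity: dx/dl = 4 pi^3 y^2, dy/dl = (4/pi) x y.
    This form conserves x^2 - pi^4 y^2 along the flow."""
    return 4 * np.pi**3 * y**2, (4 / np.pi) * x * y

def integrate(x, y, l_max=40.0, dl=1e-3):
    # crude Euler integration; stop once we leave the perturbative
    # region where the expansion in small x, y makes sense
    for _ in range(int(l_max / dl)):
        dx, dy = kt_flow(x, y)
        x, y = x + dx * dl, y + dy * dl
        if x > 1 or y > 1:
            break
    return x, y

def invariant(x, y):
    return x**2 - np.pi**4 * y**2

# low-temperature side (invariant > 0, x < 0): fugacity flows to zero
x1, y1 = integrate(-0.5, 0.01)
# high-temperature side (invariant < 0): trajectory eventually escapes
x2, y2 = integrate(-0.05, 0.01)
print((x1, y1), (x2, y2))
```

The second trajectory hugs the separatrix for a long "time" l before heading out, which is exactly the behavior exploited in the correlation-length argument that follows.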
I have to go to a correlation length of a size that is larger than the one I started with by a factor of e to the l, where the value of x became something that is of the order of 1. And actually, you can see from here that if I'm very close to the transition, it doesn't matter whether I choose here to be 1/10, 1/2, even 1/100. As long as I'm close enough to Tc, I'm dividing something by something that is close to 0. And this is tan inverse of a large number. And tan inverse of a large number is 90 degrees, or pi over 2. So essentially, I go to some value of x where I can approximate this by pi over 2. And you can see that that is really insensitive to what I choose to be my x, as long as I'm sufficiently close to the critical point. So you can see that once I have done that, I have figured out what my l star is, if you like. And if I substitute that over there, I will get an l star that is about pi/4 times pi/2 times 1 over b square root of T minus Tc. Now these coefficients out front are not that important. What you see is that, indeed, we get a correlation length that, as we approach Tc, diverges, but it is not at all of any of the forms that we had seen before. So typically, we wrote that the correlation length diverges as T minus Tc to some exponent minus nu. This is not that type of divergence. It's a very different type of divergence. And again, its root is in the non-linear version of the recursion relations that we have. The closest thing to this that we have is when we were calculating the correlation length for the non-linear sigma model, where we had something that went exponentially in 1 over temperature. This is even more complicated. Now once you know the singular behavior of the correlation length, you would say that in the two dimensional system, the singular part of the free energy should scale like xi to the minus 2. Essentially, you break your system into pieces that are of the size of the correlation length.
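The insensitivity to the cutoff, and the resulting exponential divergence of the correlation length, can be checked directly from the integrated flow. Here b = 1 and the cutoff values are arbitrary nonuniversal choices:

```python
import numpy as np

def l_star(t, x_cut, b=1.0):
    """Scale l* at which the flow leaves the perturbative region, from
    (1/(b sqrt(t))) arctan(x/(b sqrt(t))) = (4/pi) l evaluated at
    x = x_cut; t = T - Tc, and b, x_cut are nonuniversal choices."""
    return (np.pi / 4) * np.arctan(x_cut / (b * np.sqrt(t))) / (b * np.sqrt(t))

t = 1e-6   # very close to the transition
# the cutoff barely matters: arctan(anything large) ~ pi/2
print([round(l_star(t, xc), 1) for xc in (0.1, 0.5, 1.0)])
# so xi ~ e^{l*} ~ exp(pi^2 / (8 b sqrt(t))): not a power law
print(np.isclose(l_star(t, 1.0), np.pi**2 / (8 * np.sqrt(t)), rtol=1e-2))   # → True
```

All three cutoff choices give l* within a fraction of a percent of each other once t is small, which is the "really insensitive" statement above made quantitative.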
The number of those pieces is L over xi, squared, because you are in two dimensions. So you would get this. So that says that the singular part of the free energy is something like e to the minus pi squared over 4 b square root of T minus Tc. Again, not a power-law singularity. It's an essential singularity. And an essential singularity is a kind of singular function such that, no matter how many derivatives you take, as T comes to Tc, there is no divergence. So for example, if I take two derivatives to get the heat capacity, what I would plot as a function of T at Tc should have no signatures. So basically what you would see is that the curve just continues. There is no signature of a transition in this heat capacity. And indeed, people later on did numerical simulations, et cetera. What they find is that the heat capacity actually has a kind of smooth peak a little bit above Tc, which is the location where there's lots and lots of vortex unbinding going on. But at Tc itself, there is no signature of a singularity. As far as I know, there's no experimental case in which this correlation length has been observed. So the lesson that we can take from this particular system is that two dimensional systems are potentially interesting and different. We had this Mermin-Wagner theorem that we mentioned in the beginning, which said that there should be no true long range order in two dimensions. That is still true. But despite that, there could be phase transitions with quite observable consequences. And a particular type of transition in two dimensions that we will pursue next lecture-- so I'll give you a preview-- is that of melting. So the prototype of a phase transition you may think of is either liquid-gas or liquid-solid. And you could say, well, you have studied phase transitions to such a degree. Why not go back and talk about the melting transition, for example? The reason is that the standard melting transition is typically first order.
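The claim that an essential singularity leaves no trace in the heat capacity can be checked numerically on f(t) = exp(-A/sqrt(t)); here A = 1 and the finite-difference step are arbitrary choices for illustration.

```python
import numpy as np

def f_sing(t, A=1.0):
    """Singular part of the free energy near the KT transition,
    with t = T - Tc > 0; A = 1 is an arbitrary amplitude."""
    return np.exp(-A / np.sqrt(t))

def second_derivative(f, t, h=1e-4):
    # finite-difference "heat capacity" d^2 f / dt^2
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

# Sufficiently close to Tc, every derivative of f dies away: the
# heat capacity shows no signature right at the transition.
for t in (0.01, 0.005, 0.002, 0.001):
    print(t, second_derivative(f_sing, t))
```

The printed values fall off rapidly as t shrinks, unlike a power-law singularity, whose second derivative would blow up on approach to Tc.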
And we've seen that universality and all of those things emerge when you have a diverging correlation length. So you want to have a place where there is the potential for a continuous phase transition. And it turns out that melting in two dimensions provides that. So in two dimensions, you could have a bunch of points that could, for example, in a minimum energy configuration at T close to 0, form a triangular lattice. Now when you go to finite temperature-- as we discussed, again, at the very first lecture-- you will start to have distortions around this. We can describe these distortions by a field u of x and y, and then go to the appropriate continuum limit that describes the elasticity of these things. And it is going to look very much like that gradient of theta squared term that we wrote at the beginning, except that since this u is a vector, as we saw, even for an isotropic material, you will have the potential for having multiple elastic constants. But modeled on that, the conclusion that you would have is that as long as it is OK for me to make an expansion that is like the elastic theory-- some kind of a gradient of u expansion-- the conclusion would be that the correlations in u will grow logarithmically as a function of size. And you will not have true long range order, but you will have some kind of a power-law behavior, such as the one that we have indicated over there. On the other hand, when you go to very high temperature, presumably this whole thing melts. There is no reason to have correlations beyond a few atoms that are close to you. And so typically at high temperature, correlations will decay exponentially. So this low temperature expansion, the elastic theory expansion that we have written down, has to break down also in this case.
And a particular mechanism for its breakdown in two dimensions is to create these topological defects, which in the case of a solid will correspond to dislocation lines that, for example, correspond to adding an additional row of particles here, terminating at some point. And we can go through exactly the same kind of story as we had before and conclude that these dislocations, because of the competition between their energy cost growing logarithmically and their entropy gain growing logarithmically, will unbind at a critical temperature. And so that provides a mechanism for describing the melting of two-dimensional materials in a language that is very similar to this, except for the complications that have to do with this being a vector rather than a scalar quantity. And so what we find is that these topological charges are different from the minus 2 pi and plus 2 pi of the vortices. The interaction between them is a particular version of the Coulomb interaction, but many of the other results go through. And we will get an idea of what happens when the solid melts because of the unbinding of these dislocations. But there is a puzzle that we will find: we're not melting into a liquid, but into something which is more like a liquid crystal. So [INAUDIBLE] they discovered also something about liquid crystals.
MIT 8.334 Statistical Mechanics II, Spring 2014. Lecture 1: Collective Behavior, from Particles to Fields, Part 1.

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: All right. So what we are doing is covering 8.334, which is statistical physics. And let me remind you that basically statistical physics is a bridge from microscopic to macroscopic perspectives. And I'm going to emphasize a lot on changing perspectives. So at the level of the micro, you have the microstate that is characterized, maybe, by a collection of momenta and coordinates of particles, such as particles of gas in this room. Maybe those particles have spins. If you are adsorbing things on a surface, you may have variables that denote the occupation-- whether a site is occupied or not. And when you're looking at the microscopic world, typically you are interested in how things change as a function of time. You have a kind of dynamics that is governed by some kind of a [INAUDIBLE] which is dependent on the microstate that you are looking at, and tells you about the evolution of the microstate. At the other extreme, you look at the world around you and the macro world. You're dealing with things that are described by completely different quantities. For example, if you're thinking about the gas, you have the pressure. You have the volume. So somehow, the coordinates managed to define a macrostate, which is characterized by a few parameters in equilibrium. If you have something like a collection of spins, maybe you will have a magnetization. And again, another important thing that characterizes the equilibrium description is temperature.
And when you're looking at things from the perspective of macro systems in equilibrium, then you have the laws of thermodynamics that govern constraints that are placed on these variables. So, totally different perspectives. And what we have is that we need to build a bridge from one to the other, and this bridge is provided by statistical mechanics. And the way that we described it in the previous course is that what you need is a probabilistic prescription. So rather than following the time evolution of all of these degrees of freedom, you'll have a probability assigned to the different microstates that is dependent on the macrostate. And for example, if you are dealing with the canonical ensemble at a particular temperature, this form is e to the minus beta times the Hamiltonian that we have here governing the dynamics. Beta was 1 over kT. So I could write it in this fashion. And the thing that enabled us to make a connection between this deterministic perspective and this equilibrium description via probabilities was relying on the limit where the number of degrees of freedom was very large. And this very large number of degrees of freedom enabled us, although we had something that was probabilistic, to really make very precise statements about what was happening. Now, when we were doing this program in 8.333, we looked at very simple systems that were essentially non-interacting, like the ideal gas. Or we put in a little bit of weak perturbation and by some manipulations, we got things like liquids. But the important thing is that this program we could carry out precisely only when we had no interactions. In the presence of interactions we encountered, even in our simplified perspective, new things. We went from the gas state to a liquid state. We didn't discuss it, but we certainly said a few things about solids. And clearly there are much more interesting things that can happen when you have interactions.
You could have other phases of matter that we didn't discuss, such as liquid crystals, superconductors, and many, many other things. So the key idea is that you can solve things that are not interacting, going their own ways, but when you put in interactions, you get interesting collective behaviors. And what we want to do in 8.334, as opposed to just building the machinery as in 8.333, is to think about all of the different types of collective behavior that are possible and how to describe them in the realm of classical systems. So we won't go much into the realm of quantum systems, which is also quite interesting as far as its own collective behaviors are concerned. So that's the program. Now, what I would like to do is to go to the description of the organization of the class, and hopefully this web page will come back online. So repeating what I was saying before, the idea is that interactions give rise to a variety of interesting collective behaviors. Starting from pretty much simple degrees of freedom such as atoms and molecules, and adding a little bit of interactions and complexity to them, you could get a variety of, for example, liquid crystal phases. But one of the things to think about is this: you think about all of the different atoms and molecules that you can put together, and the different interactions that you could put among them-- you could even imagine things that you construct in the lab that didn't exist in nature-- and you put them together, but the types of new behavior that you encounter are not that many. At some level, you learn about the three phases of matter-- gas, liquid, solid. Of course, as I said, there are more, but there are not hundreds of them. So the question is why, given the complexity of interactions that you can have at the microscopic scale, you put things together, and you really don't get that much variety at the macroscopic level.
And the answer has to do with mathematical consistency, and has at its heart something that you already saw in 8.333, which is the central limit theorem. You take many random variables of whatever ilk and you add them together, and the sum has a nice simple Gaussian distribution. So somehow mathematics forces things to become simplified when you put many of them together. So the same thing is happening when you put lots of interacting pieces together. New collective behaviors emerge, but because of the simplification, in the same sense as the type of simplification that we see with the central limit theorem, there aren't that many consistent mathematical descriptions that can emerge. Of course, there are nice new ones, and the question is how to describe them. So this issue of what are the possible consistent mathematical forms is what we will address through constructing these statistical field theories. So one of the important things that I will try to impress upon you is to change your perspective. In the same way that there is a big change of perspective in thinking about the microscopic and the macroscopic world, there is also a change of perspective involved in the idea of starting from interacting degrees of freedom, changing perspective, and constructing a statistical field theory. And I'll give you one example of that today that you're hopefully all familiar with, but it shows very much how this change of perspective works, and that's the kind of methodology that we will apply in this course. Basically the syllabus is as follows. So initially I will try to emphasize to you how, by looking at things, by averaging over many degrees of freedom, at long length scales and time scales, you get simplified statistical field theory descriptions. The simplest one of these that appears in many different contexts is this Landau-Ginzburg model that will occupy us for quite a few lectures.
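The central limit theorem at work can be seen in a quick simulation; the exponential distribution below is an arbitrary stand-in for "random variables of whatever ilk".

```python
import numpy as np

rng = np.random.default_rng(0)

def standardized_sum(n, samples=100_000):
    """Sum n iid draws from a strongly non-Gaussian distribution
    (exponential, an arbitrary choice) and standardize the result."""
    x = rng.exponential(size=(samples, n)).sum(axis=1)
    return (x - x.mean()) / x.std()

def skewness(z):
    # third moment of the standardized variable; 0 for a Gaussian
    return np.mean(z**3)

# The exponential has skewness 2; a sum of n of them has skewness
# 2/sqrt(n), so the distribution flows toward the Gaussian as n grows.
for n in (1, 4, 16, 64):
    print(n, round(skewness(standardized_sum(n)), 2))
```

Whatever microscopic distribution you start from, only its mean and variance survive in the sum; the analogous statement for interacting systems is the point of the whole course.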
Now once you construct a description of one of these statistical field theories, the question is how do you solve it. And there are a number of approaches that we will follow, such as mean field theory, et cetera. And what we'll find is that those descriptions fail in certain dimensions. And then to do better than that, you have to rely on things such as perturbation theory. This is a kind of perturbation theory that builds upon the types of perturbation theory that we did in the previous semester, but is closer to the kind of perturbation theory that you would be doing in quantum field theories. Alongside these continuum theories, I will also develop some lattice models that, in certain limits, are simpler and admit either numerical approaches or exact solutions. The key idea that we will try to learn about is that of Kadanoff's perspective of the renormalization group, and how you can, in some sense, change your perspective continuously by looking at how a system would look over larger and larger length scales, and how the mathematical description that you have for the system changes as a function of the scale of observation, and hopefully becomes simple at sufficiently large scales. We will conclude by looking at a variety of applications of the methodologies that we have developed-- for example, in the context of two dimensional films, and potentially, if we have time, a number of other systems. The rest of the lecture today has essentially nothing in common with the material that we will start to cover as of the second lecture. But it illustrates this change in perspective that is essential to the way of thinking about the material in this course. And the context that I will use to introduce this perspective is phonons and elasticity. So you look around you, there's a whole bunch of different solids. There's metals, there's wood, et cetera.
And for each one of them, you can ask things about their thermodynamic properties-- heat content, for example, heat capacity. They are constructed of very different materials. And so let's try to think, if we are going to start from the left side, from the microscopic perspective, how we would approach the problem of the heat content of these solids. So you would say that solids, if I were to go to 0 temperature before I put any heat into them, would be perfect crystals. What does that mean? It means that if I look at the positions of these atoms or particles that are making up the crystal-- I guess I have to be more specific, since a metal is composed of nuclei and electrons-- let's imagine that we look at the positions of the ions. Then in the perfect crystal they will form a lattice, where I will pick three integers, l, m, n, and three unit vectors, and I can list the positions of all of these ions in the crystal. So this combination l, m, n that indicates the location of some particular ion in the perfect solid is indicated by, let's say, the vector r. So this is the position that I would have ideally for the ion at zero temperature that is labelled by r. Now, of course, when we go to finite temperatures, the particles, rather than forming this nice lattice-- let's imagine a square lattice in two dimensions-- start to move around. And they're no longer going to be at the perfect positions. And these distortions we can indicate by some vector u. So when we deform the perfect crystal, this q0 of r changes to a new position, q of r, which is its ideal position plus a distortion field u at each location. OK? Now associated with these changes of position, you have moved away from the lowest energy configuration. You have put energy into the system. And you would say that the energy of the system is going to be composed of the following parts. There's always going to be some kind of a kinetic energy. That's a sum over r of p of r squared over 2m.
Let's imagine the particles all have the same mass m. And then the reason that these particles form this perfect crystal was presumably because of the overlap of electronic wave functions, et cetera. Eventually, if you're looking at these coordinates, there's some kind of a many body potential as a function of the positions of all of these particles. Now, what we are doing is we are looking at distortions that are small. So let's imagine that you haven't gone to such high temperatures where this crystal completely melts and disappears and gives us something else. But we have small changes from what we had at zero temperature. So presumably the crystal corresponds to the configuration that gives us the minimum energy. And then if I make a distortion and expand this potential in the distortion field, the lowest order term, linear in the various u's, will disappear because I'm expanding around the minimum. And the first thing that I will get is from the second order term. So I have one half, sum over the different positions. I have to take the second derivative of the potential with respect to these deformations. Of course, if I am in three dimensions, I also have three spatial indices, x, y, and z, so I would have to take derivatives with respect to the different coordinates, alpha and beta, and sum over them. And then I have u alpha of r, u beta of r prime. And then, of course, I will have higher order terms in this expansion. This is a general potential, so the higher order terms would be of order u cubed and higher. OK? Fine? So I have a system of this form. Now, typically, the next stage if I stop at the quadratic level-- this I would do for a molecule also, not only for a solid-- is to try to find the normal modes of the system. The normal modes I have to obtain by diagonalizing this matrix of second derivatives.
Now, there are a few things that I know that help me, and one of the things that I know is that because the original structure was a perfect solid, let's say, then there will be an element of the second derivative matrix that corresponds to this r and this r prime. That's going to be the same as the second derivative that connects these two points, because this pair of points is obtained from the first pair of points by a simple translation along the lattice. The environment for this is exactly the same as the environment for this. So essentially, what I'm stating is that this function does not depend on r and r prime separately, but only on the difference between r and r prime. So I know a lot-- if I had N atoms in my system, this is not something like N squared over 2 independent things; it is a much lower number. The fact that I have such a lower number allows me to calculate the normal modes of the system by Fourier transform. I won't be very precise about how we perform Fourier transforms. Basically I start with this K alpha beta, which is a function of separation. I can do a sum over all of these separations r of e to the i k dot r, for appropriately chosen k. And I am summing over all pairs of differences, so the argument r here now is what was previously r minus r prime. So basically, what I can do is I can pick one point and go and look at all of the separations from that point. Constructing this object will give me the Fourier transformed object that depends on the wave number k. So then I look at the potential energy of this system minus its value at zero temperature, which from one perspective was one half sum over r, r prime, alpha, beta, of K alpha beta of r minus r prime, u alpha of r, u beta of r prime, in the quadratic approximation.
If I do Fourier transforms, what happens, because it is only a function of r minus r prime and not r and r prime separately, is that in Fourier space it separates out into a sum that depends only on individual k modes. There's no coupling between k and k prime. So here we have r and r prime, but by the time we get here, we just have one k, alpha and beta. We'll do one example of that in more detail later on. I have the Fourier transformed object, and then I have u tilde alpha of k, the Fourier transform of u. So in the same manner that I Fourier transformed this kernel K alpha beta, I can put here a u and end up here with a u tilde alpha and a u tilde beta of k star. So we start over here, if you like. If I have N particles, I have a matrix that is N by N-- actually 3N by 3N if I account for the three different orientations-- and by going to Fourier transforms, we have separated it out: for each of N potential Fourier components, we just have a three by three matrix. And so then we can potentially diagonalize this three by three matrix to get three eigenvalues, lambda alpha of k. Once I have the eigenvalues of the system, then I can find frequencies or eigenfrequencies, omega alpha of k, which would be the square root of this lambda alpha of k divided by m. So you go through this entire process. The idea was you start with different solids. You want to know what the heat content of the solid is. You have to make various approximations to even think about the normal modes. You can see that you have to figure out what this kernel of the interaction is-- Fourier transform it, diagonalize it, et cetera. And ultimately the thing that you're after is that there are these frequencies as a function of wave number. Actually, it's really a wave vector, because there will be three different components-- kx, ky and kz. And at each one of these k values, you will have three eigenfrequencies.
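This block-diagonalization by Fourier modes can be seen concretely in one dimension: on a ring of sites with translationally invariant couplings, the matrix is circulant, and its eigenvalues are exactly the Fourier transform of one row. A small numerical check with made-up nearest-neighbor couplings:

```python
import numpy as np

# On a ring of N sites, a coupling matrix that depends only on the
# separation n - n' is circulant; Fourier modes diagonalize it, and
# its eigenvalues are the discrete Fourier transform of a single row.
N = 8
row = np.zeros(N)
row[1] = row[-1] = -1.0   # nearest-neighbor coupling
row[0] = 2.0              # diagonal, from expanding (u_{n+1} - u_n)^2
K = np.array([[row[(j - i) % N] for j in range(N)] for i in range(N)])

eig_direct = np.sort(np.linalg.eigvalsh(K))
eig_fourier = np.sort(np.fft.fft(row).real)   # K~(k) = 2 - 2 cos(ka)
print(np.allclose(eig_direct, eig_fourier))   # → True
```

Diagonalizing an N-by-N matrix has been replaced by one length-N Fourier transform, which is the one-dimensional version of reducing the 3N-by-3N problem to N separate three-by-three problems.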
And presumably as you span k, you will have a whole bunch of lines that will correspond to the variations of these frequencies as a function of k. Why is that useful? Well, the reason that it's useful is that as you go to high temperature, you put the energies into these normal modes and frequencies. That's why this whole lattice is vibrating. And the amount of energy that you have put in at temperature T, on top of this V0 that you had at zero temperature, up to constants of proportionality that I don't want to bother with, is a sum over all of these normal modes that are characterized by k and alpha-- the polarization and the wave vector-- of the amount of energy that you would put in one harmonic oscillator of frequency omega. And that is something that we know to be h bar omega alpha of k, divided by e to the beta h bar omega alpha of k, minus 1. So the temperature dependence appears in this factor of beta over here. So the energy content went there, and if I want to, for example, ultimately calculate the heat capacity, I have to calculate this whole quantity as a function of temperature, and then take the derivative. So it seems like, OK, I have to do this for every single solid, whether it's copper, aluminium, wood, or whatever. I have to figure out what these frequencies are, what's the energy content in each frequency. And it seems like a complicated engineering problem, if you like. Is there anything about this that transcends having to look at all of these details? And of course, you know the answer already, which is that if I go to sufficiently low temperature, I know that the heat capacity due to phonons for all solids goes like T cubed. So somehow, all of this complexity, if I go to low enough temperature, disappears. And a universal law emerges that is completely independent of all of these details-- microscopics, interactions, et cetera.
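That T cubed law can be checked from the phonon energy formula itself. A sketch in the Debye approximation (linear dispersion up to an illustrative Debye temperature of 300 K, and all overall constants dropped):

```python
import numpy as np

def debye_heat_capacity(T, theta_D=300.0, n_pts=20_000):
    """Heat capacity, up to overall constants, in the Debye model:
    C(T) ~ T^3 * integral_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx.
    theta_D = 300 K is just an illustrative Debye temperature."""
    x = np.linspace(1e-6, theta_D / T, n_pts)
    integrand = x**4 * np.exp(x) / np.expm1(x) ** 2
    return T**3 * np.sum(integrand) * (x[1] - x[0])

# At low T the integral saturates at a constant (4 pi^4 / 15), so the
# heat capacity follows the universal T^3 law whatever the material:
ratio = debye_heat_capacity(4.0) / debye_heat_capacity(2.0)
print(round(ratio, 2))   # → 8.0
```

Doubling the temperature multiplies the heat capacity by 8 once T is well below theta_D; the material only enters through the overall scale, not the exponent.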
So our task-- this is the change of perspective-- is to find a way to circumvent all of these things and get immediately to the heart of the matter, the part that is independent of the details. Not that the details are irrelevant. Because after all, if you want to design some material that functions at some particular set of temperatures, you would need to know much more than this T cubed law that I'm telling you about. But maybe think of it from the perspective of what I was saying before-- how many independent forms there are, in the same sense that adding up random variables always gives you a Gaussian. Of course, you don't know where the mean and the variance of the Gaussian are, but you are sure that it's a Gaussian form. So similarly, there is some universality in the knowledge that, no matter how complicated the material is, its low temperature heat capacity is T cubed. Can we get that by an approach that circumvents the details? So I'm going to do that. But before, since I did a little bit of hand-waving, to be more precise, let's do the one-dimensional example in a little bit more detail. So my one-dimensional solid is going to be a bunch of ions or molecules or whatever, whose zero temperature positions are uniformly separated by some lattice spacing a along one dimension. And I will indicate the deformations by the one-dimensional distortion u n of the nth one along this chain. So then I would say, OK, the potential energy of this system minus whatever it is at zero, just because of the distortion, I will write as follows-- it is a sum over n. And one thing that I can do is to say that if I look at two of these things that are originally at distance a, and then they deform by u n and u n plus 1, the additional stretch beyond a is actually u n plus 1 minus u n. So I can put some kind of Hookean elasticity and write it in this fashion. Now of course, there could be an interaction that goes to second neighbors.
So I can write that as K2 over 2, Un plus 2 minus Un squared, and third neighbors, and so forth. I can add as many of these as I like to make it as general as possible. So in some sense, this is a kind of rewriting of the form that I had written over here, where these things that were a function of the separation-- these K alpha beta of separation, or these K1, K2, K3, et cetera-- appear in the series that you would write down. Now if you go to Fourier space, what you can do is take each Un-- Un is the distortion in the original perspective-- and Fourier transform it. And write it as a sum over K of e to the iK times the position of the nth particle, which is na, times U tilde of K. And once you make this Fourier transform in the expression over here, you get an expression for V minus V0 in terms of the Fourier modes. So rather than having an expression in terms of the amplitudes U sub n, after the Fourier transform, I will have an expression in terms of U tilde of k. So let's see what that is. Forget about factors of proportionality. I have the sum over n. Each one of the Un's I can write in this fashion in terms of U tilde of K. Since this is a quadratic form, I need to have two of these. So I will have the sum over k and k prime. I have the factor of 1/2. Each Un goes with a factor of e to the i n a k. But then I had two Un's. There's a term here, if I do the expansion, which is Un squared. So I have one from k and one from k prime. However, if I have Un plus 1 minus Un, what I have is e to the ika minus 1. I already took out the contribution that was e to the i n a k. From the second factor, I will get e to the ik prime a minus 1. This multiplies K1 over 2. And then I will have something that's K2 over 2, e to the 2ika minus 1, e to the 2i k prime a minus 1, and so forth. Multiplying, at the end of the day, U tilde of k, U tilde of k prime.
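The diagonalization being described can be checked numerically on a small periodic chain. Below is a minimal sketch with nearest-neighbor couplings only; the chain length, the value of K1, and numpy's FFT sign convention are my own choices, not anything fixed by the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K1 = 8, 1.7            # chain length and nearest-neighbor coupling (arbitrary)
u = rng.normal(size=N)    # a random real distortion pattern u_n

# Potential energy directly in real space: (K1/2) * sum_n (u_{n+1} - u_n)^2,
# with periodic boundary conditions.
V_real = 0.5 * K1 * np.sum((np.roll(u, -1) - u) ** 2)

# Same energy from the Fourier modes: shifting by one site multiplies the k-th
# mode by e^{2 pi i k / N}, so by Parseval's theorem
# V = (K1 / (2N)) * sum_k (2 - 2 cos(2 pi k / N)) |u_tilde_k|^2.
uk = np.fft.fft(u)
k = np.arange(N)
V_fourier = (K1 / (2 * N)) * np.sum(
    (2 - 2 * np.cos(2 * np.pi * k / N)) * np.abs(uk) ** 2)

assert np.isclose(V_real, V_fourier)
```

The factor 2 minus 2 cosine of ka appearing in the code is exactly the combination derived on the board.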
Now when I do the sum over n, and the only n dependence appears over here, this is the thing that forces k and k prime to add up to 0. Because if they don't add up to 0, then I'm adding lots of random phases together, and the answer will be 0. So essentially, this sum will give me a delta function that forces k plus k prime to be 0. And so then the additional potential energy that you have because of the distortion ends up being proportional to 1/2, sum over the different k's. Only one k will remain, because k prime is forced to be minus k. And so I have U of k, U of minus k, which is the same thing as U of k complex conjugate. So I will get that. And then from here, I will get K1. Now, k prime is set to minus k. And when I multiply these two factors, I will get 1 plus 1 minus e to the ika minus e to the minus ika. So I will get 2 minus 2 cosine of ka. And then I will have K2, 2 minus 2 cosine of 2ka, and so forth. Why are the lights not on? OK, still visible. So yes? AUDIENCE: So in your last slide, you have an absolute value Uk, but wouldn't it be-- right above it, is it U of k times U star of k prime, or how does that work? PROFESSOR: OK, so the way that I have written it is each Un I have written in terms of U tilde of k. And at this first stage, the two factors of Un that I have here, I treat them completely equivalently with indices k and k prime. So there is no complex conjugation involved here. But when k prime is set to be minus k, then I additionally realize that if I Fourier transform here, I will find that U tilde of minus k is the same thing as U tilde of k star. Because essentially, the complex conjugation appears over here. It's not that important a point. The important point is that we now have an expression for our frequencies. Omega alpha of k-- actually, there's no polarization, it's just omega of k-- is given by the square root of-- there's something like a mass down here.
Again, that's not particularly important, but something like K1, 2 minus 2 cosine of ka, plus K2, 2 minus 2 cosine of 2ka, and so forth. So I can plot these frequencies omega as a function of k. One thing to note is, first of all, the expression is clearly symmetric under k goes to minus k, since it only depends on cosines of k. So it is sufficient to draw one side. The other side, for negative k, would be the mirror image. The other thing to note is that again, if I do this Fourier transformation, and I have things that are spaced by a, it effectively means that the shortest wavelengths that I have to deal with are of the order of a, which means that the wave numbers are also limited by something that I can't go beyond. So this is the thing that, in the generalization over here, you recall means that your k vectors are within the Brillouin zone. In one dimension, the Brillouin zone is simply between minus pi over a and pi over a. Now the interesting thing to note is that as k goes to 0, omega goes to 0. Because all of these factors, you can see, as k goes to 0, vanish. In particular, if I start expanding around k close to zero, what I find is that all of these things are quadratic. They go like k squared. So when I take the square root, I will have an absolute value of k. So I know for sure that these omegas start like that. What I don't know, since I have no idea what K1, K2, et cetera are, is what they do out here. So there could be some kind of strange spaghetti going on over here, I have no idea. There's all kinds of complexity. But that is away from the k close to 0 part. Again, why does it go to 0? Of course, k equals 0 corresponds to taking the entire chain and translating it. And clearly, I constructed this such that if all of the U's are the same-- I take everything and translate it-- there's no energy cost. So there's no energy cost for k equals 0. The energy costs for small k have to be small.
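For the nearest-neighbor chain this small-k behavior can be made completely explicit: omega of k is the square root of K1 times (2 minus 2 cosine ka) over m, which is 2 times the square root of K1 over m, times the absolute value of sine of ka over 2, and that approaches a sound-like linear form as k goes to 0. A quick numerical sketch (the values of K1, m, and a are arbitrary choices of mine):

```python
import math

K1, m, a = 1.3, 1.0, 1.0   # coupling, mass, lattice spacing (arbitrary)

def omega(k):
    """Exact dispersion of the nearest-neighbor harmonic chain."""
    return math.sqrt(K1 * (2 - 2 * math.cos(k * a)) / m)

def omega_linear(k):
    """Small-k (sound-wave) approximation: omega ~ v |k| with v = a sqrt(K1/m)."""
    return a * math.sqrt(K1 / m) * abs(k)

# Near k = 0 the full dispersion is linear to high accuracy...
assert abs(omega(0.01) / omega_linear(0.01) - 1) < 1e-4
# ...while near the Brillouin zone edge (k = pi/a) it clearly is not.
assert abs(omega(math.pi / a) / omega_linear(math.pi / a) - 1) > 0.2
```

The "spaghetti" away from k = 0 depends on K2, K3, and so on, but the linear start at small k survives all of that.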
By symmetry, they have to be quadratic in k, so I take the square root and I will get something linear. Of course, you know this linear part-- we can say that omega is something like a sound velocity times k. So all of these chains, when I go to low enough k's or low enough frequencies, admit these sound-like waves. Now heat content-- what am I supposed to do? I'm supposed to take these frequencies, put them in the expression that I have over here, and calculate what's going on. So again, if I want to look at the entirety of everything that is going on here, I would need to know the details of K2, K3, K4, et cetera. And I don't know all of that. So you would say I haven't really found anything universal yet. But if I look at one of these functions, and plot it as a function of the frequency, what do I see? Well, as omega goes to 0, I can expand this. What I get is kT. Essentially, it's a statement that low frequencies behave like classical oscillators. A classical oscillator has an energy kT. Once I get to a frequency that is of the order of kT over h bar, then because of the exponential, I kind of drop down to 0. So very approximately, I can imagine that this is a function that is kind of like a step. It is either 1 or 0. And the change from 1 to 0 occurs at a frequency that is related to temperature by kT over h bar. So if I'm at some high temperature, up here, and I want to-- so this omega max is kB T over h bar, that's the corresponding high frequency-- I need to know all of these frequencies to know what's going on for the energy content. But if I go to lower and lower temperatures, eventually I will get to low enough temperatures where the only thing that I will see is this linear portion. And I'm guaranteed that I will see that. I know that I will see that eventually. And therefore, I know that eventually, if I go to low enough temperatures, where the excitation energy becomes low enough, it's simply proportional to this integral from 0.
I can extend the upper limit to infinity if I like. dk, h bar v k, divided by e to the beta h bar v k, minus 1. And again, dimensionally, I have two factors of k here. Each k scales with kT. So I know that the whole thing is proportional to kT squared. In fact, there are some proportionality constants that depend on h bar, v, et cetera. It doesn't matter. The point is this T squared. So I know immediately that my heat capacity-- the derivative of this-- is going to be proportional to T. The heat capacity of a linear chain, independent of what you do. So no matter what the set of interactions is, if I start with a situation such as this at 0 temperature, I know if I put energy into it at low enough temperature, I will get this heat capacity that is linear. I don't know how low I have to go. Because how low I have to go depends on what this velocity is, what the other complications are, et cetera. So that's the part that I don't know. I know for sure the functional form, but I don't know the amplitude of that functional form. So the question is, can we somehow get this answer in a slightly different way, without going through all of these things? And the idea is to coarse grain. So what's going on here? Why is it that I got this form? Well, the reason I got this form was I got to low enough temperature. At low enough temperature, I have only the possibility of exciting modes whose frequencies are small. I find that small frequencies correspond to wave numbers k that are small, or they correspond to wavelengths that are very large. So essentially, if you have your solid, you go to low enough temperature, you will be exciting modes of some characteristic wavelength that is inversely proportional to temperature, and becomes larger and larger. So eventually, these long wavelength modes will encompass whole bunches of your atoms. So this lambda becomes much larger than the spacing of the particles in the chain that you were looking at.
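The dimensional argument can be verified directly: with h bar, v, and kB all set to 1 (my choice of units), the integral is the integral from 0 to infinity of dk, k over (e to the k over T, minus 1), which equals pi squared over 6 times T squared, so doubling T should quadruple the low-temperature energy. A numerical sketch, where the grid and cutoff are arbitrary numerical choices standing in for infinity:

```python
import numpy as np

def phonon_energy(T, kmax=200.0, npts=400001):
    """Low-T phonon energy of a 1D chain with omega = v k, in units hbar = v = kB = 1:
    E(T) = integral_0^inf dk  k / (exp(k/T) - 1)  =  (pi^2 / 6) T^2."""
    k = np.linspace(1e-9, kmax, npts)       # finite cutoff stands in for infinity
    f = k / np.expm1(k / T)
    dk = k[1] - k[0]
    return dk * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoid rule

E1, E2 = phonon_energy(1.0), phonon_energy(2.0)
assert abs(E2 / E1 - 4.0) < 1e-3          # E proportional to T^2
assert abs(E1 - np.pi**2 / 6) < 1e-3      # matches the closed form at T = 1
```

Differentiating the T squared energy gives the heat capacity linear in T, exactly the universal statement for the chain.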
And what you're looking at, at low temperature, is a collective behavior that encompasses lots of particles moving collectively and together. And again, because of some kind of averaging that is going on over here, you don't really care about the interactions among individual particles. So it's the same idea. It's the same large N limit appearing in a different context. It's not the N of the whole system that becomes very large, but an n that becomes of the order of, let's say, 100 lattice spacings-- already much larger than an individual atom doing something, because it's a collection of atoms that are moving together. So what I drew here was an example of a mode. But I can imagine that I have some kind of a distortion in my system. Now, I started with the distortions Un that were defined at the level of each individual atom, or molecule, or variable that I have over here. But I know that things that are next to each other are more or less moving together. So what I can do is I can average. I can pick a distance-- let's call it dx-- and average all of those Un's that are within that distance, and find what that average is. And as I move this interval that I'm averaging over, I'm constructing a coarse grained function U of x. So there is a moving window along the chain, of width dx, which is much larger than a, but much less than this characteristic wavelength. And using that, I can construct a distortion field. I started with discrete variables, and I ended up with a continuous function. So this is an example of a statistical field. So this distortion appears to be defined continuously. But in fact, it has many fewer degrees of freedom, if you like, compared to all of the discrete variables that I started with. Because this continuous function certainly does not have, when I Fourier transform it, variations at short length scales. So we are going to be constructing a lot of these coarse grained statistical fields.
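The coarse-graining step can be illustrated in a few lines: average the discrete displacements over a moving window much wider than the lattice spacing, and check that the resulting field keeps its long-wavelength content while losing the short-wavelength half of the spectrum. A sketch with arbitrary choices of chain length, window width, and random seed:

```python
import numpy as np

rng = np.random.default_rng(1)
N, window = 1024, 32                 # sites, and coarse-graining width in lattice units
u = np.cumsum(rng.normal(size=N))    # a rough microscopic displacement pattern
u -= u.mean()

# Coarse grain: replace u_n by its average over a moving window of `window` sites
# (periodic boundaries, implemented as a circular convolution via the FFT).
w = np.zeros(N)
w[:window] = 1.0 / window
u_coarse = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(w)))

spec_fine = np.abs(np.fft.fft(u)) ** 2
spec_coarse = np.abs(np.fft.fft(u_coarse)) ** 2

# The highest-|k| half of the spectrum is strongly suppressed...
hi = slice(N // 4, 3 * N // 4)       # largest |k| modes in FFT ordering
assert spec_coarse[hi].sum() < 0.05 * spec_fine[hi].sum()
# ...while the longest-wavelength mode is essentially untouched.
assert spec_coarse[1] > 0.95 * spec_fine[1]
```

This is the sense in which the continuous field U of x has far fewer effective degrees of freedom than the original Un's: its Fourier content is confined to small wave numbers.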
If you think about the temperature in this room, varying from one location to another location, or the pressure and density variations that describe sound waves, et cetera-- all of these things are examples of a continuous field. But clearly, that continuous field comes from averaging things that exist at the microscopic level. So it's kind of counter-intuitive that I start with discrete variables, and I can replace them with some continuous function. But again, the emphasis is that this continuous function has a limited set of available wave numbers over which it is defined. OK. So we are going to describe the system in terms of this. So the analog of this potential that we have over here is some function V of this U of x. And I want to construct that function. And so the next step, after you have decided what your statistical field is, is to construct some relevant thing, such as a potential energy, that is appropriate to that statistical field, putting as limited an amount of information as possible into the construction of that. So what are the things that we are going to put in constructing this function? The first thing that I will do is I will assume that there is a kind of locality. By which I mean the following. While this is in principle a function of the entire function, locality means that I will write it as an integral of some density, where the density at location x that I'm integrating depends on U at that location. But not just U, also including derivatives of U. And you can see that this is really a continuum version of what I have written here. If I go to the continuum, this goes like a derivative. And if I look at further and further distances, I can construct higher and higher derivatives. So in the sense that this is a quite general description, I can construct any kind of potential in here by choosing interactions.
K1, K2, K3, K100 that go further and further apart-- you would say that if I include sufficiently high derivatives here, I can also include interactions that extend over far away distances. The idea of locality is that while you make this expansion, our hope is that at the end of the day, we can terminate this expansion without needing to go to many, many higher orders. So locality has two parts. One, that you will write it in this form. And secondly, that this function will not depend on many, many high derivatives. The second ingredient is symmetries. Now, one of the things that I constructed in here, and that ultimately was very relevant to the result that I had, was that if I take a distortion U of x and I add a constant to everybody-- so if I replace all of my Un's with Un plus 5, for example-- the energy does not change. So V of this is the same thing as V of U of x. So that's a symmetry. Essentially, it's this translational symmetry that I was mentioning right here at the beginning, that this only depends on the separation of two points. It's the same thing. But what that means is that when you write your density function, the density cannot depend on U of x itself. Because that would violate this. So you can only start with things that depend on dU by dx, the second derivative, et cetera. So this is 1. This is 2. Another thing is what I call stability. You are looking at distortions around a state that corresponds to a stable configuration of your system. What that means is that you cannot have any terms in this expansion that are linear. So again, this was implicit in everything that we did over here. We went to second order. But third order terms, et cetera, are not ruled out. It is more than that. Because you require the second order terms to have the right sign, so that your system corresponds to being at the bottom of a quadratic potential rather than the top of it. So there is a bit more than the absence of linear terms.
So given that, you would say that your potential for this system, as a function now of this distortion, is something like an integral over x. And the first thing that is consistent with everything we have written so far is a term proportional to du by dx squared. So there's a coefficient that I can put here. Let's call it K over 2. It cannot depend on U. It has to be a quadratic function of the derivative. That's the first thing I can write down. I can certainly write down something like d2u by dx to the fourth power. And if I consider higher order terms, why not something like the second derivative squared, the first derivative squared, a whole bunch of other things. So again, there are still many, many, many terms that I can write down. Yes? AUDIENCE: Is that second term supposed to be a second derivative to the fourth power? PROFESSOR: Yes. Thank you. So when I Fourier transform this, the quadratic part becomes sum over k, K over 2. This, Fourier transformed, becomes k squared. This, Fourier transformed, as you said, is the second derivative squared. So it becomes k to the fourth. I have a whole bunch of terms. And then I have U of k squared. And then I will have higher order terms from the Fourier transform of this. Yes? AUDIENCE: Does this actually forbid odd derivatives? Are you saying the third derivative and stuff don't-- PROFESSOR: I didn't go into that, because that depends on some additional considerations, whether or not you have a mirror symmetry. If you have a mirror symmetry, you cannot have terms that are odd in x. Whether or not you have some condition on U and minus U may or may not forbid third order terms in dU by dx. So once I go beyond the quadratic level, I need to rely on some additional symmetry statement as to which additional terms I am allowed to write down. Yes? AUDIENCE: Also the coefficients could depend on x, right? PROFESSOR: OK.
So one of the things that I assumed was this symmetry, which means that every position in the crystal is the same as any other position. So here, if I break that and make the coefficient here different from over there, it amounts to the same thing-- the starting point was not a crystal. AUDIENCE: Shouldn't that be written as-- in the place where you wrote down the symmetry, it should be U of x plus c inside the parentheses? PROFESSOR: No. No. So look at this. So if I take Un and I replace Un with Un plus 5, essentially I take the entire lattice and move it by a distance. Actually, 5 was probably not good. 5.14. It's not just 5-- anything I can put over here, the energy will not change. OK? AUDIENCE: That must be different from adding to all the n's a constant displacement. PROFESSOR: n's are labels of your variables. So I don't know what you mean by-- AUDIENCE: OK, you're right. But in the picture where instead of n's, we have x? PROFESSOR: Yes. AUDIENCE: It seems like displacing in space would mean adding to x. PROFESSOR: No. No. It is this displacement. I take U1, U2, U3, U4. U1 becomes U1 plus 0.3. U2 becomes U2 plus 0.3. Everybody moves in step. AUDIENCE: So the conclusion is the coefficients don't depend on x? PROFESSOR: If you have a system that is uniform-- so this statement here actually depends on uniformity. This is an additional thing, uniformity. So one part of the material is the same as another part. Now, there are non-uniform systems. So you take your crystal and you bombard it with neutrons or whatever. Then you have defects all over the place. Then one location will be different from another location. You are not able to write that anymore. So uniformity is another symmetry that I implicitly used. Yes? AUDIENCE: Since uniformity is a separate assumption, why isn't it implied by translational symmetry? PROFESSOR: If I take this material that I neutron bombarded, and I translate it in space, its internal energy will still not change, right? AUDIENCE: OK.
OK. PROFESSOR: So again, once I come to this stage, what it amounts to is that I have constructed a kind of energy as a function of a deformation field, which in the limit of very long wavelengths has this very simple form, which is the integral of du by dx squared. There are higher order terms. But hopefully in the limit of long wavelengths, the higher derivatives will disappear. In the limit of small deformations, the higher order terms will disappear. So the lowest order term at long wavelengths, et cetera, is parametrized by this one K. If I Fourier transform it, I will just get K over 2, k squared. When I take the frequency that corresponds to that, I will get that kind of behavior. So by relying on these kinds of statements about symmetry, et cetera, I was able to guess that. Now, let's go and do this for the case of a material in three dimensions. Actually, in any dimension-- in higher dimensions. So I take a solid in three dimensions, or maybe a film in two dimensions. I can still describe its deformations at low enough temperature in terms of long wavelength modes. I do the coarse graining. I have U of x. Actually, both U and x are now vectors. And what I want to do is to construct a potential that corresponds to this U. OK? So I will use the idea of locality. And I write it as an integral over however many dimensions I have-- so d is the dimensionality of space-- of some kind of an energy density. And the energy density now will depend on U. Actually, U has many components, U alpha of x. Derivatives of U-- so I will have dU alpha by dx beta. And you can see that for higher order derivatives, there are really more and more indices. So the complication now is that I have additional indices involved. The symmetry that I will use is a slightly more complicated version of what I had before. I can take the crystal, U of x, and I can translate it just as I did before. And I just say that this translation of the crystal does not change its energy.
But you know something else? I can take my crystal, and I can rotate it. The internal energy should not change. So in addition to the translation, there is a rotation that you can also apply-- before or after adding the constant, it doesn't matter. The energy should not depend on that. OK. So let's see what we have to construct. I can write down the answer. So first of all, we know immediately that the energy cannot depend on U itself, for the same reason as before. It can depend on derivatives. But this combination of rotations and derivatives is a little bit strange. So I'm going to use a trick. If I do this in Fourier space, like I did over here, I went from this to K over 2, integral dk, k squared, U tilde of k squared. If I stick with sufficiently low derivatives, only at the level of the second order derivatives, if I have a second order form that depends on something like this, I can still go to Fourier space. And the answer will be of the form integral d dk. The different k modes will only get coupled through higher order terms, third order terms, et cetera. At the level of the quadratic, I know that the answer is proportional to integral d dk. And for all of the reasons that we have been discussing so far, the answer is going to be U of k squared times some function of k, like k squared, k to the fourth. Now, whatever I put over here has to be invariant under rotations. So, let's see. I know that the answer that I write here should be quadratic in U tilde. It should be at least quadratic in k, because I'm looking at derivatives. In the same way that I had k here, I should have factors of k here. But k is a vector when I go to three dimensions. U becomes a vector when I go to three dimensions. So I want to construct something that is quadratic in the vector k, quadratic in the vector U, and is also invariant under rotations. One thing that I know is that if I take a dot product of two vectors, that dot product is invariant under rotations. So I have two vectors.
So I know, therefore, that k squared-- k dot k-- is a rotational invariant. k squared times U tilde of k squared is rotationally invariant. But also, k dot U tilde of k, squared, is rotationally invariant. OK? So what I can do is I can say that the most general form that I will write down will allow two terms. The coefficients are traditionally called-- this is mu over 2. This one is called mu plus lambda over 2. Actually, I have to put an absolute value squared here. So that's the most general theory of elasticity in any number of dimensions that is consistent with this symmetry that I have here. And it turns out that this corresponds to the elasticity of materials that are isotropic. And they are described by two elastic coefficients, mu and lambda, that are called the Lame coefficients. Mu is also related to the shear modulus. Actually, mu and lambda combined are related to the bulk modulus. And if I want, in fact, to Fourier transform this back to real space, in real space this can be written as mu over 2, u alpha beta of x, u alpha beta of x, where the sum over alpha and beta takes place-- alpha and beta run from 1 to d. And the other term is lambda over 2, u alpha alpha of x, u beta beta of x. And this object, u alpha beta, is one half of the symmetrized derivatives, du alpha by dx beta plus du beta by dx alpha. And it's called the strain tensor. AUDIENCE: Question. PROFESSOR: Yes? AUDIENCE: So are you still looking in the regime of low energy excitations? PROFESSOR: Yes. That's right. AUDIENCE: So wouldn't the discreteness of the allowed wave vectors become important? And if so, why are you integrating rather than doing a discrete sum? PROFESSOR: OK. So let's go back to what we have over here. The discreteness is present over here. And the spacing that I have between these discrete objects is 2 pi over L, where L is the size of the system.
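For reference, the two forms being dictated here fit together as follows. I am using the standard continuum-elasticity normalization; whether a factor of mu or mu over 2 multiplies the strain-squared term is a matter of convention, so treat the overall coefficients as illustrative rather than as exactly what was on the board:

```latex
\beta H \;=\; \frac{1}{2}\int \frac{d^{d}k}{(2\pi)^{d}}
\left[\,\mu\, k^{2}\,\bigl|\tilde{u}(\mathbf{k})\bigr|^{2}
\;+\;(\mu+\lambda)\,\bigl|\mathbf{k}\cdot\tilde{u}(\mathbf{k})\bigr|^{2}\right]
\;=\;\int d^{d}x
\left[\,\mu\, u_{\alpha\beta}\,u_{\alpha\beta}
\;+\;\frac{\lambda}{2}\, u_{\alpha\alpha}\,u_{\beta\beta}\right],
\qquad
u_{\alpha\beta} \;=\; \frac{1}{2}\!\left(
\frac{\partial u_{\alpha}}{\partial x_{\beta}}
+\frac{\partial u_{\beta}}{\partial x_{\alpha}}\right).
```

Here mu and lambda are the two Lame coefficients, and the strain tensor u alpha beta is the symmetrized derivative defined above.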
So if you like, what we are looking at here is a hierarchy of length scales, where L is much larger than the typical wavelengths of these excitations that are set by the temperature, which is in turn much larger than the lattice spacing. And so when we are talking about, say, a solid at around 100 degrees temperature or so, this wavelength typically spans 10 to 100 atoms, whereas the actual size of the system spans billions of atoms or more. And so the separations that are imposed by the discreteness of k are irrelevant to the considerations that we have. Yes? AUDIENCE: So before, with this adding a constant c, that corresponds to translating the whole crystal by some vector. PROFESSOR: Right. AUDIENCE: For the rotation, is this a rotation of the crystal, or is this a rotation of the displacement field? PROFESSOR: It's a rotation of the entire crystal. So you can see that essentially both x and U have to be rotated together. I didn't write it precisely enough. But when I wrote the invariant as being k dot U, the implicit thing was that the wave vector and the distortion are rotated together. AUDIENCE: So does it require an isotropic crystal in that case? PROFESSOR: Yes. AUDIENCE: I would think if you're rotating everything together, who cares if one axis is different than another? Because if I have a non-isotropic crystal and I rotate it around, it shouldn't change the internal energy. PROFESSOR: OK. Where it will make a difference is at higher order terms. And so then I have to think about the invariants that are possible at the level of higher order terms. But that's a good question. Let me come back and try to answer that more carefully next time around.
MIT 8.334 Statistical Mechanics II, Spring 2014. Lecture 16: Series Expansions, Part 2. The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, let's start. So looking for a way to understand the universality of phase transitions, we arrived at the simplest model that should capture some of that. That was the Ising model, where at each site of a lattice-- and for a while I will be talking in this lecture about the square lattice-- you put a binary variable. Let's call it sigma i, that takes two values, minus or plus 1, on each of the N sites. So we have a total of 2 to the N possible configurations. And we subject that to an energy cost that tries to make nearest neighbors be the same. So this symbol stands for sum over all pairs of nearest neighbors on whatever lattice you have-- here the square lattice. And the tendency for them to be parallel as opposed to anti-parallel is captured through this dimensionless energy, divided by kT, parameter K. So then in principle, as we change K-- we could also potentially add a magnetic field-- there could be a phase transition in the system. And that should be captured by looking at the partition function, which is obtained by summing over the 2 to the N possibilities of this weight that is e to the sum over ij, K sigma i sigma j. So that's easier said than done. And the question was, well, how can we proceed with this? And last lecture, we suggested two routes for looking at this system. One of them was to start by looking at a low-temperature expansion. And here we would start with one of two possible ground states. Let's say all of the spins could, for example, be pointing in the plus direction.
That is certainly the largest contribution that you would have to the partition function at 0 temperature. And that contribution is all of the bonds being satisfied, each one of them giving a factor of e to the K. Let's say we are on a square lattice. On a square lattice, each site has two bonds associated with it. So this will go like 2N. Since we have N sites, we will have 2N bonds. There are actually, of course, two possibilities. We can have either all of them plus or all of them minus. So there is a kind of trivial degeneracy of 2, which doesn't really make too much of a difference. And then we can start looking at excitations around this. And so we said that the first type of excitation is, somewhere on the lattice we make one of these pluses into a minus. And once we do that, we have made 4 bonds that go out of this minus site unhappy. So the cost of going from plus K to minus K, which is a factor of e to the minus 2K, in this case is repeated four times. And this particular excitation can be placed at any one of N locations on the lattice. So this was this kind of excitation. And then we could go and have the next excitation, where two adjacent spins are flipped. And we would have a situation such as this. And since this dimer can point in the x- or the y-direction on the square lattice, it has a degeneracy of 2. I have e to the minus 2K, and how many bonds have I broken? One, two, three, four, five, six. Times 6. And I can go on. So this was a situation such as this. A general term in the series would correspond to creating some kind of an island of minus in this sea of pluses. And the contribution would be e to the minus 2K raised to the perimeter of this island. So that would be a way to calculate the low-temperature expansion we discussed last time around. We also said that I could do a high temperature expansion. And for this, we used a trick. We said I can write the partition function as a sum over all these 2 to the N configurations. Yes.
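These first few low-temperature terms can be checked by brute force on a small periodic square lattice. The 4x4 size and the value of K below are my own choices; at large K the single-flip term, N e to the minus 8K, dominates the correction to the ground-state contribution:

```python
import numpy as np

L, K = 4, 2.0                  # 4x4 periodic square lattice, low temperature (large K)
N = L * L

# All 2^16 spin configurations as +/-1 arrays of shape (2^N, L, L).
configs = ((np.arange(2**N)[:, None] >> np.arange(N)) & 1) * 2 - 1
s = configs.reshape(-1, L, L)

# Sum of sigma_i sigma_j over nearest-neighbor bonds (periodic boundaries).
bond_sum = (s * np.roll(s, 1, axis=1)).sum(axis=(1, 2)) \
         + (s * np.roll(s, 1, axis=2)).sum(axis=(1, 2))
Z = np.exp(K * bond_sum).sum()

# Low-temperature series: Z ~ 2 e^{2NK} [1 + N e^{-8K} + 2N e^{-12K} + ...]
Z0 = 2 * np.exp(2 * N * K)         # the two fully aligned ground states
correction = Z / Z0 - 1.0
first_term = N * np.exp(-8 * K)    # one flipped spin: 4 broken bonds, N locations

assert abs(correction / first_term - 1) < 0.01
```

At K = 2 the dimer term, 2N e to the minus 12K, is smaller than the single-flip term by a factor of 2 e to the minus 8, which is why the leading term alone already matches to better than a percent.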
AUDIENCE: Is there a reason we don't have separate islands? PROFESSOR: Oh, we do. We do. So in general, in this picture I could have multiple islands. Yes. And what would be interesting is certainly when I take the log of Z. Then for the log of Z, here I would have NK, I would have log 2, and then there would be a bunch of terms. And what we saw was that those bunches of terms, starting with that one, can be captured into a series where the N comes out front and the terms in the series are functions of e to the minus 2K. And indeed, when we exponentiate this, this would have single islands only. The exponential will have the multiple islands that we have for the partition function. So these terms, as we will see, are higher powers of N, because you would be able to move them independently in different directions. When you take the log, only terms that are linear in N can survive. So this sum over spin configurations I can write as a product over bonds of the factor e to the K sigma i sigma j, which we saw that we could capture slightly differently. So e to the K sigma i sigma j I can write as hyperbolic cosine of K, times 1 plus hyperbolic tanh of K, sigma i sigma j. And this t is going to be my symbol for hyperbolic tanh of K, so that I don't have to repeat it all over the place. So this is just a rewriting of that exponential, recognizing that it has two possibilities. We took the cosh K to the outside. And if I'm, again, on the square lattice, I have 2N bonds, so there will be 2N factors of cosh K that I will take to the outside. And then I would have a series, which would be these terms that I can start expanding in powers of t. At the lowest order I have 1. And then we discussed what kinds of terms are allowed. We saw that if I take just one factor of t sigma i sigma j, then I have a sigma i sitting here and a sigma j sitting here. I sum over the two possibilities of sigma being plus or minus. It will give me 0.
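The rewriting e to the K sigma i sigma j equals cosh K times (1 plus t sigma i sigma j), with t = tanh K, holds because sigma i sigma j only takes the values plus or minus 1, so it can be verified exhaustively (the K values below are arbitrary):

```python
import math

def lhs(K, si, sj):
    """The Boltzmann factor for one bond."""
    return math.exp(K * si * sj)

def rhs(K, si, sj):
    """The high-temperature-expansion form: cosh K (1 + tanh K * si * sj)."""
    return math.cosh(K) * (1 + math.tanh(K) * si * sj)

for K in (0.1, 0.5, 2.0):
    for si in (-1, 1):
        for sj in (-1, 1):
            assert abs(lhs(K, si, sj) - rhs(K, si, sj)) < 1e-12

# Summing a lone spin over +/-1 kills the term linear in t, as stated:
t = math.tanh(0.5)
assert sum(1 + t * s for s in (-1, 1)) == 2
```

This is why each bond contributes either 1 or a single line, and why any line ending at an unpaired spin sums to zero.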
So there is a contribution at order t if I expand that, but summing over sigma i and sigma j would give me 0. So the only choice that I have is that this sigma that is sitting out here I should square. And I can square it by putting a bond that connects this sigma and that sigma. This became sigma squared; I don't have to worry about that. Then I can complete this so that that's squared, and this so that that's squared. And so this was a diagram that contributed N times this quantity t to the fourth. Well, what's the next type of graph that I could draw? I could do something like this. And that is, again, something that I can orient along the x-direction or along the y-direction. So there is a factor of 2 from the orientation. And how many factors of t? It is 6. And so in general, what do I have? I will have to draw some kind of a graph on the lattice where at each site I either have 0, 2, or potentially 4 bonds. There is no difficulty with 4 bonds emanating from a site, because that is sigma to the fourth, which is 1. Basically, summing over the possibilities of sigma will then give me 2 to the number of sites. And these graphs basically will get a factor which is t to the number of bonds in the graph. And then again, you can see that here I could have multiple loops, just like we discussed over there. Multiple loops will go with larger factors of N. The thing that we are interested in is log Z, which is N log of 2 hyperbolic cosine squared K, and then I have to take the log of this expression, and I'll call it g of t. Yes. AUDIENCE: How did we get rid of [INAUDIBLE] bonds? PROFESSOR: OK. So this e to the K sigma i sigma j has two possible values. It is either e to the K or e to the minus K. And I can write it, therefore, as e to the K plus e to the minus K over 2, plus e to the K minus e to the minus K over 2 times sigma i sigma j, right? AUDIENCE: My question was-- PROFESSOR: Yeah, I know. And then, this became cosh K times 1 plus what I call t sigma i sigma j.
So when I draw my lattice, for the bond that goes between sites i and j, in the partition function I have this contribution. This contribution is completely equivalent to this. And this is two terms. The cosh K we took outside. The two terms are either 1 or 1 line. There isn't anything that is multiple lines. And as I said, you could instead make an expansion in powers of K. That would correspond to nothing, or going forward and backward, or going forward, backward, forward, et cetera. This term corresponds to going once. Essentially, this captures all of the things that bring you back to the same site, and this is a re-summation of everything that steps you forward. So everything that steps you forward and carries information from this site to that site has been appropriately taken care of, and it occurs once. And that's one reason. And again, if you have three lines coming into a particular site, it's a sigma i cubed, which sums to 0. So these are the only things that can happen. Now in principle, this is one diagrammatic series and this is another diagrammatic series. But you stare at them a little bit and you'll see why I put the same g for both of them. On the square lattice, they're identical series. That series had N times something to the fourth power, then 2N times something to the sixth power; this series has N times something to the fourth power, then 2N times something to the sixth power. The first two terms are identical. You can convince yourself that all the terms will be identical also, including something complicated, such as if I were to draw-- I don't know-- a diagram such as this one, which has 2 or 4 bonds per site. I can convert that to something that has spins plus out here, then a minus here, minus here, plus here. And it's a completely consistent diagram that I would have had in the low-temperature series. So you can convince yourself that there is a one-to-one correspondence between these two series. They are identical for the square lattice.
As we will discuss, this is a property of the square lattice. So you have this very nice symmetry: you conclude that the partition function per site-- the part that is interesting-- I can get either from here, as K plus this function of e to the minus 2K, or from the high-temperature series, as log of 2 hyperbolic cosine squared K plus exactly the same function of tanh K. This part of it, actually, I don't really care about, because these are analytical functions. I expect this model to have a phase transition. The low-temperature and the high-temperature behavior should be different. It should be ordered at low temperature, disordered at high temperature. There should be a phase transition and a singularity between those two cases. The singular part must be captured here. So these are the singular functions. So the singular part of the free energy has this interesting property, that you can evaluate it for some parameter K, which is large in the low-temperature phase, or some parameter t, which is small in the high-temperature phase. And the two will be related completely to each other. They are essentially the same thing. So this property is called duality. And so what I have said, first of all, is that there is a relationship between, say, the coupling tanh K and a coupling that I can separately call K tilde, such that if I evaluate the high-temperature series at K, it is like evaluating the low-temperature series at K tilde, where K tilde of K is minus 1/2 log of the hyperbolic tanh of K. I can plot for you what this function looks like. So this is K. This is what K tilde of K looks like. And it is something like this. Basically, strong coupling, or low temperature, gets mapped to weak coupling, or high temperature, and vice versa. So it's kind of like this, that there is this axis of the strength of K going from low temperature, strong, to high temperature, weak. And what I have shown is that if I start with somewhere out here, it is mapped to somewhere down here.
If I start from somewhere here, it would be mapped to somewhere here. So one question to ask is, well, OK, I start from here. I go here. If I put that value of K in here, do I go to a third point or do I come back here? And I will show you that it is, in fact, a mapping that goes both ways. And a way to show that is like this. Let me look at the hyperbolic sine of 2K. The hyperbolic sine of twice the angle is twice the hyperbolic sine of K times the hyperbolic cosine of K. All my answers are in terms of tanh K, so I can make this sinh into a tanh by dividing by a cosh, making this cosh squared. So that is 2 hyperbolic tanh of K times hyperbolic cosine squared of K. And then, there's various ways to remember the identity: hyperbolic cosine squared minus hyperbolic sine squared is 1. If I divide by cosh squared, it becomes 1 minus t squared is 1 over cosh squared. So the hyperbolic cosine squared here is the inverse of 1 minus hyperbolic tanh squared of K. Now, for tanh K we have this identity: it is e to the minus 2K tilde. So I have 2 e to the minus 2K tilde over 1 minus the square of that. If I multiply numerator and denominator by e to the plus 2K tilde, hopefully you recognize this as 1 over hyperbolic sine of 2K tilde. So the identity here, which was kind of not very transparent, takes this simple form once I make the change of variables to the hyperbolic sine of twice the angle. So then the symmetry between K and K tilde is immediately obvious. You pick one value of sinh 2K, and the dual is its inverse. And the inverse of the inverse-- you are back to where you were. So it is clear that this is kind of like an x to 1 over x mapping. An x to 1 over x mapping would also look exactly like this curve. In fact, if instead of K tilde versus K I had plotted hyperbolic sine of twice the angle versus hyperbolic sine of twice the angle, it would have been just the 1 over x curve. Now, just to give you another example, suppose I had the function f of x, which is x over 1 plus x squared.
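Both statements, that the duality map is an involution and that it is equivalent to sinh(2K) sinh(2K tilde) = 1, can be checked numerically. A small sketch (my own; the value of K is an arbitrary illustration):

```python
from math import log, sinh, tanh

def dual(K):
    # K~(K) = -(1/2) log tanh K, the Kramers-Wannier duality map
    return -0.5 * log(tanh(K))

K = 0.3                                      # illustrative coupling
Kt = dual(K)
round_trip = dual(Kt)                        # involution: should give back K
product_check = sinh(2 * K) * sinh(2 * Kt)   # should equal 1
```

The product form makes the two-way nature of the map manifest: it is symmetric under exchanging K and K tilde.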
This function, if I divide numerator and denominator by x squared, becomes x inverse over 1 plus x inverse squared. So this is, again, f of x inverse. So if I evaluate this function for any value like 5, then I know the value exactly for 1/5. If I evaluate it for 200, I know it for 1/200, and vice versa. Our g function is kind of like that. Now, this function, you can see, starts by increasing linearly with x, and then eventually it comes down like this, so it must have one maximum. Where is the maximum? It has to be at 1. I don't have to take derivatives or anything: if there is one point which corresponds to the maximum, it's the point that maps to itself. Now, this function for the Ising model-- I know it has a phase transition. Or, I guess it has a phase transition. So there is one point, hopefully one point, at which it becomes singular. I don't know, maybe it's three points. But let's say it's one point at which it becomes singular. Then I should be able to locate its singularity by precisely the same argument. The function that corresponds to x going to 1 over x is this hyperbolic sine. So if there is a unique point that corresponds to the singularity, it has to be the point that is self-dual-- that maps onto itself, just like 1. So sinh of 2 Kc should be 1. And what is this? The hyperbolic sine we can write as e to the 2 Kc minus e to the minus 2 Kc, over 2. We can manipulate this equation slightly to e to the 4 Kc minus 2 e to the 2 Kc minus 1 equals 0, which is a quadratic equation for e to the 2 Kc. So I can immediately solve: e to the 2 Kc is 1 plus or minus the square root of 1 plus 1, that is, 1 plus or minus square root of 2. The exponential better be positive, so I can't pick the negative solution. And so we know-- AUDIENCE: Isn't it [INAUDIBLE]? PROFESSOR: Where did I-- OK. Multiply by e to the 2 Kc. AUDIENCE: Taking that as correct, I didn't check it-- PROFESSOR: Yes. OK. x squared minus 2bx plus c equals 0.
Well, actually, this is-- so x is b plus or minus the square root of b squared minus c. Our c is negative, so under the square root it's 1 plus 1. So the critical coupling that we have is 1/2 log of 1 plus square root of 2. So we know that the critical point of the Ising model occurs at this value, which you can put in your calculator; it is about 0.44. There is one assumption, of course: that there is essentially only one singularity in this free energy. But if that is the case, we have solved for it exactly. So, to emphasize, this idea of duality was discovered in the early 1940s by Kramers and Wannier. And suddenly, you had an exact result for something like the square-lattice Ising model. Question is, how much information does it give you? First of all, the property of self-duality is that of the square lattice. So if I had done this on the triangular lattice, you would have seen that the low-temperature and high-temperature expansions don't match. It turns out that in order to construct the dual of any lattice, what you have to do is to put, let's say, points in the centers of the units that you have and see what lattice these centers make. So when you try to do that for the triangular lattice, you will see that the centers form, actually, a hexagonal lattice, and vice versa. However, there is a trick using duality by which you can still calculate the critical points of the hexagonal and triangular lattices. And that you will do in one of the problem sets. OK, secondly. Again, that trick allows you to go beyond the square lattice, but it turns out that, for reasons that we will see shortly, it is limited. You can only construct dualities of this kind for two-dimensional lattices. And what these kinds of mappings in general give for two-dimensional lattices is potentially, but not always, the critical value of Kc. And again, one of the things that you will see is that you can do this for other models in two dimensions.
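The self-dual point can be verified numerically, along with the toy example of a self-dual function. A minimal sketch (my own):

```python
from math import log, sinh, sqrt, tanh

# Self-dual point of the square-lattice Ising model: sinh(2 Kc) = 1,
# i.e. Kc = (1/2) log(1 + sqrt(2)) ~ 0.4407.
Kc = 0.5 * log(1 + sqrt(2))
self_dual_check = sinh(2 * Kc)          # should be exactly 1
dual_of_Kc = -0.5 * log(tanh(Kc))       # K~(Kc) should equal Kc

# Toy self-dual function from the lecture: f(x) = x/(1+x^2) = f(1/x),
# so its unique maximum sits at the fixed point x = 1.
f = lambda x: x / (1 + x**2)
```

As in the lecture's argument, the unique fixed point of the x to 1/x map pins down both the maximum of f and, assuming a single singularity, the critical coupling.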
For example, for the Potts model we can calculate the critical point through this kind of procedure. However, it doesn't tell you anything about the nature of the singularity. So essentially, what we've shown is that on the K-axis there is some point that locates the singularity you are going to have. But the shape of this singularity-- the exponent-- can be anything. And this mapping does not tell you anything about that. It does tell you one thing. We also mentioned that the ratio of amplitudes above and below the transition for various singular quantities is something that is universal. Because of these mappings from high temperatures to low temperatures, although I don't know what the nature of the singularity is, I know that the amplitude ratio is [INAUDIBLE]. So there is some universal information that one gains beyond the non-universal location of the critical point, but not that much more. OK. Any questions? AUDIENCE: Is it possible to extract from this relation a differential equation for g? PROFESSOR: Yes. And indeed, that differential relation you will use in one of the problem sets that I forgot to mention, and it will be used to derive the value of the derivative, which is related to the energy of the system at the critical point. But you are right. This is such a beautiful thing that maybe we can try to force it to work in higher dimensions. So let's see, if we were to try this approach for the 3D Ising model, what would happen. So what did we do? We wrote the low-temperature series and the high-temperature series and compared them. Again, let's do the cubic lattice, which I will not really attempt to draw. That's the system for which we want to calculate the partition function. So let's do the low-T series. Our partition function is going to start with the state where every spin is, let's say, up. Three bonds per site on the cubic lattice, so it's e to the 3NK. Again, the trivial degeneracy of 2 for the two possible all-plus or all-minus states. The first excitation is to flip a spin.
So any one of N sites could have been flipped. The flipped spin sits in a little cube; there are essentially 6 bonds going out of it that are broken. So basically, there is this minus that is in a box surrounded by pluses, with 6 plus-minus bonds that go out of it: N times e to the minus 2K times 6. The next term would be when we have 2 adjacent minuses. That pair can be oriented three ways in three dimensions, so 3N. And the cost: each of the two flipped sites has 4 broken side bonds, plus the 2 broken end bonds-- 8 plus 2, so e to the minus 2K times 10. And so for the general term in the series, I have to draw some droplet of minuses in a sea of pluses, and I would have e to the minus 2K times the boundary, or the area, of this droplet. Actually, droplets, because there could be multiple droplets, as we've seen. There's no problem with that. If I do the high-T series, following exactly the procedure I had described before, the partition function is going to be 2 to the number of sites, cosh K to the power of the number of bonds-- and there are 3 bonds per site, so there's 3N there. And then we start to draw our diagrams. The first diagram is just exactly like what we had before: I have to make a square. And this square can be placed in any of the 3 equivalent plaquette orientations at each site, so 3N of those, with t to the fourth. The next type of diagram that I can draw has 6 bonds in it. So this could be an example of that. And if you do the counting, there are 18N of those. And so you go. And the generic term in the series is going to be some kind of a loop-- again, an even number of bonds per site is the operative condition-- and then I have t to the power of the number of bonds making this closed loop. Or loops. So you stare at the series and you see immediately that there is no correspondence like we saw before. The coefficients here are N and 3N; there, they are 3N and 18N. The powers of e to the minus 2K here are 6 and 10; the powers of t there are 4 and 6. There's no correspondence between these two. So there is nothing that one could say. But you say, I really like this. So maybe I'll phrase the question differently.
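The broken-bond counts for the first two low-temperature excitations on the cubic lattice can be checked mechanically. A small sketch (my own): a bond is broken iff exactly one of its endpoints is flipped, so the count is the boundary "area" of the flipped cluster.

```python
# Unit steps to the six nearest neighbors on the cubic lattice Z^3.
NEIGHBORS = ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1))

def broken_bonds(flipped):
    """Broken bonds for a set of flipped sites in an all-plus background."""
    flipped = set(flipped)
    count = 0
    for (x, y, z) in flipped:
        for dx, dy, dz in NEIGHBORS:
            if (x + dx, y + dy, z + dz) not in flipped:
                count += 1
    return count

one_flip = broken_bonds([(0, 0, 0)])               # single minus spin: 6
two_flips = broken_bonds([(0, 0, 0), (1, 0, 0)])   # adjacent pair: 4+4+2 = 10
```

The same function gives the boundary area of any droplet, which is exactly the exponent of e to the minus 2K in the general term.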
Can I consider some other model whose high-temperature expansion reproduces this low-temperature expansion of the Ising model? So this is the question: can we find a model whose high-T expansion reproduces the low-T expansion of the 3D Ising model? So rather than knowing what the model is, now we are going to work backward from this graphical picture that we have. So what would have been the analogous thing over here? Let's say that I had this picture of droplets in the 2D Ising model. I recognize that I need to make these perimeters out of something. And I know that I can join these things together through a procedure such as the one that we have over here. But the unit there was the bond elements that I had along the perimeter. What is the corresponding unit that I have for the low-temperature series of the 3D Ising model? I have e to the minus 2K to the power of the number of faces. So the first thing is, the unit has to be a face. So basically, what I need to do is to have a series which is an expansion in terms of faces, and then somehow I can glue these faces together, like I glued these bonds together. So we found our unit. The next thing that we need is some kind of a glue to put all of these LEGO faces together. So how did we join things together here? We had these sigmas that were sitting by themselves. And by putting two sigmas together, I ensured that when I summed over sigma, I had to glue two of the t's together. Can I do the same thing over here? If I put the sigmas on the corners of these faces, you can see it doesn't work, because here I have three. So I'm forced to put the sigmas on the lines that join the faces. So what I need to do is, therefore, to have a variable such as this, where I have these sigmas sitting on the-- let's call this a plaquette, p. And this plaquette will have around it four different bonds. And if I take the product of these four bond variables-- again, these sigmas being plus or minus 1-- I am forced to glue these sigmas in pairs.
And I can join these squares together to make whatever shape I like that corresponds to the shapes that I have over there. So what I need to do is to have for each face a factor of this. So this is the analog of this factor that I have over here. And then what I need to do is to take a product over all plaquettes, and I sum over all sigma tildes. And this would be the partition function of some other system. In this other system, you can see that if I make its expansion, there will be a one-to-one correspondence between the terms in the expansion of this partition function and the 3D Ising model partition function. Again, this kind of term we have seen. If I had put factors of cosh here, which don't really do much, I can re-express this as e to the K tilde sigma 1p sigma 2p sigma 3p sigma 4p. Essentially, every time you see 1 plus t times some binary variable, you can rewrite it in this fashion. So what we have come up with is the following. In order to construct the dual of the three-dimensional Ising model, what you do is you go all over your cubic lattice, and on each bond of it you put a variable that is plus or minus 1. So previously, for the Ising model, the variables were sitting on the sites. So for the Ising model these were site variables, whereas for this dual Ising model these are bond variables that are plus or minus 1. In the case of the Ising model, the interactions were the product of the two site variables on a bond, whereas for the dual Ising model, the interactions are around a face-- there's four of them that go around the face. But whatever this new theory is, we know that its free energy, because of this relation, is related to the free energy of the three-dimensional Ising model. Also, we know that the three-dimensional Ising model has a phase transition between a disordered phase and a magnetized phase at low temperature. There is a singularity. As I scan the parameter K of the three-dimensional Ising model, there is a Kc.
Now, I cannot find out what that Kc is, because I don't have self-duality. But I know that as I scan the parameter K of the Ising model, I'm also scanning the parameter K tilde of this new theory. And since the original model has a phase transition, this new model must also have a phase transition. So there exists a Kc for both models. You say, OK. Fine. But there is some complicated kind of Ising model that you have devised and it has a phase transition. What's the big deal? Well, the big deal is that this model is not supposed to have a phase transition, because it has a different type of symmetry. The symmetry that we have for the Ising model is a global symmetry. That is, the energy of a particular state is the energy of the state in which all of the spins are reversed, because the form of the energy is bilinear. If I take all of the sigmas from one configuration and make them minus in that configuration, the energy will not change. But I have to do that globally. It's a global symmetry. Now, this model has a local symmetry. Because what I can do is I can pick one of the sites. And out of this site, there are six of these bonds that are going out, on each of which there is one of these sigma tildes. If I pick this site and I change the sign of all of these six that emanate from this site, the energy will not change. Because the energy gets contributions from faces, and you can see that for any one of the faces, there are two sigmas that have changed. So the energy, which is the product of all four of them, has not changed. So this model has a different form of symmetry, which is a local symmetry. And in fact, it is very much related to gauge theories. It's a kind of discrete version of the gauge theories that you have seen in electromagnetism. Since there are two possibilities, it's sometimes called a Z2 gauge theory. Now, the thing about the gauge theories is that there is a theorem-- Elitzur's theorem-- which states that these local symmetries, gauge symmetries, cannot be spontaneously broken.
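The local symmetry can be demonstrated explicitly on the smallest example, a single cube with a bond spin on each of its 12 edges and a plaquette product on each of its 6 faces. The sketch below (my own construction; on an open cube a vertex has 3 incident bonds rather than 6, but the argument is identical since each face still contains exactly two of them) flips every bond at one vertex and checks that no plaquette product changes:

```python
from itertools import product
import random

# Vertices and edges of a unit cube; edges connect vertices differing in one coordinate.
verts = list(product((0, 1), repeat=3))
edges = [(u, v) for u in verts for v in verts
         if u < v and sum(a != b for a, b in zip(u, v)) == 1]
# The 6 faces: fix one coordinate, keep the 4 edges lying in that face.
faces = [[e for e in edges if e[0][axis] == val and e[1][axis] == val]
         for axis in range(3) for val in (0, 1)]

def plaquette_products(sigma):
    out = []
    for f in faces:
        p = 1
        for e in f:
            p *= sigma[e]
        out.append(p)
    return out

random.seed(0)
sigma = {e: random.choice((-1, 1)) for e in edges}
before = plaquette_products(sigma)

# Gauge transformation at one vertex: flip every bond spin touching it.
v0 = (0, 0, 0)
flipped = {e: (-s if v0 in e else s) for e, s in sigma.items()}
after = plaquette_products(flipped)
invariant = (before == after)
```

Each face either avoids the vertex entirely or contains exactly two of its incident bonds, so every plaquette product is unchanged, exactly as argued above.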
So for the case of the Ising model, we have this symmetry between sigma going to minus sigma. But yet, we know that if I go to low temperature, I will have a state in which globally all of the spins are either plus or minus. So there is a symmetry-broken state, which is what we have been discussing. Now, the reason that that cannot take place in these gauge theories-- I will just sketch what is happening. Essentially, we have been thinking of these broken symmetries by putting on an infinitesimal magnetic field. And we saw that, basically, if I'm at temperatures-- 1 over K's-- that are below some critical value, then if h is plus, everybody would be plus; if h is minus, everybody would be minus. And the reason that, as you approach h goes to 0 from one side, you don't get an average of 0 is because the difference between the energy of this state and that state as you go to 0 temperature is proportional to N times h. So although h is going to 0, with N being very large, the influence of the infinitesimal h is magnified enormously. Now, for the case of these local gauge theories, you cannot have a similar argument. Because if I pick this spin-- let's say one of these bond spins-- what is its average as h goes to 0? Well, the difference between a state in which it is plus and the state in which it is minus is, in fact, of order 6h. Because all I need to do is to pick a site that is next to that bond and flip all of the bond spins that emanate from that site. All of the K's are equally satisfied. The difference between that state and the one where there is a flip is just of order 6h. So that remains finite as h goes to 0. There is no barrier towards flipping those spins. So there is no broken symmetry in this system. And this can be proven very nicely and rigorously. So we now have two statements about this Ising version of a gauge theory. First of all, we know that at low temperatures, still, the average value of each bond spin is equally likely to be plus or minus.
From the perspective of local values of these bond spins, it is as disordered as the highest-temperature phase. Yet, because of its duality to the three-dimensional Ising model, we know that it undergoes some kind of a singularity going from high temperatures to low temperatures. So there is probably some kind of a phase transition, but it has to be very different from any of the phase transitions that we have discussed so far, because there is no spontaneous symmetry breaking. So what's going on? Now, later on in the course, we will see another example of this that is much less exotic than a gauge theory, but has the same kind of principle applicable to it. There will be a phase transition without local symmetry breaking in something like a superfluid in two dimensions. So one thing that that phase transition and this one have in common is, again, the lack of a local order parameter from symmetry breaking. And both of them share something that was pointed out by Wegner once this puzzle emerged, which is that one has to look at some kind of a global-- well, I shouldn't even call it global. It is something that is called a Wilson loop. So the idea is the following. We have these variables sigma tilde that are sitting on the bonds of a lattice. Now, the problem is that with these local transformations, I can very easily make any single sigma go to minus sigma. So that by itself is not a good thing to consider. However, what was the problem? Let's say I pick this site. All of the sigmas that went out of that site I changed to minus themselves. That was the gauge transformation, and this sigma became minus. But if I multiply this sigma with another sigma that goes out of that site, then I have cured that problem. If I change this to minus itself, this changes and this changes; the product remains invariant. But then I have the same problem at the other end. So what I do is I make a long loop. I look at the expectation value-- so this Wilson loop is the product of sigma tildes around a loop.
And what I can do is I can look at the average of that quantity. The average of that quantity is something that is clearly invariant under this kind of gauge transformation. So the signatures of a potential phase transition could potentially be revealed by looking at something like this. But clearly, that is a quantity that is also always going to be positive. So the thing that I am looking at is not that this quantity is, say, positive in one phase and 0 in the other phase. It is how this quantity depends on the shape and characteristics of the loop. So what I can do is I can calculate this average, both at high temperatures and at low temperatures, and compare them. So let's start with high temperatures-- the high-temperature expansion. So I want to calculate the expectation value of the product of sigma tilde i, where i belongs to some kind of a loop c. So c is all of these bonds. I want to calculate that expectation value. How do I calculate that expectation value? Well, I have to sum over all configurations of this product with a weight. What is my weight? My weight is this factor of the product over all plaquettes of 1 plus t sigma tilde sigma tilde sigma tilde sigma tilde for the plaquette. So this is the weight that I have. I can put the hyperbolic cosines in or leave them out; it doesn't matter. But then this weight has to be properly normalized. That means I have to divide by something in the denominator, which does not include the quantity that I am averaging. So the graphs that occur in the denominator are the things that we have been discussing. Essentially, I start with 1. The next term is to put these faces together to make a cube, and then more complicated closed shapes-- such that essentially every bond appears with some kind of a complement. Well, but what about the terms in the numerator? For the terms in the numerator, I have these factors of sigma tilde that are lying all around this loop.
And I'm summing over the two possible values. So in order that summing over these does not give me 0, I better make sure that there is a complement to each of them. The complement can only come from here. So for example, I would put a face over here that ensures that this one is squared and that one is squared. But then I have this one; so I will put another face here, and another one, et cetera. And you can see that the lowest-order term that I would get is this factor t raised to the power of the area of this loop c. You can ask about higher-order terms. You can build a hat on top of this, and this you can put anywhere along the area of this surface. So the next correction in this series, you can see, comes with an extra factor of t to the fourth times the area. The point is that as you add more and more terms, you preserve the structure that the whole thing is going to be proportional to the area of the loop times some function of this parameter t. So what we know is that if I take the expectation value of this entity, then its logarithm will be proportional, at high temperatures, to the area of this loop. Now, what happens if I try to do a low-T series for the same quantity? I have to start with a configuration at low temperature that minimizes the energy. One configuration, clearly, is the one where all of the sigmas are plus. And that will give me a term. If I am calculating the partition function in the denominator, there will be a term that goes like e to the K tilde per face. There are 3 faces per site of the cubic lattice and N sites, so there will be 3 K tilde N for the configuration that is all plus. But this is not the only low-temperature configuration. That is what we were discussing. Because I can pick any site out of the N sites, and the 6 bonds that go out of it I can make minus themselves, and the energy would be exactly the same. So whereas for the Ising model I had a multiplicity of 2.
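The high-temperature graph counting for the Wilson loop can be checked exactly on the smallest possible example, a single open cube. On a cube, the loop around one face can be "capped" either by that face itself (one factor of t) or by the other five faces (t to the fifth), while the only closed surface is all six faces, so graph counting predicts the exact result (t + t^5)/(1 + t^6). The sketch below (my own; the coupling is an illustrative choice) verifies this by brute-force enumeration of all 2^12 bond configurations:

```python
from itertools import product
from math import exp, tanh

# Vertices, edges, and faces of a single cube.
verts = list(product((0, 1), repeat=3))
edges = [(u, v) for u in verts for v in verts
         if u < v and sum(a != b for a, b in zip(u, v)) == 1]
faces = [[e for e in edges if e[0][axis] == val and e[1][axis] == val]
         for axis in range(3) for val in (0, 1)]

K = 0.4                        # illustrative coupling
num = den = 0.0
for conf in product((-1, 1), repeat=len(edges)):
    sigma = dict(zip(edges, conf))
    B = []
    for f in faces:
        p = 1
        for e in f:
            p *= sigma[e]       # plaquette product around this face
        B.append(p)
    w = exp(K * sum(B))         # Boltzmann weight of this bond configuration
    den += w
    num += B[0] * w             # Wilson loop taken around face 0

t = tanh(K)
wilson = num / den
predicted = (t + t**5) / (1 + t**6)
```

At small t this is dominated by the single-plaquette cap, the t-to-the-area behavior in miniature.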
Here, there is a multiplicity of 2 to the N. So that's the lowest term in the low-temperature expansion of the partition function. I'm doing the partition function, which is the denominator, first. Then, what can I do? Let's say I start with a configuration where all of them are pluses. There are, of course, 2 to the N gauge copies of that. So whatever I do to this configuration, I can do the analog in all of the others. But let's keep the copy where all of the sigmas are plus, and then I flip one of the sigmas to minus. Then-- think of the lattice-- there is a bond that was plus and I made it minus. There are four faces going out of that bond that were previously plus K tilde and now become minus K tilde. So I will have e to the minus 2 K tilde times 4, because of these four faces that are going out. And the bond I can orient in the x-, y-, or z-direction, so there are 3N possibilities. And so I could have a series such as this in the denominator, where subsequent terms would put more and more minuses in this particular [INAUDIBLE]. Now, let's see how these series would affect the sum that I would have to do in order to calculate this expectation value. For any one of these configurations-- since I am looking at the ground state, let's say they're all pluses-- clearly the contribution to this product will be unity. That does not change. But now, let's think about the configurations in which one of the bonds is made to flip. As long as that bond does not touch any of the bonds that are part of the loop, the value of the loop will remain the same. So let's say that the loop has some number of bonds; let's call it Pc, the number of bonds in c. For the flips away from the loop, with this weight, the value of the Wilson loop would still be plus. But for the cases where I have picked one of the loop's own bonds to become minus, the product becomes minus. So for those Pc instances of this factor e to the minus 2 K tilde times 4, rather than having a plus I will have a minus. So you can see that-- what is this?
N should be up here. The difference between what is in the numerator and what is in the denominator of this low-temperature series has to do with the bonds that are sitting as part of the Wilson loop. And if I imagine that this is a small quantity and write these as exponentials, you can see that this is going to start with e to the minus 2 K tilde times 4 times the perimeter of this cluster-- of this loop. And you can go and look at higher and higher order terms. The point is that at high temperature, the property of the shape of the loop that determines this expectation value is its area, whereas in the low-temperature expansion, it is its perimeter. So you could, for example, calculate, for a large loop, the log of this quantity and divide it by the perimeter. In the low-temperature phase, it would be finite. In the high-temperature phase, it would go to 0, because the area scales faster than the perimeter. So we have found something that is an analog of an order parameter and can distinguish the different phases, and it is reflected in the way that the correlations take place. Let's try to think about some potential physics that could be related to this. Let's start with the gauge theory aspect of this. Well, the one gauge theory that you probably know is quantum electrodynamics, whose action you would write in the following way. The action would involve an integration over space as well as time. And by appropriate rescalings, you can write the energy that is in the electromagnetic field as the square of d mu A nu minus d nu A mu, where A is the 4-vector potential out of which you can construct the electric field and magnetic field. And the reason this is a gauge theory is because if I take A mu and add to it the derivative d mu of some function phi of x and t, you can see that the change here would be d mu d nu phi minus d nu d mu phi. There is really no change.
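For reference, the action described in words here can be set down explicitly. Conventions (the sign of the charge, the metric signature) vary between texts; the following is one standard form, written as a sketch rather than as the lecturer's own board notation:

```latex
S \;=\; \int d^3x\,dt\;\Big[\, -\tfrac{1}{4}\,F_{\mu\nu}F^{\mu\nu}
      \;+\; \bar\psi\big(\, i\gamma^\mu(\partial_\mu + i e A_\mu) - m \,\big)\psi \,\Big],
\qquad
F_{\mu\nu} \;=\; \partial_\mu A_\nu - \partial_\nu A_\mu .
```

Under the gauge transformation \(A_\mu \to A_\mu + \partial_\mu \varphi\), \(\psi \to e^{-ie\varphi}\psi\), the field strength \(F_{\mu\nu}\) is unchanged because the mixed partial derivatives of \(\varphi\) cancel, and the shift of \(A_\mu\) inside the covariant derivative is compensated by the derivative acting on the phase factor, so \(S\) is invariant.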
And we know that basically you can choose whatever value of this gauge function phi you like-- this is gauge fixing. Now, this is the electromagnetic field by itself. If you want to couple it to something like matter or electrons, you write something like i d-slash, which is some derivative, minus e A-slash; if there's a mass to this object, you would put it here; and the whole thing would be sandwiched as psi bar, dot dot dot, psi. This would be something that describes the coupling of this electromagnetic field to some charged particle, such as the electron. And this entire thing satisfies the gauge symmetry provided that, once you shift A, you also replace psi with e to the ie phi psi. With the same phi appearing in both, the change in A that you would have from here will be compensated by the change that you would get from the derivative acting on this phase factor. And so the whole thing is not affected. What we have constructed in this model is a kind of Ising analog of this theory. Because the Hamiltonian that we have, which after exponentiation carries the weight of the different configurations, has a part which is the sum over all of the plaquettes of sigma tilde sigma tilde sigma tilde sigma tilde, the four bonds around the plaquette. We could put some kind of a coupling constant here if we want to. And the analog of this transformation that we have-- well, maybe it will become more apparent if I add the next term, which is the analog of the coupling to matter: a sum over bonds of s i, sigma tilde ij, s j. So again, imagine that we have our cubic lattice, or some other lattice, in which we have these variables sigma tilde that are sitting on the bonds. The first term is the product around the face. And for the second term I put these variables s that are sitting on the sites, and I have made a coupling between neighboring s's. So if the sigma tildes were not there, I could make an Ising model with s's being plus or minus, which are coupled across nearest neighbors.
What I do is that the strength of that coupling I make to be plus or minus, depending on the value of the gauge field, if you like-- this Ising gauge field that is sitting over there. Now, the analog of the symmetries that we have for QED is as follows. I can pick a particular s i and change its value to minus, and I can pick all of the sigma tildes that go out of that site i to the neighbors and simultaneously make them minus. And then this energy would not change. First of all, if I pick this site and flip its sign from plus to minus, then the sigma tildes that go out of it change their values to minus themselves. Since each one of the plaquettes contains either none or two of them, the value of the energy from the plaquette term is not changed. And since the couplings to the neighboring s's involve the sigma tildes that sit between them, and I have flipped both s and sigma tilde, those do not change either, irrespective of what I do with the signs of all the other ones. So I have made an Ising, or binary, version of this transformation-- constructed a model that has, albeit with an Ising symmetry, a lot of the properties that you would have for this kind of action. Now again, continuing with that, this difference between whether we see an area rule or a perimeter rule has some physical consequence that is worth mentioning. And this, again, has not much to do with the main thrust of this course, but as a matter of overall education, it is useful to know. So in this picture, where one of the dimensions corresponds to time, imagine that you create a kind of Wilson loop which is very long in one direction that I want to think of as being time. And the analog of this action that we have discussed is to create a pair of charges, separate them by a distance x, propagate them for a long time t, and then bring them back together. And ask, what is the contribution of a configuration such as this to the action on average?
And so you would say that if particles are at a distance x, they are subject to some kind of a potential V of x. And if this potential has been propagated for a duration that I will call T, the effect that it has on the system is to have an interaction such as this in the action, propagated over a time T. So I should have something like this. So this should somehow be related to this average of the Wilson loop, in a manner not to be made precise here-- very rough, just to get the general idea. And what we have said is that the value of this Wilson loop has different behaviors at high and low temperature in its dependence on shape. At high temperatures, it is proportional to the area, so it should be proportional to x times T. Whereas at low temperature, it is proportional to the perimeter, so it should be proportional to x plus T. So if I read off the form of V of x from these two dependences-- once I divide by T, looking at the limit where T is becoming very large-- you will see that V of x goes proportionately to x in one regime, while in the other regime, x plus T divided by T goes to a constant. And I can very roughly interpret this as the interaction between particles that are separated by x in this kind of theory. And I see that there is a strong-coupling regime-- high temperature corresponds to strong gauge coupling-- where the further apart the particles are, the potential that is bringing them together becomes linearly stronger. So this is what is called confinement. Whereas in the other limit of low temperatures, or weak coupling, what you find is that the potential between them essentially goes to a constant, so the force would go to 0. So they are asymptotically free.
So if I start with this kind of theory and try to interpret it in the language of quantum field theory as something that describes interactions between particles, I find that it has potentially two phases. One phase in which the particles of the theory are strongly bound together, like quarks that are inside the nucleus: you can try to separate the quarks, but they would snap back. You can't have free quarks. And then there is another phase where essentially the particles don't see each other. And indeed, quarks right inside the nucleus are essentially free; we can regard them as free particles. So this theory actually has aspects of what is known as confinement and asymptotic freedom within quantum chromodynamics. The difference is that in this theory there is a phase transition, and the two behaviors are separated from each other, whereas in QCD it's essentially a crossover from one behavior to another without a phase transition. So we started by thinking about these Ising models, and we branched into theories that describe loops, theories that describe droplets, theories that describe gauge couplings, et cetera. So you can see that the nice, simple partition function that I wrote for you has within it a lot of interesting complexity. We went off the direction that we wanted to go with phase transitions, so we will remedy that next time, coming back to thinking in terms of the Ising model, and trying to do more with understanding the behavior and singularities of this partition function.
MIT_8334_Statistical_Mechanics_II_Spring_2014 / 21_Continuous_Spins_at_Low_Temperatures_Part_2.txt

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So last time, we started looking at a system of spins. There was a field S of x on the lattice, and the energy cost was proportional to differences of spins on neighboring sites, which, if we go to the continuum, became something like gradient of the vector S, squared. You have to integrate this, of course, over all space. We gave this a weight of K over 2. There was an energy cost of this form, so that a particular configuration was weighted by this factor. And to calculate the partition function, we had to integrate over all configurations of this field S. And the constraint that we had was that this was a unit vector, so that this was an n-component field whose magnitude was 1, OK? So this is what we want to calculate. Again, whenever we are writing an expression such as this, we are thinking that we started with some averaging of the system, some kind of coarse graining; there is a short-distance cutoff a underlying all of these theories. Now what we can do is imagine that this vector S in its ground state, let's say, is pointing in some particular direction throughout the system, and that fluctuations around this ground state in the transverse directions are characterized by some vector pi that is n minus 1 dimensional. And so this partition function can be written, rather than in terms of fluctuations of the unit vector S, entirely in terms of the fluctuations of these transverse coordinates pi.
And we saw that the appropriate weight for this n minus 1 component vector pi has within it a factor of something like square root of 1 minus pi squared-- there's an overall numerical factor, but it doesn't matter. And essentially, this says that because you have a unit vector, this pi cannot get too big; its magnitude certainly cannot be larger than 1. And then the expression for the energy cost can be written in terms of two parts. One is the gradient of this vector pi, squared. But then there's also the gradient of the component in the other direction, which has magnitude square root of 1 minus pi squared. So we have this gradient squared. And so basically, these are two ways of writing the same thing, OK? So we looked at this, and we said that once we include all of these terms, what we have here is a non-linear theory that includes, for example, interactions among the various modes. One particular leading order term: if we expand this square root of 1 minus pi squared, its gradient would be something like pi grad pi, so a particular term in this expansion has the form pi grad pi, multiplied with pi grad pi, OK? So a particular way of dealing with these kinds of theories is to regard all of these things as interactions and perturbations with respect to a Gaussian weight, which we can compute easily. And then you can either do that perturbation straightforwardly, or build it from the beginning into a perturbative RG, which is the route that we chose. And this amounts to changing the short-distance cutoff that we have here from a to b times a, and averaging over all modes within that range-- short wavelengths between a and ba. And once we do that, we arrive at a new interaction. So the first step is to do a coarse graining between the range a and ba.
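The claim that pi grad pi, squared, is the leading interaction hidden in the square root term can be checked numerically in a one-component, one-dimensional toy setting. A minimal sketch, with pi(x) = amp times sin(x) as an illustrative fluctuation that is not from the lecture:

```python
import math

# Compare the exact transverse stiffness term (d/dx sqrt(1 - pi^2))^2
# against its leading expansion (pi dpi/dx)^2, for pi(x) = amp * sin(x).
def exact_term(amp, x):
    p, dp = amp * math.sin(x), amp * math.cos(x)
    # d/dx sqrt(1 - p^2) = -p * dp / sqrt(1 - p^2)
    return (p * dp / math.sqrt(1 - p * p)) ** 2

def leading_term(amp, x):
    p, dp = amp * math.sin(x), amp * math.cos(x)
    return (p * dp) ** 2

x = 0.8
for amp in (0.3, 0.1, 0.01):
    print(amp, exact_term(amp, x) / leading_term(amp, x))
# the ratio is 1 / (1 - pi^2), which tends to 1 as the fluctuation shrinks
```

So for small fluctuations the quartic interaction pi grad pi, squared, dominates the corrections to the Gaussian theory, as the expansion asserts.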
But then steps two and three amount to a rescaling in position space, so that the cut-off comes back from ba to a, and a corresponding rescaling in spin space, so that we start with a partition function that describes unit vectors and, after this transformation, end up with a new partition function that also describes unit vectors. So after all three of these procedures, we hope that we are back to exactly the form that we had at the beginning, with the same cut-off, with the same unit vector constraint, but potentially with a new interaction parameter K. And in calculating what this new K is after we've rescaled by a factor of b, the parts that correspond to steps 2 and 3 are immediately obvious. Because whenever I see x, I have to replace it with bx prime, and so from the integration I get a factor of b to the d; from the two gradients, I get a factor of b to the minus 2. So the step that corresponds to this is trivial. The step that corresponds to replacing S with zeta S prime is also trivial, and it will give you zeta squared-- though I have yet to tell you what zeta is; we'll get to that shortly. And finally, the first step, which was the coarse graining, we found replaced K by a factor K that is larger by a certain amount. And the mathematical justification that I gave for this is that we look at this expression, and we see that in this expression, each one of these pi's can be a long wavelength fluctuation or a short wavelength fluctuation. Among the many possibilities is when the 2 pi's that are sitting out front correspond to the short wavelength fluctuations, while the gradients correspond to the long wavelength fluctuations. And you can see that averaging over these two will generate an interaction that looks like gradient of pi-lesser, squared. And that will change the coefficient over here by an amount that is clearly proportional to the average of pi-greater squared.
And that, we can see, in Fourier space is simply 1 over K q squared for a mode that has wave number q. So if I, rather than writing this in real space, write it in Fourier space, this is what I would get for the average. And in real space, I have to integrate over this q appropriately, within the wave numbers between lambda over b and lambda. And this is clearly something that is inversely proportional to K. And the result of this integration of 1 over q squared we simply gave a name, which was I sub d of b, because it depends on the dimension, on the cutoff, et cetera. AUDIENCE: Sir? PROFESSOR: Yes? AUDIENCE: Shouldn't there be an exponential inside the integral? PROFESSOR: Why should there be an exponential inside the integral? AUDIENCE: Oh, I thought we were Fourier transforming. PROFESSOR: OK, it is true, when we Fourier transform, for each pi we will have a factor of e to the iqx. If we have 2 pi's, I will get e to the iqx, e to the iq prime x. But the averaging requires that q and q prime be opposite each other, so the exponentials disappear. So always remember: the integral of any field squared in real space is the same thing as the integral of that field squared in Fourier space. This is one of the first theorems of Fourier transformation, Parseval's theorem. OK, so this is a correction that goes like I d of b over K. And last time, to give you a kind of visual demonstration of what this factor is, I said that it is similar, but by no means identical, to something like this: a mode by itself has very low energy, but because we have coupling among different modes-- here for the Goldstone modes of the surface, but here for the Goldstone modes of the spin-- the presence of a certain amount of short wavelength fluctuations will stiffen the modes that you have for longer wavelengths. Now, I'm not saying that these two problems are mathematically identical.
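The shell integral I_d(b) can be evaluated numerically. A sketch in d = 3 with the cutoff Lambda set to 1 for convenience, checking that for b = 1 + dl it reduces to K_d times Lambda to the d minus 2, times dl:

```python
import math

# I_d(b) = integral over the shell Lambda/b < q < Lambda of d^d q/(2 pi)^d / q^2.
d, Lam = 3, 1.0
Kd = 4 * math.pi / (2 * math.pi) ** d  # surface area of unit sphere / (2 pi)^d

def I_shell(b, steps=10000):
    lo, hi = Lam / b, Lam
    h = (hi - lo) / steps
    # angular integration already done: integrand is K_d q^{d-1} / q^2
    return sum(Kd * (lo + (i + 0.5) * h) ** (d - 3) for i in range(steps)) * h

dl = 1e-4
print(I_shell(1 + dl), Kd * Lam ** (d - 2) * dl)  # nearly equal for small dl
```

This is the "evaluate the integrand on the shell" step used below when b goes to 1 plus delta l.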
All I'm showing you is that the coupling between the short and long wavelength modes can lead to a stiffening of the modes over long distances, because they have to fight off the wrinkles that have been established by the shorter wavelength modes. You have to try to undo them, and that's an additional cost, OK? Now, that stiffening over here is opposed by the factor of zeta over here. Essentially, we said that we have to ensure that what we are seeing after the three steps of RG is a description of a theory that has the same short-distance cutoff and the same spin length, so that the two partition functions can map onto each other. And again, another visual demonstration is that you can decompose the spin over here into a superposition of short and long wavelength modes. And we are averaging over these short wavelength modes, and because of that, we will see that the effective length, once that averaging has been performed, has been reduced. It has been reduced because I will write this as 1 minus pi squared over 2 to the lowest order. And pi squared has n minus 1 components, so this is 1 minus n minus 1 over 2 times the average, and then I have to integrate over all of the modes pi alpha of q in this range. So I'm performing exactly the same integral as above; the reduction is precisely the same integral as above, OK? So the three steps of RG performed for this model, to lowest order in this inverse K-- our temperature-like variable-- give the following. You can see that the answer K prime at scale b is going to be b to the d minus 2, times zeta squared, times K, times 1 plus I d of b over K. For zeta squared-- essentially the square of what we wrote-- I get 1 minus n minus 1 times I d of b over K; the factor of 2 disappears once I square it. That comes from zeta squared. And the plus I d of b over K comes from the coarse graining. And there are still terms at order 1 over K squared, OK?
And finally, we are going to make the same choice that we were making for our epsilon expansion: choose a rescaling factor that is just slightly larger than 1. Yes? AUDIENCE: Sir, you have n minus 1 over K times Id over K. PROFESSOR: Yeah, I put in one factor too many. Thank you. And we will write K prime at b equals 1 plus delta l to be K plus delta l dK by dl. And we note that for calculating this I d of b, when b goes to 1 plus delta l, all I need to do is to evaluate the integrand essentially on the shell. So what I will get is lambda to the power of d minus 2, times the surface area of a unit sphere divided by 2 pi to the d, which is the combination that we have been calling K sub d, OK? So once I do that, what do I get for dK by dl? I will get a d minus 2 here, times K. And then I will get these two factors: there's minus n minus 1, and there's plus 1, so that becomes minus n minus 2, times the coefficient of I d of b, which is Kd lambda to the d minus 2; and the 1 over K and the K cancel. And so that's the expression that we have. Yes? AUDIENCE: Sorry, is the Kd the solid angle factor again? PROFESSOR: OK, so you have to do this integration, which is written as the surface area of the unit sphere, times q to the d minus 1 dq, divided by 2 pi to the d. This is the combination that we have always called K sub d. AUDIENCE: OK. PROFESSOR: OK? Now, it actually makes more sense, since we are making a low temperature expansion, to define a T that is simply 1 over K. It is, again, dimensionless. And then clearly dT by dl is going to be minus 1 over K squared times dK by dl, which is minus T squared dK by dl. So I just have to multiply the expression that I have up here by minus T squared, recognizing that TK is 1. So I end up with the recursion relation for T, which is minus d minus 2 times T, and then plus n minus 2 times Kd lambda to the d minus 2, times T squared. And presumably, there are higher order terms that we have not bothered to calculate.
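The flow dT/dl = -(d-2) T + (n-2) K_d Lambda^{d-2} T^2 can be integrated numerically to see the fixed point at work. A hedged sketch for d = 3, n = 3, Lambda = 1 (illustrative parameter choices, not values singled out in the lecture):

```python
import math

# Euler integration of the one-loop flow of the temperature-like variable T.
d, n, Lam = 3, 3, 1.0
Kd = 4 * math.pi / (2 * math.pi) ** d
c = (n - 2) * Kd * Lam ** (d - 2)
T_star = (d - 2) / c  # fixed point separating the two phases

def flow(T0, l_max=2.0, h=1e-3):
    T = T0
    for _ in range(int(l_max / h)):
        T += h * (-(d - 2) * T + c * T * T)
    return T

print(T_star)
print(flow(0.5 * T_star))  # below T*: T shrinks, flowing toward the ordered phase
print(flow(1.1 * T_star))  # above T*: T grows, flowing toward disorder
```

Starting below T star the coupling flows to strong order; starting above, it runs away to high temperature, exactly the separatrix picture described next.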
So this is the recursion relation we will focus on, OK? So let's see whether this expression makes sense. If I'm looking at dimensions that are less than 2, then the linear term in the expression is positive, which means that if I'm looking at the temperature axis, and this is 0, and I start with a value that is slightly positive, because of this term it will be pushed to larger and larger values. So you may think that you have a system at very low temperature, but you look at it at larger and larger scales, and you find that it becomes effectively something that has higher temperature and becomes more and more disordered. So this is a manifestation of something that we had said before, the Mermin-Wagner theorem: no long-range order in d less than 2, OK? Now, if I go to the other limit, d greater than 2, then something interesting happens, in that the linear term is negative. So if I start with a sufficiently small temperature, or a large enough coupling, it will get stronger as we go towards an ordered phase, whereas the quadratic term for n greater than 2 has the opposite sign and pushes me towards disorder. Which means that there should be a fixed point that separates the two behaviors: any temperature lower than this will give me an ordered phase; any temperature higher than this will give me a disordered phase. And suddenly, we see that we potentially have a way of figuring out what the phase transition is, because this T star is a location that we can perturbatively access. We set the right-hand side to 0, and we find that T star is equal to d minus 2, divided by n minus 2 times Kd lambda to the d minus 2. So now, in order to have a theory that makes sense in the sense of the perturbation that we have carried out, we have to make sure that this is small. And we can do that by assuming that this quantity d minus 2 is a small quantity, making an expansion in d minus 2, OK?
So in particular, T star itself we expect to be related to the transition temperature, which is not something that is universal. But exponents are universal. So what we do is we look at d by dl of delta T, where delta T is T minus T star, in one direction or the other. And for that, what I need to do is to linearize this expression. So I will get a minus epsilon from here, and from here I will get 2 times n minus 2 times Kd lambda to the d minus 2 times T star, times delta T. I just took the derivative, evaluated at T star. And we can see that this combination is precisely the combination that I solved for T star, so this really becomes another factor of epsilon: I have minus epsilon plus 2 epsilon. So this is epsilon delta T. So that tells me that my thermal eigenvalue is epsilon, to this order clearly independent of n. Now, we've seen that in order to fully characterize the exponents, including things like magnetization, et cetera, it makes sense to also put in a magnetic field direction and figure out how rapidly you grow along the magnetic field direction. So for that, one way of doing this is to add a term, h times the integral of S of x. And you can see very easily that under these steps of the transformation, essentially the only thing that happens is that h prime at scale b is h times, from the integration, a factor of b to the d, and, from the replacement of S with zeta S prime, a factor of zeta. So this combination is simply b to the yh. And a little bit of manipulation will tell you that yh is d minus the part that comes from zeta, which is n minus 1 over 2, times lambda to the d minus 2 Kd, times T star. And again, you substitute for lambda to the d minus 2 Kd T star what we have over here, and you get this to be d minus n minus 1 over 2 times n minus 2, times epsilon. And to be consistent to order of epsilon, this d you have to replace with 2 plus epsilon.
And a little bit of manipulation will give you yh, which is 2 plus n minus 3 divided by 2 times n minus 2, times epsilon. I did this calculation of the two exponents rather rapidly. The reason for that is that they are not particularly useful. That is, whereas we saw that coming from four dimensions the epsilon expansion was very useful to give us corrections to the mean-field values-- of 1/2, for example, for nu-- to order of 10 percent or so already on setting epsilon equal to 1, if I here put epsilon equal to 1 to access 3 dimensions, I will conclude that nu, which is the inverse of yT, is 1 in 3 dimensions, independent of n. And, let's say for the superfluid transition, it is actually closer to 2/3. So essentially, this expansion is in some sense much further away from 3 dimensions than the 4 minus epsilon coming from 4 dimensions, although numerically we would have set epsilon equal to 1 in both of them. So nobody really has taken much advantage of this 2 plus epsilon expansion for exponents. So why is it useful? The reason that this is useful is the third case that I have not explained yet, which is: what happens if you sit exactly in 2 dimensions, OK? So if we sit exactly in 2 dimensions, this first term disappears, and you can see that the behavior is determined by the second order term and depends on the value of n. So if I look at n, let's say, that is less than 2, then what I will see is that along the temperature axis the quadratic term-- the linear term is absent-- is negative, let's say for n equals 1, and you're being pushed quadratically, very slowly, towards 0. The one example that we know is indeed n equals 1, the Ising model. And we know that the Ising model in 2 dimensions has an ordered phase. It shouldn't really even be described by this, because there are no Goldstone modes. But n greater than 2, like n equals 3-- the Heisenberg model-- is interesting. And what we see is that here the second order term is positive.
And it is pushing you towards high temperatures, so you see a disordered behavior. And what this calculation tells you that is useful-- that you wouldn't have known otherwise-- is: what is the correlation length? Because the recursion relation is now dT by dl equals n minus 2 over 2 pi, times T squared. My d equals 2, so the lambda to the d minus 2 I can ignore, and Kd is 2 pi from the solid angle, divided by 2 pi squared, which is 1 over 2 pi. So that's the recursion relation that we are dealing with. I can divide through by T squared, and then this becomes d by dl of minus 1 over T equals n minus 2 divided by 2 pi. I can integrate this from some initial value, minus 1 over some initial temperature, to the temperature where I'm at length scale l. What I would have on the right-hand side would be n minus 2 over 2 pi, times l, OK? So I start very, very close to the origin, T equals 0, so I have a very strong coupling at the beginning: 1 over T is huge. And then I rescale to a point where the coupling has become weak-- let's say 1 over T is some number of order 1, or order 0; in any case, it is overwhelmingly smaller than what I started with. How far did I have to go? I had to rescale by a factor l that is related to the temperature that I started with by this relation-- and keeping track of the minus sign that I had in front of the whole thing, the resulting l will be large and positive. And the correlation length-- the length scale at which we arrive at a coupling of order 1 or 0-- is whatever my initial length scale was, times the factor b that I have rescaled by, which is e to the l. And so this is a, times the exponential of 2 pi over n minus 2, times 1 over T.
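Integrating the d = 2 flow dT/dl = (n-2) T^2 / (2 pi) numerically reproduces this exponentially large correlation length. A sketch for n = 3, stopping when T reaches order 1 (the stopping value is an illustrative choice):

```python
import math

# Run the flow from a small T0 until T is of order one; the accumulated
# rescaling e^l then estimates the correlation length xi in units of a.
n = 3

def log_xi(T0, T_stop=1.0, h=1e-4):
    T, l = T0, 0.0
    while T < T_stop:
        T += h * (n - 2) * T * T / (2 * math.pi)
        l += h
    return l

T0 = 0.2
# closed form of the integrated flow: l* = 2 pi (1/T0 - 1/T_stop) / (n - 2),
# whose leading small-T0 behavior is 2 pi / ((n - 2) T0)
print(log_xi(T0), 2 * math.pi * (1 / T0 - 1) / (n - 2))
```

For small T0 the log of the correlation length is dominated by the 2 pi over (n-2) T term, which is the universal form quoted next.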
The statement is that if you have, in 2 dimensions, a system of, let's say, 3-component spins-- and that is something that has a lot of experimental realizations-- you find that as you go towards low temperature, the size of the domains that are ordered diverges according to this nice universal form. And, let's say around 1995 or so, when people had these high temperature superconductors, which are effectively 2-dimensional layers of magnets-- they're actually antiferromagnets, but they are still described by this model with n equals 3-- there were lots of x-ray studies of what happens to the ordering of these antiferromagnetic copper oxide layers as you go to low temperatures. And this form was very much used and confirmed. OK, so that's one thing that one can get from this analysis that has been explicitly confirmed by experiments. And finally, there's one case that I have not mentioned so far, which is n equals 2. When I am at n equals 2, the first and second order terms in this series are both vanishing, and at this stage I really don't quite know what is happening. But we can think about it a little bit. You can see that if you have n equals 2, then essentially you have a 1-component angle. And if I write the theory in terms of the angle theta, let's say, between neighboring spins, then the expansion would simply be gradient of theta squared. And there isn't any other mode to couple with. You may worry a little bit about gradient of theta to the 4th and such things, but a little bit of thinking will convince you that all of those terms are irrelevant. So as far as we can show, there is reason to believe that this series for n equals 2 is 0 at all orders, which means that as far as this analysis is concerned, there is a kind of a line of fixed points. You start with any temperature, and you will stay at that temperature, OK?
Still, you would say that even if you have this gradient of theta squared type of theory, the fluctuations that you have go like 1 over q squared, and the integral of 1 over q squared in 2 dimensions is logarithmically divergent. So the more correct statement of the Mermin-Wagner theorem is that there should be no long range order in d less than or equal to 2. Because for d equals 2 also, you have this logarithmic divergence of fluctuations. So you may have thought that you are pointing along, say, the y direction, but you average over more and more, and you see that the extent of the fluctuations in angle grows logarithmically. You would say that once that logarithm becomes of the order of pi, I have no idea where my angle is; there should be no true long range order. And I'm not going to try to interpret this too much. I just say that the Mermin-Wagner theorem says that there should be no true long range order in systems that have continuous symmetry in 2 dimensions and below, OK? And that statement is correct, except that around that same time, Stanley and Kaplan did high-temperature series analyses of these spin models in 2 dimensions. And what they found was-- OK, let's plot susceptibility as a function of temperature; we calculate our best estimate of the susceptibility from the high temperature series. And let's say they look at the system that corresponds to n equals 3. They see that the susceptibility diverges only when you get to the vicinity of 0 temperature, which is consistent with all of these statements: first of all, the correlation length only diverges at 0 temperature, and the divergence of the susceptibility has to be coupled to that.
And therefore, really, the only exciting thing is right at 0 temperature; there is no region where there is long range order. Except that when they did the analysis for n equals 2, they kept getting the signature that there is a phase transition at a finite temperature in d equals 2 for this xy model, described by just this gradient squared theory, OK? So there is lots of numerical evidence of a phase transition for n equals 2 in d equals 2, OK? So this is another one of those puzzles: if we interpret the existence of a diverging susceptibility in the way that we are used to-- in the Ising model and all the models that we have discussed so far-- in all cases that we have seen, essentially, the divergence of the susceptibility was an indicator of the onset of true long range order, so that on the other side you had something like a magnet. But that is rigorously ruled out by Mermin-Wagner. So the question is, can we have a phase transition in the absence of symmetry breaking? All right? And we already saw one example of that a couple of lectures back, when we were doing the dual of the 3-dimensional Ising model. We saw that the dual of the 3-dimensional Ising model had a phase transition but was rigorously prevented from having true long range order. So there, how did we distinguish the different phases? We found some appropriate correlation function, and we showed that that correlation function had different behaviors at high and low temperature. These two different behaviors could not be matched, and so the phase transition was an indicator of the switch-over in the behavior of the correlation functions. So here, let's examine the correlation functions of our model. And the simplest correlation that we can think of, for a system that is described by unit spins, is to look at the spin at some location and the spin at some far away location and ask how correlated they are with each other.
And so basically, there is some kind of, let's say, underlying lattice. And we pick 2 points, at 0 and at r. And we ask, what is the dot product of the spins that we have at these 2 locations? And clearly, this is invariant under a global rotation. What I can do is pick some kind of axis and define angles with respect to that axis. Let's say with respect to the x direction, I define an angle theta. And then clearly, this is the expectation value of cosine of theta 0 minus theta r. Now, this quantity I can asymptotically calculate both at high temperatures and low temperatures and compare. So let's do a high T expansion. For the high T expansion, I go back to the discrete model and say that what I have here is a system that is characterized by a bunch of angles theta i that I have to integrate over. I have the cosine of theta 0 minus theta r. And I have a weight that wants to make near neighbors parallel. And so I will write it as a product over nearest neighbors of e to the K cosine of theta i minus theta j. OK, so the dot product of 2 spins I have written as the cosine of the angle between nearest neighbors. And of course, I have to then divide by the partition function. Now, if I'm doing the high temperature expansion, that means that this coupling constant K, scaled by temperature, is small. And I can expand this as 1 plus K cosine of theta i minus theta j plus higher orders in powers of K, of course, OK? Now, this looks to have the same structure as we had for the Ising model. In the Ising model, I had something like sigma i sigma j. And if I had a sigma by itself and I summed over the possible values, I would get 0. Here, I have something like a cosine of an angle. And if I integrate, let's say, d theta 0 cosine of theta 0 minus something, then because theta 0 goes over the entire range of angles, this will give me 0. So this cosine I better get rid of.
And the way that I can do that is, let's say, I multiply cosine of theta 0 minus theta r with one of the terms that I would get in the expansion, such as, let's say, a factor of K cosine of theta 0 minus theta 1. So if I call the next one theta 1, I will have a term in the expansion that is cosine of theta 0 minus theta 1, OK? Then this will be non-zero, because I can certainly change the origin. The combination theta 0 minus theta 1 I can call phi, and integrate over phi. From here, this factor is cosine of phi. The other factor becomes cosine of phi plus theta 1 minus theta r, since for theta 0 I am writing phi plus theta 1. And this I can expand as cosine of theta 1 minus theta r cosine of phi minus sine of theta 1 minus theta r sine of phi. Then cosine integrated against sine will give me 0. Cosine integrated against cosine will give me 1/2. So this becomes 1/2 cosine of theta 1 minus theta r. OK, so essentially, we had a term that was like a cosine of theta 0 minus theta r from here. Once we integrate over this bond, then I get a factor of 1/2, and it becomes like a connection between these two. And you can see that I can keep doing that and find the path that connects from 0 to r. For each one of the bonds along this path, I pick one of these factors. And this allows me to get a finite value. And what I find once I do this is that to lowest order, I have to count the shortest path that I have between the two. For each bond I will get a factor of K, and then from the averaging over the angles, I will get 1/2. So it would be K over 2 raised to the power of the shortest path between the 2, OK? So the point is that K is a small number. If I go further and further away, this is going to be exponentially small in the distance between the 2 sites, where the correlation length
can be expressed in terms of K. So this is actually quite a general statement. We've already seen it for the Ising model. We've now seen it for the xy model. Quite generally, for systems at high temperatures, one can show that correlations decay exponentially in separation, because the information about the state of one variable has to travel all the way to influence the other one. And the fidelity with which the information is transmitted is very small at high temperatures. So this is something that we should have expected, and we are getting the expected answer. But now what happens if I go and look at low temperatures? So for low temperatures, what I need to do is to evaluate something that has to do with the behavior of these angles when I go to low temperatures. And when I go to low temperatures, these angles tend to be very much aligned with each other. And these factors of cosine I can therefore start expanding around 1. So what I end up having to do is an integral over all the angles theta i of cosine of theta 0 minus theta r, with a weight which is a product over neighbors of factors such as e to the minus K over 2 theta i minus theta j squared, as I expand the cosine into a Gaussian. And in the denominator I would have exactly the same thing without the cosine of theta 0 minus theta r. So essentially, we see that since the cosine is the real part of e to the i theta 0 minus theta r, what I need to do is to calculate the average of this assuming the Gaussian weight. So the thetas are Gaussian distributed, OK? Now-- actually, this real part I can take outside also. It doesn't matter. I have to calculate this expectation value. And for any Gaussian variable, the expectation value of e to the i times that variable is e to the minus 1/2 the average of the square of whatever you have in the exponent. And again, in case you forgot this, just insert a k here. You can see that this is the characteristic function of the Gaussian distributed variable, which is this difference.
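The two angular averages that drove the high-temperature expansion above, a lone cosine averaging to 0 and cosine against cosine giving 1/2, can be verified directly; a minimal numerical sketch (the value of delta is arbitrary):

```python
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 100_001)
dphi = np.diff(phi)
delta = 0.7   # stands in for theta_1 - theta_r; any value works

def angular_average(f):
    # (1/2pi) times the integral of f over a full period, by the trapezoid rule
    return float(np.sum((f[:-1] + f[1:]) / 2 * dphi) / (2 * np.pi))

lone = angular_average(np.cos(phi))                        # a lone cosine: 0
pair = angular_average(np.cos(phi) * np.cos(phi + delta))  # cos against cos
print(lone, pair, np.cos(delta) / 2)
```

The first average vanishing is why an unpaired cosine kills a term; the second giving cos(delta)/2 is why each bond along the shortest path contributes the factor K/2.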
And the characteristic function I can start expanding in terms of the cumulants. The first cumulant, the average, is 0 by symmetry. So the first thing that will appear, which would be at the order of k squared, is going to be the variance, which is what we have over here. And since it's a Gaussian, all higher order terms in this series will vanish. Another way to do it is, of course, to just complete the square. And this is what would come out, OK? So all I need to do is to calculate the expectation value of this quantity where the thetas are Gaussian distributed. And the best way to do so is to go to Fourier space. So for each one of these factors of theta 0 minus theta r, I will do an integral d2q over 2 pi squared. I have 1 minus e to the iq.r, which is from theta 0 minus theta r. And then I have a theta tilde q. I have two of those factors: I have d2q prime over 2 pi squared, 1 minus e to the minus iq prime dot r, theta tilde q prime. And then this average simply becomes the average over the Gaussian modes. And the different modes are independent of each other. So I will get a 2 pi to the d-- actually, 2 here-- times a delta function of q plus q prime. And for each mode, I will get a factor of 1 over Kq squared, because after all, the thetas are very much like the pi's that I had written at the beginning, OK? So what I will have is that this quantity is an integral over one q. Putting these two factors together, realizing that q prime is minus q, will give me 2 minus 2 cosine of q.r, divided by Kq squared. So there is an overall scale that is set by 1 over K, by temperature. And then there's a function of r, which is the Fourier transform of 1 over q squared, which, as usual, we call C. We anticipate this to be like a Coulomb potential. Because if I take the Laplacian of C, acting on the cosine it brings down a minus q squared that cancels the 1 over q squared. I will have the integral d2q over 2 pi squared of the cosine itself; the 1 over q squared disappears.
This is e to the iqr plus e to the minus iqr over 2. Each one of them gives a delta function. So this is just a delta function. So C is the potential that you have from a unit charge in 2 dimensions. And again, you can perform the usual Gaussian procedure to find that the gradient of C times 2 pi r is the net charge that is enclosed, which is unity. So the gradient of C, which points in the radial direction, is going to be 1 over 2 pi r. And your C is going to be log of r divided by 2 pi. So this is 1 over K log of r divided by 2 pi. And I state that when essentially the 2 angles are as close as some short distance cut-off, fluctuations vanish. So that's how I set the 0 of my integration, OK? So again, you put that over here. We find that s0.s of r in the low temperature limit is the exponential of minus 1/2 of this. So I have log of r over a divided by 4 pi K, and I will get a over r to the power of 1 over 4 pi K. And I want to check that I didn't lose a factor of 2 somewhere-- yes, I lost a factor of 2 right here. There should be a 2 because of this 2, if I'm using this definition. So this should be a 2, and the exponent should be 1 over 2 pi K. OK. So what have we established? We have looked, as a function of temperature, at the behavior of this spin-spin correlation function. We have established that in the high temperature limit, it falls off exponentially with separation. We have also established that at low temperature, it falls off as a power law in separation, OK? So these two functional behaviors are different. There is no way that you can connect one to the other. So you pick two spins that are sufficiently far apart and then move the separation further and further away. And the functional form of the correlations is either a power-law decay or an exponential decay. If it is the exponential form, you know you are at high temperature. If it is the power-law form, you know you are at low temperature.
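The statement that C, the Fourier transform of 1 over q squared, behaves as log(r)/(2 pi) can be checked by summing the modes on a finite periodic lattice; a sketch, assuming the lattice Laplacian 4 minus 2 cos qx minus 2 cos qy as a stand-in for q squared (the two agree at small q, which dominates at large r), with illustrative sizes:

```python
import numpy as np

def coulomb_C(r, L=512):
    """Mode sum C(r) = (1/L^2) sum over q != 0 of (1 - cos(qx * r)) / D(q),
    with D the lattice Laplacian, for a displacement r along the x axis."""
    k = 2.0 * np.pi * np.arange(L) / L
    denom = 4.0 - 2.0 * np.cos(k)[:, None] - 2.0 * np.cos(k)[None, :]
    denom[0, 0] = np.inf                  # drop the q = 0 mode
    num = (1.0 - np.cos(k * r))[:, None]  # depends only on qx
    return float(np.sum(num / denom) / L**2)

# Doubling r should add log(2)/(2*pi), about 0.11, to C(r).
diff = coulomb_C(16) - coulomb_C(8)
print(diff, np.log(2) / (2 * np.pi))
```

The logarithmic growth of C is precisely what turns the exponential of the fluctuation average into the power law a over r to the 1 over 2 pi K.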
So potentially, there could be a phase transition separating the distinct behaviors of the correlation function. And that could potentially be underlying what is observed over here, OK? Yes? AUDIENCE: So where did we make the assumption that we're at low temperature in the second expansion? PROFESSOR: When we expanded the cosines, right? So what I should really do is to look at terms such as this. But then I said that I'm at low enough temperature that I can look at near neighbors, and they're almost parallel. So the cosine of the angle difference between them is expanded in the square of that small angle. AUDIENCE: Thank you. PROFESSOR: Yeah. OK? So actually, you may say that I could have done the same small angle expansion not only for n equals 2, but also for n equals 3, et cetera. That would be correct, because I could have made a similar Gaussian analysis for n equals 3 also. And then I might have concluded the same thing, except that I cannot conclude the same thing, because of the result that we derived over here. What this shows is that the expansion around 0 temperature, regarded as Gaussian, is going to break down because of the non-linear coupling that we have between modes. So although I may be tempted to write something like this for n equals 3, I know why it is wrong. And I know the correlation length at which this kind of behavior will need to be replaced with this type of behavior, because effectively, the expansion parameter became of the order of 1. But I cannot do that for the xy model. I don't have a similar reason. So then the question becomes, well, how does this expansion eventually break down, so that I will have a phase transition to a phase where the correlations are decaying exponentially? And you may say, well, it's really something to do with having to go to higher and higher order terms in the expansion of the cosine.
And it's going to be something which would be very difficult to figure out, except that it turns out that there is a much more elegant solution. And that was proposed by Kosterlitz and Thouless. And they said that what you have left out in the Gaussian analysis are topological defects, OK? That is, when I did the expansion of the cosine and we replaced the cosine with the difference of the angles squared, that's more or less fine, except that I should also realize that the cosine maintains its value if the angle difference goes up by 2 pi. And you say, well, neighboring spins are never going to be 2 pi different, or pi different, because they're very strongly coupled. Does it make any difference? It turns out that, OK, for the neighboring spins, it doesn't make a difference. But what if you go far away? So let's imagine that this is, let's say, our system of spins. And what I do is I look at a configuration such as this. Essentially, I have spins radiating out from a center, such as this, OK? There is, of course, a lot of energy cost I have put over here. But when I go very much further out, let's say, very far away from this plus sign that I have indicated over here, and I follow what the behavior of the spins is, you see that as I go along this circuit, the spins start by pointing this way, then they point this way, this way, et cetera. And by the time I complete a circuit such as this, I find that the angle theta has also rotated by 2 pi, OK? Now, this is clearly a configuration that is going to be costly. We'll calculate its cost. But the point is that there is no continuous deformation that you can make that will map this into what we were expanding around, with all of the spins parallel to each other. So this is a topologically distinct contribution from the Gaussian one, the Gaussian term that we've calculated.
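The winding just described, that theta comes back rotated by a full 2 pi after a circuit around the center, can be checked on an explicit configuration; a sketch using the radial field theta = n times atan2(y, x), where n = 1 corresponds to the configuration above (negative and higher n are included as illustrative generalizations):

```python
import numpy as np

def winding(n=1, radius=10.0, steps=400):
    """Total change of theta = n * atan2(y, x) around a circle enclosing the
    core; each increment is wrapped into (-pi, pi], since the cosine only
    fixes angles modulo 2*pi."""
    t = np.linspace(0.0, 2.0 * np.pi, steps, endpoint=False)
    theta = n * np.arctan2(radius * np.sin(t), radius * np.cos(t))
    d = np.diff(np.concatenate([theta, theta[:1]]))
    d = (d + np.pi) % (2.0 * np.pi) - np.pi   # wrap each step
    return float(d.sum())

# Winding number = total angle change / 2*pi, independent of the loop radius.
print(winding(1) / (2 * np.pi), winding(-1) / (2 * np.pi), winding(2) / (2 * np.pi))
```

Because the winding is an integer no matter how the loop is deformed, no continuous deformation connects this configuration to the uniform one, which is what makes the defect topological.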
Since the direction of the rotation of the spins is the same as the direction of the circuit in this case, this is called a plus topological defect. There is a corresponding minus defect, which is something like this. OK, and for this, you can convince yourself that as you make a circuit such as this, the direction of the arrow actually rotates in the opposite direction, OK? This is called a negative topological defect. Now, let's figure out what the energy cost of one of these things is. If I'm away from the center of one of these defects, then the change in angle between neighbors is small, because the change in angle as I go all the way around a circle of radius r should add up to 2 pi-- 2 pi being the ambiguity that is allowed by the cosines. So what I have is that the gradient of theta times 2 pi r, which is the circumference, is 2 pi n. And this n, here it is plus 1. Here it is minus 1. And in general, you can imagine possibilities where this is some integer like 2 or minus 2, or something else that is allowed by this degeneracy of the cosine, OK? So you can see that when you are far away from the center of whatever this defect is, the gradient of theta has magnitude n over r, OK? And as you go further and further, it becomes smaller and smaller. And the energy cost out here, which you can obtain by essentially expanding the cosine, is going to be proportional to the change in angle squared. So the cost of the defect is an integral of 2 pi r dr times this quantity n over r squared, multiplied by the coefficient of the expansion of the cosine, K over 2. And this integration I have to carry all the way to the edge of my system. Let's call it l. And then I can bring it down, not necessarily to the scale of the lattice spacing, but maybe to the scale of 5 lattice spacings or something like this, where the approximations that I have used of treating this as a continuum are still valid.
So I will pick some kind of a short distance cut-off a. And then whatever energy is at scales below a, I will add to a core energy that depends on whatever this a is. I don't know what that is. So basically, there is some core energy depending on where I stop this. And the reason the other piece is more important is because here I have an integral of 1 over r. And an integral of 1 over r is something that is logarithmically divergent. So I will get-- the 2 cancels the 2-- K pi n squared log of l over a, OK? So you can see that creating one of these defects is hugely expensive: an energy that, as your system becomes bigger and bigger-- and we're thinking about infinite sized systems-- is logarithmically large. So you would say these things will never occur, because they cost an infinite amount of energy. Well, the thing is that entropy is also important. So if I were to calculate the partition function that I would assign to one of these defects, part of it would be the exponential of minus this energy. So I would have e to the minus this core energy, and I would have the exponential of minus K pi n squared log of l over a. So that's the Boltzmann weight for this. But then I realize that I can put this anywhere on the lattice, so there is an entropy gain factor. And since I have assigned this to have some kind of a bulk to it, with some characteristic size a, the number of distinct places that I can put it is of the order of l over a squared. So basically, I take my huge lattice and I partition it into cells of size a. And I say I can put it in any one of these positions. You can see that the whole thing is going to be e to the minus this core energy, and then I have l over a to the power of 2 minus pi K n squared, OK? So the logarithmic energy cost is of the same form as the logarithmic entropy gain that you have over here.
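The competition just set up, an energy of pi K n squared log(l/a) against an entropy of 2 log(l/a), can be written in a couple of lines; a sketch (the core energy is dropped since it does not grow with system size, and the value of l over a is arbitrary):

```python
import numpy as np

def defect_log_weight(K, n=1, l_over_a=1e6):
    """log of the single-defect weight, (2 - pi*K*n**2) * log(l/a):
    entropy contributes +2*log(l/a), the energy cost -pi*K*n**2*log(l/a).
    Positive means free defects proliferate; negative means they are
    suppressed by the energy cost."""
    return (2.0 - np.pi * K * n**2) * np.log(l_over_a)

K_c = 2.0 / np.pi   # the balance point for n = 1
low_K, high_K = defect_log_weight(0.9 * K_c), defect_log_weight(1.1 * K_c)
print(low_K, high_K)
```

The sign of the exponent 2 minus pi K n squared flips exactly at K equals 2 over pi n squared, which is the estimate of the transition point discussed next.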
And this precise balance will give you a value of K such that if K is larger than 2 over pi n squared, this weight is going to be vanishingly small. There is a huge negative power of l here that says, no, you don't want to create this. But if K becomes weak, such that the 2, the entropy factor, wins, then you will start spontaneously creating these defects. So you can see that suddenly we have a mechanism: somewhere along our picture over here there is a point, something like K equals 2 over pi, such that on one side you would say, I will not have topological defects and I can use the Gaussian model, and on the other side, you say that I will spontaneously create these topological defects. And then the Gaussian description is no longer valid, because I have to now really take care of the angular nature of these variables, OK? So this is a nice picture, which is only a zeroth order picture. And to zeroth order, it is correct. But it is not fully correct, because even at low temperatures, you can certainly create pairs of plus minus defects, OK? And whereas the field for one of them falls off at large distances as the gradient of theta, which is 1 over r, if you superimpose what is happening for two of these, you will convince yourself that if you have a pair of defects of opposite sign at a distance d, the distortion that they generate at large distances falls off not as 1 over r, which is, if you like, a monopole field, but as d over r squared, which is a dipole field. So whenever you have a dipole, you will have to multiply by the separation of the charges in the dipole. And that is compensated by an additional factor of 1 over r in the denominator. There is some angular dependence, but we are not so interested in that. Now, if I were to integrate this square, we can see that it is something that is convergent at large distances. And so this is going to be finite.
It is not going to diverge as the size of the system, which means that whereas there was no way that I could create individual defects in my system, I can always create pairs of these defects. So the correct picture that we should have is not that at low temperatures you don't have defects and at high temperatures you have these defects spontaneously appearing. The correct picture is that at low temperatures, what you have is lots and lots of dipoles that are pretty much bound to each other. And when you go to high temperatures, what happens is that you have these pluses and minuses unbound from each other. So if you like, the transition is between molecules and a plasma as temperature is changed. Or if you like, it is between an insulator and a conductor. And how to mathematically describe this phase transition in 2 dimensions, which we can do rigorously, is what we will do in the next couple of lectures.
CS_285_Deep_RL_2023 | CS_285_Lecture_17_Part_2_RL_Theory.txt |

In the second part of today's lecture, we're going to do some theoretical analysis of a model-free reinforcement learning algorithm that is kind of similar to one that we might actually want to use, namely fitted Q iteration. Now, of course, we know that real fitted Q iteration in general is not guaranteed to converge, so we're going to use an idealized model of fitted Q iteration that is a little more simplified than the real method, but is amenable to theoretical analysis. OK, so here is our abstract model of exact Q iteration. In exact Q iteration, we're going to, at every iterate, set Q k plus 1 to be equal to some operator T times Q k. The operator T is going to be the Bellman optimality operator, so T times Q is equal to r plus gamma times P times the max over a of Q. OK, so the max operator here is kind of weird, because Q is a vector of length S times A, and we're going to say that max over a of that S-A length vector results in a new vector that is of length S. So it's max over a, but it's kind of this blockwise max, where over all the actions corresponding to the same state, it computes one entry, which is the max over those actions. So this is not a full max over the whole vector Q; it's actually this block max. So there's a little bit of notational convenience here. Anyway, don't worry too much if my notation is confusing: T Q is basically exactly what you think it is, it's just the thing that takes a Q function and performs a max Bellman backup. Now, that's exact Q iteration. Here's how we're going to model approximate fitted Q iteration: we're going to say that Q hat k plus 1 is going to be the result of some kind of minimization over Q hat to minimize Q hat minus T hat Q hat k. So there are going to be two sources of error here. One is that T hat is not the same as T-- I'll talk about that in a second-- and the other one is that the minimization will not be exact either, so Q hat k plus 1 will not actually be equal to T
hat Q hat k. So we're going to have an approximate Bellman operator, and we're going to have an approximate minimization. The approximate Bellman operator T hat Q is going to be equal to r hat plus gamma P hat. Let's unpack this a little bit. r hat is, in this case, for the purpose of this analysis, just the reward averaged over all the times when we've seen the state-action pair (s, a). So the value in r hat for (s, a) is just the sample average: 1 over the number of times we've seen (s, a), times the sum over all of our samples of r i, for every r i whose (s i, a i) corresponds to (s, a). So it's basically exactly what you think it is: it's just the average reward we've seen for that state-action tuple. And P hat, basically the same as before, is the number of times we've seen the transition (s, a, s') divided by the number of times we've seen (s, a). Now, this might look like the same kind of idea that we had in the model-based analysis before, but note that these are not models: this is the effect of averaging together different transitions in the data. So what we would do in a real fitted Q iteration algorithm is we would have different losses for every single sample. So for every sample, we would have something like Q of (s i, a i), minus r i plus gamma max over a' of Q of (s i', a'), squared, or some other difference-- maybe not squared, maybe absolute value-- and we would average them all together. So in this idealized model, we're basically saying that the effect of averaging together these different losses kind of looks like doing a backup under this empirical model. So P hat and r hat are basically empirical models of the reward and transitions. Technically, this is what you would get if you were to average together all the target values for a given state-action tuple. So if you've seen a given (s, a) five times and you average together the target values for those five instances of that same (s, a), you would get exactly the same thing as if
you were to employ this version of r hat and P hat. OK, now, I'm saying all of this, but this is just justification for this idealized model. So from here on out, we're just going to deal with r hats and P hats. All that explanation was just to justify why viewing approximate fitted Q iteration as doing this T hat backup is reasonable: it's reasonable because if you were to just average together the target values for a given state-action tuple over all the samples that contain that state-action tuple, this is exactly what you would get. OK, now, at this point, what we're actually going to see in our analysis is that the fact that r hat and P hat are not exactly the same as r and P is going to induce some error, and we call that sampling error, because the reason for the error is that r hat and P hat are inexact because we have a finite number of samples. If we had an infinite number of samples, then r hat would be equal to r and P hat would be equal to P, but for a finite number of samples, we incur sampling error. But that's not the only source of error: this minimization will also be inexact. So we won't actually be able to get Q hat k plus 1 to perfectly match T hat Q hat k, because it's fitted Q iteration, and there's some kind of function approximation or some kind of inexact learning going on. So we need some model for this error, and an important thing here is which norm we're going to use. So we saw before, in our discussion of Q iteration back in the beginning of the course, that if we do this with squared error, the problem is that we can't even prove convergence of the algorithm. Now, that's a significant problem, but even though we can't prove convergence of the real algorithm, maybe we can assume some kind of idealized algorithm and at least study how error in this idealized algorithm depends on the problem parameters. So we need to idealize this a little bit, and in order to idealize it, we're going to assume that we're actually
minimizing the infinity norm, and furthermore, we're going to assume that we can get the infinity norm to be less than or equal to some constant. So we'll assume that every iteration, we compute this approximate Bellman backup T hat Q hat k, and then we fit the new Q hat k plus 1 to this T hat Q hat k with an infinity norm error that is less than or equal to some constant, and I'll come back to this later. So this assumption is kind of made out of convenience, because it's difficult to do this with the L2 norm. OK, so which questions are we going to want to study? Well, one question is: as the number of fitted Q iteration iterations approaches infinity, what does Q hat k actually approach? In particular, how much does Q hat k differ from Q star asymptotically, as we take infinitely many approximate fitted Q iteration iterations? Well, where will our errors come from? They'll come from two sources. One is that T is not equal to T hat, so that's sampling error. And the other one is that Q hat k plus 1 is not equal to T hat Q hat k, and we call that approximation error: basically, when we approximate the backed-up previous Q function, meaning the target values, with some new Q function, we incur some approximation error, and we can try to quantify that. OK, so let's first analyze just the sampling error. So we'll just analyze the problems we get from the fact that T is not equal to T hat, and this is going to look pretty similar to what we saw in the previous section. We're basically going to figure out how the real thing that we're doing, the one that has T hat Q hat, is different from what we would have gotten if we used T times Q hat. So in particular, we want to understand this difference: how different will T hat Q be from T Q, for some Q function Q? We don't care what Q is at this point; we just want to understand the difference between applying T hat to it and applying T to it. So if we were to write that out, well, let's just substitute in the definition of T hat and T in there, and we'll
collect the terms. So we'll collect the r terms and the next-value terms, and we get r hat minus r, plus gamma times the expected value of the max under P hat, minus the expected value of the max under P. And by the triangle inequality, as usual, we can bound the norm of this sum with the sum of their norms, so we get the norm of r hat minus r, plus gamma times the norm of the difference of expectations. Now, the first value here is exactly the estimation error of a continuous random variable. What did we learn that allows us to bound the estimation error of a continuous random variable? Well, this is exactly Hoeffding's inequality. So if we just directly plug in the formula for Hoeffding's inequality, we get that the difference between r hat and r is just going to be less than or equal to 2 times the maximum possible reward-- that's basically the range of rewards, B plus minus B minus-- times the square root of log of 1 over delta, over 2n. So our familiar bound: the error will scale as 1 over root n. For the second part, this is just the sum over all next states of the difference of P hat and P, times the maximum over the action of Q(s', a'), and we can bound that by replacing the max over a' with a max over s' and a', so that'll make this Q term independent of s'. Right, because if you average together the values of some vector, that's bounded by summing against the maximum value of that vector, since any entry in the vector is less than or equal to its maximum. And now, looking at this equation, hopefully you'll all recognize this as the total variation divergence between P hat and P, times some constant quantity that depends on Q. And in particular, that constant quantity is just the infinity norm of Q. So this is exactly equal to the total variation divergence between P hat and P, times the infinity norm of Q. And we already had a bound for that: that's basically going to use that concentration
inequality for estimating categorical distributions, and therefore this is bounded by some constant times the infinity norm of Q, times the square root of the log of 1 over delta divided by n. So again, the error scales as 1 over root n, and there's some constant that comes from the dimensionality and so on, and the infinity norm of Q. All right, so that's sampling error, and this is more or less following the same logic as we had in the previous section. So what we have is that the difference between applying this empirical Bellman backup, the approximate backup, and the true backup is bounded by two terms: one that depends on the error in the reward, and one that depends on the error in the dynamics. So that means that the infinity norm of the difference between T hat Q and T Q is basically also going to have this form, just with slightly different constants and with some terms that depend on the number of states and actions, and we get that by using the union bound. Remember, the reason that we need to use the union bound is that these inequalities all hold with probability 1 minus delta, so if you have n different events, then you need to bound the probability of all of those events happening, and that's what the union bound does. OK, but you don't really have to worry about this; all this really changes is the constants. OK, so that's sampling error. Now, what about approximation error? Well, let's make some assumption. Let's assume that when you fit Q hat k plus 1 to the target values, which we're going to say are T Q hat k, your fit has an infinity norm error of at most epsilon k. And for now, we're going to analyze the case where we have an exact backup, but we'll come back to the approximate backup later. So let's just pretend for a minute that our backup is exact, so there's no difference between T and T hat, and we're just studying the effect of error in the fit. So if we had an exact fit, if we had an exact tabular Q iteration method, then Q hat k plus 1 would
be exactly equal to T Q-hat k. Now we're going to assume that it's not exact, that it incurs some error, and that that error is bounded in the infinity norm. This is a strong assumption: in reality, if you're doing supervised learning, your error is not bounded in the infinity norm; it's going to be bounded in something like a weighted L2 norm, in expectation under some distribution. We're going to assume it's bounded in the infinity norm, which means that for the worst possible state-action tuple, your error is at most epsilon k. So this is a strong assumption, but it will make things very convenient. Okay, so now what we're going to try to understand is the difference between Q-hat k at some iteration k and the real Q-star, again in the infinity norm. Here's how we're going to do it. We're going to use the same trick as before: we're going to subtract and add some quantity, and the particular quantity that we're going to put in is T times Q-hat k minus 1. The reason is that we're fitting Q-hat k to T Q-hat k minus 1, so if we put that in, that's the quantity we can bound. So we're going to subtract it and add it, and then we'll group these two terms together. We're going to have one term which is Q-hat k minus T Q-hat k minus 1; that's convenient, because that's the quantity we're going to be bounding by assumption. Then we have this other term, which consists of the backup of Q-hat k minus 1 minus the backup of Q-star. Now, Q-star is the fixed point of T, so you can always replace Q-star with T Q-star, which is what I did on this line. Then we'll apply the triangle inequality to bound the infinity norm of the sum by the sum of their infinity norms, and the first term here, Q-hat k minus T Q-hat k minus 1, is just bounded by epsilon k minus 1 by our assumption at the top. The indexing is off by 1; that's why it's k minus 1.
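The subtract-and-add step being described here can be written out in symbols; this is just a transcription of the derivation from the slides, with $\mathcal{T}$ denoting the exact Bellman backup:

```latex
\begin{aligned}
\|\hat{Q}_k - Q^\star\|_\infty
  &= \|\hat{Q}_k - \mathcal{T}\hat{Q}_{k-1}
       + \mathcal{T}\hat{Q}_{k-1} - \mathcal{T}Q^\star\|_\infty
     && (Q^\star = \mathcal{T}Q^\star) \\
  &\le \|\hat{Q}_k - \mathcal{T}\hat{Q}_{k-1}\|_\infty
       + \|\mathcal{T}\hat{Q}_{k-1} - \mathcal{T}Q^\star\|_\infty
     && \text{(triangle inequality)} \\
  &\le \epsilon_{k-1}
       + \|\mathcal{T}\hat{Q}_{k-1} - \mathcal{T}Q^\star\|_\infty .
     && \text{(fit assumption)}
\end{aligned}
```

The remaining backup-difference term is exactly the one the contraction property of $\mathcal{T}$ takes care of.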
so that leaves us with the second term. Now, for the second term, we're going to use an interesting fact that we saw way back when we first learned about Q-learning, which is that the Bellman backup is a contraction. The fact that the Bellman backup is a contraction in the infinity norm means that the infinity norm of the difference between what you get by applying T to two different Q functions is less than or equal to gamma times the infinity norm of the difference between those Q functions. So, using the fact that T is a contraction with modulus gamma, we can bound this by gamma times Q-hat k minus 1 minus Q-star. And now what we've done is related Q-hat k minus Q-star recursively to Q-hat k minus 1 minus Q-star, but with the addition of this little error term epsilon k minus 1. Okay, so now we're going to unroll this recursion. Applying the same thing again to Q-hat k minus 1 minus Q-star, we bound the whole thing by epsilon k minus 1 plus gamma epsilon k minus 2 plus gamma squared times the difference. Then we can do it again, and we get gamma squared times epsilon k minus 3 plus gamma cubed, and so on and so on. If we go all the way back to the beginning, then we end up bounding the whole thing by this gamma-discounted sum from i equals 0 to k minus 1 of gamma to the i times the corresponding epsilon, plus gamma to the k times the difference between Q-hat 0 and Q-star. Now, this tells us something very interesting and also very useful, which is that the more iterations we take, the more we essentially forget our initialization, because as k goes to infinity, this gamma to the k term will vanish, since gamma is less than 1, which means that the effect of our starting point Q-hat 0 is going to vanish. So if we take the limit as k goes to infinity, the second term vanishes because gamma to the k approaches zero, and for the first term, we're going to simplify it a little bit: we're going to replace all those epsilon k minus i minus 1
terms with just the maximum epsilon we get over all the iterations, and that's probably reasonable to do, because if our fitting error is bounded for every iteration, we can also bound it over all iterations. Now we get our familiar geometric series, the sum from i equals zero to infinity of gamma to the i times some constant, and that's equal to 1 over 1 minus gamma times the infinity norm of epsilon, where epsilon is a big vector with k dimensions, where every dimension has the error at that iteration. So that's pretty neat: now we see how error scales, for just the approximation error. If we incur some epsilon of fitting error at every step, then the total error we get will be epsilon times 1 over 1 minus gamma. So the longer our horizon, essentially, the more error we get, which intuitively you can think of as saying that every time you back up, which is every iteration of fitted Q-iteration, you incur some additional error. Since the number of backups you need to make is equal to your horizon, that's the order of the approximation error that you'll see. So now let's put these two things together. We've got our sampling error, which quantifies the difference between T-hat and T, and we've got our approximation error, which is how much Q-hat k+1 will differ from T Q-hat k; they'll differ due to sampling error and due to the approximation error. So essentially what we're going to do is subsume the sampling error inside the epsilons, and that will let us connect these two parts up. Stated another way, the bound from the previous slide can also be rewritten as: the limit as k goes to infinity of Q-hat k minus Q-star is less than or equal to 1 over 1 minus gamma times the max over all of your iterations of the difference between Q-hat k and T Q-hat k minus 1. This contains both kinds of errors, because there's a T in there, so if you're actually backing up using T-hat
instead of T, you'll have an error there, and it contains the approximation error due to imperfect fit. So let's just examine what this quantity is: Q-hat k minus T Q-hat k minus 1. We're going to put in T-hat Q-hat k minus 1, so we'll subtract and add it, the same trick as before. We'll again group the terms, so we're bounding this whole thing by Q-hat k minus T-hat Q-hat k minus 1, plus T-hat Q-hat k minus 1 minus T Q-hat k minus 1. The second term is basically taking care of the sampling error, and the first term is taking care of the approximation error. We know the first term is that epsilon k, and the second term is that big sampling error bound up above. So what we can do is take these two terms and plug them in here, and we can use that to calculate the difference between Q-hat k and Q-star in the limit as k goes to infinity, and it will be 1 over 1 minus gamma times a bunch of terms, basically a sum of three terms, two of them coming from sampling error and one coming from approximation error. So here's what we had on the previous slide. We can see here that error compounds with the horizon, over iterations and due to sampling. Notice that in the sampling error, the second term actually is also of order 1 over 1 minus gamma, because the infinity norm of Q is on the order of R-max over 1 minus gamma. We discussed this before: we talked about how the value functions and Q functions basically have magnitudes that are the reward times the horizon, so it's R-max over 1 minus gamma. So if you imagine what will happen if we substitute epsilon k plus the sampling error into that second equation, you have a 1 over 1 minus gamma term in front, and then you have a sum of three terms, one of which itself also has a 1 over 1 minus gamma term in it. So the overall order of the error will be quadratic in 1 over 1 minus gamma, just like we saw in part one. Now, so far we've needed strong assumptions, specifically infinity norm assumptions on the error that we're incurring, so you
know, that is a fairly strong assumption that is not always going to hold. More advanced results can actually be derived with p-norms under some distributions. So infinity norms are not really realistic for practical learning algorithms; it is possible to do some analysis with p-norms, and you can learn more about that in the RL theory book referenced at the bottom of the slide. Basically, this analysis studies norms of this form, where the p, mu norm is just the expected value under mu of the difference raised to the power p, and then the whole thing raised to 1 over p. So if p is equal to 2, this is actually the quadratic Bellman error that we're used to. So there's something else that we can do with that, but we need some assumptions there too, to avoid this non-convergence issue you |
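Before moving on, the two headline claims of the unrolled recursion (the initialization term gamma to the k vanishes, and a per-iteration fitting error epsilon accumulates to epsilon over 1 minus gamma) are easy to check numerically. A minimal sketch; the function name is made up, and the recursion `e_k = eps + gamma * e_{k-1}` is exactly the bound from the slides:

```python
# The error recursion from the analysis: e_k <= eps + gamma * e_{k-1}.
# Unrolled, e_k = gamma**k * e0 + sum_{i<k} gamma**i * eps, so the
# initialization is forgotten and the limit is eps / (1 - gamma).

def error_bound(eps, gamma, e0, k):
    e = e0
    for _ in range(k):
        e = eps + gamma * e
    return e

eps, gamma = 0.1, 0.9
print("limit eps/(1-gamma):", eps / (1 - gamma))   # 1.0 here
for e0 in (0.0, 50.0):                             # wildly different initial errors
    print(e0, round(error_bound(eps, gamma, e0, 500), 6))
# both converge to the same limit: the gamma**k * e0 term has vanished
```

Quadrupling the horizon factor 1/(1-gamma) scales the limiting error proportionally, which is the "every backup adds error" intuition from the lecture.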
CS_285_Deep_RL_2023 | CS_285_Lecture_13_Part_5.txt | All right, next I'm going to discuss algorithms for exploration in deep RL that draw on the ideas from posterior sampling, or Thompson sampling, that I've discussed before. As a reminder, Thompson sampling or posterior sampling in a bandit setting refers to the case where we actually estimate a model of our bandit. So if the thetas parameterize the distribution over the bandit's rewards, we would actually maintain a belief over theta, and then at each step of exploration we would sample thetas based on our beliefs and take the action that is the argmax for the bandit described by that corresponding model. So in the deep RL setting, we could ask: what is it that we should sample, and how do we represent the distribution? In the bandit setting there isn't really a choice to be made; the only thing that's unknown is the model of the rewards, and that model is pretty simple, so it's not too hard to represent. In the deep RL setting, this is of course a lot more complicated. In the bandit setting, p-hat parameterized by theta 1 through theta n is a distribution over rewards; the analog in MDPs would be a Q function, because in a bandit the instantaneous reward is basically all you need to know, so you can choose your action as the argmax of the reward. In an MDP, we don't choose our action as the argmax of the reward; we choose the action as the argmax of the Q function. So the way that we could adopt posterior sampling or Thompson sampling, and this is not the only way, but one particularly simple way, is to sample a Q function from a distribution over Q functions, act according to that Q function for one episode, then update your Q function distribution, and repeat. Now, since Q-learning is off-policy, we actually don't care which Q function was used to collect that episode; we can train our whole distribution over Q functions on the same data, so it's okay if we use a different
exploration strategy or a different policy for every single rollout. How do we represent a distribution over functions? Well, one of the things we could do is think back to the model-based RL lectures, where we learned how we can represent distributions by using bootstrap ensembles, and essentially try the same thing here. Given a dataset D, we resample that dataset with replacement N times to get N datasets, D1 through DN, and then we train a separate model on each of those datasets, which basically means we're training a separate Q function on each of those datasets. Then, to sample from the posterior, we simply choose one of those models at random and use that model. Here is a little illustration that shows uncertainty intervals estimated by these bootstrapped neural nets. Now, of course, training N big neural nets is expensive. How can we avoid it? Well, we use the same trick that we used in the model-based RL lectures, which is to not do the resampling with replacement and just use the same dataset. Furthermore, one of the things we could do, and this is described in the paper at the bottom called Deep Exploration via Bootstrapped DQN, is actually train one network with multiple heads. Now, that's not ideal, because the outputs of those different heads will be correlated, but in practice they might be different enough to give us a little bit of variability for exploration. So this might not be a great way to estimate a very accurate posterior, but it might be good enough to ensure that each of those heads has slightly different behavior. By the way, for those of you who are not familiar with the deep learning terminology, when I say multiple heads, what I mean is that all of the layers in the network are shared except for the last layer; there are multiple copies of the last layer, each of which we refer to as a different head. All right, so why does this work? Well, exploring with random actions, for example by using
something like epsilon-greedy, results in one problem: you kind of end up oscillating back and forth, and you might not go to a coherent or interesting place just through random oscillation. As an example, here is one of the trickier Atari games, called Seaquest. In Seaquest you control the submarine, and for some reason what you're supposed to do is shoot the fish and, like, pick up the divers, or maybe it's the other way around, I don't know, but something ecologically very unfriendly. But the submarine runs out of oxygen, so if it stays underwater too long, then you lose because you're out of air. So in order to play the game properly, what you're supposed to do is shoot all the fish, and then once the oxygen bar gets too low, come back up and recover some air. The problem is that if you're exploring randomly, then once you're at the bottom of the ocean it's extremely unlikely that you will randomly surface, because that requires randomly pushing the up button many times; in fact, you're exponentially unlikely to resurface once you're at the bottom. And due to the mechanics of the game, it's actually a little bit easier to play if you go a little deeper down, so this makes surfacing for air very hard to discover through epsilon-greedy exploration. When you explore with random Q functions, you commit to a random but internally consistent strategy for an entire episode. The Q functions might reach slightly different conclusions: for example, one of the Q functions in your ensemble might decide that going deeper is good, another one might decide that going up is good, and if you just randomly pick the one that decided that going up is good, then it will go up consistently and you will actually surface. You won't surface for air on every episode, but it's more likely to happen for one of your random samples, so then you would get a strategy where you would actually go up. In the experiments in the paper, they do show that this
bootstrap trick does actually help a fair bit on some games, although not others; it doesn't work very well on Montezuma's Revenge at all, for example. In general, this method doesn't work quite as well as good count-based exploration or pseudo-counts, but it has some major advantages. It doesn't require any change to the original reward function; in fact, at convergence you would expect that all of the Q functions in your ensemble will be pretty good, and you don't actually have to tune any hyperparameters to trade off exploration and exploitation. So it's quite simple and convenient; it's a very unintrusive way to do exploration. Very good bonuses often do quite a bit better, though, so this is not the best exploration method. In practice it's actually not used very much, simply because if you really have a difficult exploration problem, assigning bonuses will usually work better. But this is a fairly heavily studied class of exploration algorithms, and it's worth knowing about. |
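The "commit to one consistent strategy per episode" idea from this lecture can be illustrated with a tiny tabular sketch. Everything here is made up for illustration (a toy 3-state chain MDP, not the paper's actual setup): an ensemble of Q-tables stands in for the posterior, one member is Thompson-sampled per episode and followed greedily, and all members train on the shared off-policy data with their own TD targets, as in bootstrapped DQN:

```python
import random

# Toy posterior-sampling exploration: a 3-state chain where action 1 moves
# right (reward on reaching the right end) and action 0 resets to the start.
# One ensemble member is sampled per episode and followed greedily for the
# whole episode; every member is updated on the shared data (Q-learning is
# off-policy, so this is fine).

N_STATES, ACTIONS, GAMMA, ALPHA = 3, (0, 1), 0.95, 0.5

def step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else 0
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

rng = random.Random(0)
ensemble = [{(s, a): rng.random() for s in range(N_STATES) for a in ACTIONS}
            for _ in range(16)]

for _ in range(300):                       # episodes
    q = rng.choice(ensemble)               # one "posterior" draw per episode
    s = 0
    for _ in range(10):                    # steps per episode
        a = max(ACTIONS, key=lambda b: q[(s, b)])   # greedy under the sample
        s2, r = step(s, a)
        for qi in ensemble:                # shared data, per-member TD targets
            target = r + GAMMA * max(qi[(s2, b)] for b in ACTIONS)
            qi[(s, a)] += ALPHA * (target - qi[(s, a)])
        s = s2

print(max(ACTIONS, key=lambda b: ensemble[0][(0, b)]))  # greedy action at start
```

The contrast with epsilon-greedy mirrors the Seaquest story: random dithering needs several consecutive lucky "right" moves to ever see the reward, while a sampled member that happens to prefer "right" commits to it for the whole episode and discovers the reward much sooner.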
CS_285_Deep_RL_2023 | CS_285_Guest_Lecture_Aviral_Kumar.txt | He previously studied here with us at UC Berkeley, and actually a lot of the material that we covered in our offline RL lectures was work that was done in part by Aviral, and pretty much all the material in our RL theory lecture was copied from a lecture that Aviral made, so he's contributed to a very large portion of this class. Today he'll tell us about pre-training and utilizing large models with offline RL. Thank you. Yeah, so today I'll talk to you about offline reinforcement learning for pre-training and utilizing large models, and these are large models not in the context of NLP or language, but for robotics, for many decision-making problems down the line. So you've all studied offline reinforcement learning; I think there are two lectures on this now. The idea behind offline RL is simple: you don't want to learn by online trial-and-error interaction with the environment, but rather use existing interaction data for learning policies that can maximize reward. So this is the standard paradigm. You looked at many algorithms for doing this, model-based algorithms, Q-learning, etc., but I'll not go into those algorithms here; this is the general paradigm that all of those algorithms try to follow. Now, what this reminds me of, when I think about it in the context of general machine learning, is the standard ML pipeline of taking some training data, in this case existing offline data, training a model on this data, and then deploying this model on the real task. So this paradigm of offline RL adheres to the standard or typical machine learning pipeline of data, model, train it with some objective, and then deploy. But if you look at what we currently do in machine learning, what is becoming more and more popular and more and more utilized out there is a slightly different
paradigm, where the picture changes quite a bit. Rather than using data to train a model and then deploying the model, now we want to do something called pre-training, which is to take lots and lots of data, probably not very related to your downstream task, and train a general model on all of this data; then, when it comes to using this model, we want to run it not directly in the real world or directly on your real problem, but rather fine-tune it on a downstream task that we care about. So we want to learn generalist models by pre-training and then utilize these generalist models via some kind of fine-tuning on a downstream task, and the classic example of this is large language models, all sorts of foundation models. So what I wanted to talk about today is how you can move to a similar paradigm when you think about decision-making problems: how can offline reinforcement learning give you a way to realize this sort of paradigm for decision-making and control tasks? What I want to do now is not go away from this paradigm of taking existing data and producing policies that maximize reward, but rather think of offline RL as this other paradigm, which can take not just your given task data but all possible relevant data for your problem. This could include any kind of robot data that exists out there, any kind of gameplay data, any kind of data from hospitals for hospital decision-making problems, etc. All of that data put together, now not just to produce policies that are good at maximizing reward, but rather good pre-trained initializations, good features, good representations, good whatever you can think of, which is useful for fine-tuning into a downstream scenario that you care about. So what I want to talk about in today's talk is how you can realize such a
pipeline, how you can think of offline RL methods, and how you can extend all those recipes you have learned about in this class to enable something like this: my picture instead of the picture above. More specifically, let's dissect this high-level goal into three different parts, and to motivate those three parts, let's look at what the different components in this picture look like. If I want to go from the picture of taking some data, training a model on that, and producing a downstream policy, to the second picture, what do my components look like? The first component is being able to learn from arbitrary sources of data. If I want to realize this picture, I want to take not just data for my given task but arbitrary data sources and train on them. One concrete instantiation of this that we'll talk about today is the setting of using human video data for robotics: if I want to train a robot, but I don't just want to use robot data but lots and lots of internet-based video data, how can I use it for learning good policies? I'll talk about that. I'll also talk about scaling up: when there's more and more data, you want to use bigger and bigger models, so how can you enable these offline reinforcement learning methods to scale up, to be amenable to training larger models? I'll talk about some work along that axis. And I'll talk about some initial work we've done on building fine-tuning algorithms, algorithms that allow us to take general initializations and then train them with a limited amount of data for your given downstream problem, to make your initialization better and better at that given task that you care about. So: data, scaling, and algorithms. These are the three topics we'll talk about today. Let's get started. I decided to switch the order a bit, so I'm going to start with scaling first, and then we'll
talk about data and fine-tuning after that. So, scaling up. If you look at scaling in reinforcement learning, here's a plot that I copied from the SimCLR paper, which at this point is super old, and all these dots that you see on this plot are a number of self-supervised learning methods which were state-of-the-art at the time the SimCLR paper came out. If you look at most work in the reinforcement learning community, all of them usually train models that are very small; they train models that all lie within this gray little box out there on this plot, whereas supervised learning, or just normal self-supervised learning, has gone way beyond this particular box. So what I'm going to talk about today is obviously not that revolutionary; it doesn't move this box all the way from here to massive models, but it does allow us to go from whatever we had in the RL community at that time to models that are much bigger, about double the size of the models that we could train on. And remember that this was all done on not that much compute, so we obviously did not have compute to train the very big models, but it's something better than what existed out there at the time we were doing this work. So before I go into the technical content, I'll quickly set up some notation, and then we'll discuss the technical content. Remember that here we are dealing with this offline dataset, this interaction data that already exists. We'll assume that this data consists of tuples of four elements. Those four elements are the observation x_i, or the state that is visible to your learning algorithm; a_i, which is an action that was taken by the behavior policy, the data collection policy in your dataset; r_i, which is your instantaneous reward value, the reward value in your dataset for this
particular transition; and x_{i+1}, which is the next observation, or the next state, obtained when you executed this action at that particular state x_i. So this is just a transition of the kind that exists in typical replay buffers in off-policy Q-learning style algorithms. We have access to a dataset that is organized in the form of these transition tuples (x_i, a_i, r_i, x_{i+1}). Okay, so back to scaling now. To start motivating what the thing to do in scaling even is, like, should I just throw more data at a bigger model and the method would work? To understand whether that simple recipe just works, what we did was run a simple experiment, where we said, okay, rather than the standard domain of Atari games, which you guys must have done in the homework assignments, rather than training policies for one single Atari game, what if we try to make the problem harder, such that it requires larger models to perform well? And we made the problem harder by, instead of training one single policy on one game, training one single policy to play multiple games together. So this is just a simple, baby-step instantiation of scaling up the problem, by simply trying to train one policy to play multiple games together. Now, just this instantiation actually gives you quite a bit of complexity. You now need a method that can deal with, that can train on, about two billion data points, because you can concatenate all the datasets that exist for each game and you'll get two billion data points easily, and all this data is quite suboptimal, so you want to be able to take this data and produce a good, and hopefully better, policy than anything you have seen in this data. So it's a challenging problem. It turns out that if you take this problem and benchmark some very simple RL and control methods, you do find that when you simply do imitation learning or filtered imitation learning style
approaches, there's actually quite a bit of benefit when you scale up your model. This plot shows the performance increase that happens when you go from a smaller model to a larger model for the learning algorithm, and the blue bar that you see here is a supervised learning style algorithm, based on imitation. You see that when you go from a smaller model to a larger model, there's a huge boost in performance with imitation learning style methods in this particular scenario. But if you take off-the-shelf offline RL methods, in this case conservative Q-learning (I did not put the equation for it here, because I know you guys have covered it in the lectures), you find that with offline RL methods the amount of performance increase as a result of scaling your model capacity is actually quite low, and in fact, if you do this a lot, your performance might even start to degrade when you use a larger model. For those of you who are interested in the details, the model here is basically going from a smaller ResNet to a larger ResNet, because these games require you to train on pixel images and convert that into discrete actions, so you can use a ResNet architecture in this case. So clearly, the thing that is visible here is that the performance of the training algorithm, or the performance of the policy that you get, starts to decrease when you scale up model capacity for offline RL methods, whereas that's not quite the case with imitation learning methods, as you see up there. This was kind of a counterintuitive finding, and we didn't know what to do about it, so we tried to ask some basic questions that could help us understand why supervised learning methods can scale up well but RL methods may not be able to. In particular, rather than thinking about absolute reasons for why RL or offline RL methods may be hard to get to work with large models, we tried to do a
comparative understanding of why supervised learning methods can scale well but offline RL methods may not be very good at scaling. We did a comparative analysis, with the hope that if we can understand why this difference exists, we can hopefully somehow compensate for the difference in our learning algorithm when we run these offline RL methods. If you look at one line of the theoretical machine learning literature, they propose an explanation for why, with very large or very overparameterized models, meaning models with many more parameters than the intrinsic dimension of the data, even in such cases supervised learning methods can work pretty well, and this is the theory of what's called implicit regularization. The idea is this: suppose you are training a model with supervised learning; say the model has parameters theta and you're minimizing some loss function L(theta), and the biggest assumption here is that theta is very big, so you have many more parameters, meaning you have a very large model, and you're training this model on your data. It turns out that if you do this with gradient descent, and more particularly stochastic gradient descent, in supervised learning you end up finding solutions that don't just minimize this loss L(theta), but that minimize a regularized version of this loss. You end up finding solutions that minimize L(theta) plus this R(theta), and this R(theta) is not something that you added during training; it's what's called an implicit regularizer. It's a regularizer that comes up because your learning procedure, in this case stochastic gradient descent, ends up preferring certain solutions in your parameter landscape. Think of it this way: there are many possible solutions your learning could converge to, but you end up finding solutions that minimize this regularizer, because your optimizer, in this case SGD, ends up finding
those solutions over other possible solutions. One of the intuitive characterizations of this implicit regularizer term is a preference towards simple solutions. If you take an extremely overparameterized linear regression problem, say, so a supervised learning problem, what typically ends up happening is you find solutions with a low parameter norm, a low norm of the linear regression parameters that you're training, and that's an example of this implicit regularization phenomenon. You're finding solutions that are in some way simple, and these simple solutions tend to generalize well, even though you have lots of parameters in your model. So this is one of the explanations for why you can still find good solutions with supervised learning even though you have a very large model, without overfitting, without running into any other spurious issues: this implicit regularizer ends up finding good solutions. Any questions here, by the way? I realize that this might be something that's probably not covered in this class. Any questions here? Okay, great. So that's one answer for why supervised learning methods can work pretty well with large models, which is that this implicit regularization phenomenon ends up preferring solutions that perform well. What we tried to do in this line of work was to go back and see what this theory says for the case of offline reinforcement learning. Our hope, as I said earlier, was that if we can figure out the delta between offline RL and supervised learning, we can hope to address this problem of offline RL and scaling by taking insights from what has made supervised learning work pretty well with large models. So what we did was try to theoretically derive what the implicit regularizer
looks like for the case of offline RL, and in particular we are considering methods that train Q functions. So you have a neural network that produces a Q function as output, taking as input the state and the action. One piece of notation I'm going to define here: I'm going to consider the last-but-one layer features of this network, the last-but-one layer activations, as phi of x. They are the features that you've learned. This is maybe somewhat ad hoc, and you can define features in different ways as well, but for the context of today's talk, for simplicity of understanding, let's stick to that definition of what we'll call features. It turns out that in the case of reinforcement learning, there's a major difference between supervised objectives and Q-learning objectives that causes this regularization effect to manifest very differently for RL and supervised learning, and to understand where that difference comes from, I'm going to write down a few equations here. Typically, when you train with Q-learning, you take a Q function, a network Q theta, and you try to regress it to some targets; these are the target values for your network. In supervised learning, these target values are just constants, right? Imagine a regression problem: you're just going to regress to some constant values; let's say in this case it's just the reward for a given problem. But for Q-learning, or for offline RL, these values are actually computed with a previous copy of your own Q function, the one you are training, which is present in that objective out there on the left-hand side. So in some ways your target value depends also on the Q function that you are training, and the Q function that you're training affects the subsequent target values, creating a cyclic loop that's going to make things very
different from the learning dynamics of supervised learning, where you regress to fixed targets. Because of this difference, if you calculate the regularizer I was talking about, it ends up being very different for Q-learning, or offline RL, than for standard supervised learning. For standard supervised learning, as one example, under certain modeling assumptions you can show that the regularizer ends up preferring solutions with a low feature norm — the regularizer penalizes the norm of the features phi_theta from your network. This is actually good: if you think of things like weight decay, they're very similar in motivation. It's like weight decay, but on features instead of weights. On the other hand, for offline RL you now get a much more complicated regularizer, with two terms, which we'll go over in a bit. This regularizer is not the same as in supervised learning, and the reason is that regression to bootstrapped targets. The first term of this RL regularizer is actually the same: a term that penalizes the norms of the features you're training. That's a good term in some ways, because you're trying to learn low-norm features and not overfit in weird ways to your data. But the second term, if you look at it a bit more carefully, is the dot product between phi_theta(x_i, a_i) — your state-action at the current step — and phi_theta(x_{i+1}, a_{i+1}) — your state-action at the next step. It's something that couples the features of a given state-action (x, a) with those of the next state x_{i+1}. And if you think about it — the derivation isn't shown here — this is precisely something you would get from a
target-value-like term, because the target value queries the Q-function at x_{i+1}, whereas the Q_theta(x, a) up in the objective is queried at your current state and current action. So that's the intuition for where this term comes from. Now look at what this term does. It's a dot product between two vectors, phi_theta(x_i, a_i) and phi_theta(x_{i+1}, a_{i+1}), with a negative sign in front. Recall the equation from before: you're effectively minimizing the regularizer when you run SGD on that loss. So equivalently, here you'll be minimizing the RL regularizer. What does that mean for the second term? Because of the minus sign in front, it means maximizing the second term — maximizing the dot product of the features at (x_i, a_i) and (x_{i+1}, a_{i+1}). And that means you'll actually try to increase the lengths of these vectors, because an easy way to maximize the dot product of two vectors is to increase their lengths, if those lengths are not bounded. So in some ways this second term conflicts with the first: the first term tries to reduce the magnitude of the features, while the second term tries to increase it — phi of x_i appears in both terms here. So there's a clear difference between this RL regularizer and the supervised-learning regularizer you see up there. Any questions here? I did a bunch of the derivation orally, so — yes? [AUDIENCE: Is the implicit regularization actually, provably, approximately equal to that?] That's a good question — you should check out the paper. It is actually grounded. Remember that here we're dealing with stochastic gradients, in the sense that
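(As an aside, here is a small sketch of the two regularizer terms just described, and of the fix discussed next — adding the dot-product term back explicitly so it cancels the implicit one. All function names and the toy numbers are illustrative assumptions, not the paper's implementation.)

```python
# Hedged sketch of the two-term RL regularizer described in the talk:
# term 1 penalizes the feature norm (shared with supervised learning);
# the implicit term 2 is roughly  -sum_i phi(x_i,a_i) . phi(x_{i+1},a_{i+1}),
# so adding the dot product back as an explicit penalty undoes it.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def feature_norm_term(features):
    # Term 1: penalize the squared norm of each feature vector.
    return sum(dot(f, f) for f in features)

def dot_product_regularizer(features, next_features):
    # Explicit penalty on the dot product of consecutive features,
    # intended to cancel the implicit negative term.
    return sum(dot(f, fn) for f, fn in zip(features, next_features))

def regularized_loss(td_loss, features, next_features, coeff=1.0):
    # TD loss plus the explicit regularizer, weighted by a coefficient.
    return td_loss + coeff * dot_product_regularizer(features, next_features)

phis = [[1.0, 0.0], [0.5, 0.5]]
phis_next = [[2.0, 0.0], [0.0, 1.0]]
reg = dot_product_regularizer(phis, phis_next)            # 2.0 + 0.5 = 2.5
loss = regularized_loss(0.1, phis, phis_next, coeff=0.1)  # 0.1 + 0.25 = 0.35
```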
there's some noise implicitly, so you can't write down the exact, precise regularizer. But you can say that the overall regularization effect lies in a band around the solution you would get by minimizing it, and the volume of that band depends on the learning rate and other things dictated by the noise. [AUDIENCE: Thank you.] Yeah. OK, great. So the TL;DR of this is that the regularizer has two terms, where one term now conflicts with what you would have gotten in supervised learning. OK, so if this is what's happening — and we have clear reason to believe the supervised-learning regularizer is not that harmful, because supervised methods clearly do scale, whereas in RL our empirical results show these methods did not scale — then what can we do about it? We took a very empirical approach here, and this part is not something I can provably show. The empirical approach was very simple. It said: if this second term is the only extra term that exists in the case of RL and not in supervised learning, and this is somehow correlated with the fact that RL does not scale very well, at least so far, while supervised learning does, then what if we simply try to undo this term? What if we remove it by adding an explicit loss to the training algorithm that cancels it, so that the net regularization looks the same in supervised learning and reinforcement learning? We did exactly that: we took a Q-learning algorithm — in this case, any offline algorithm — and added this second term back as a regularizer to the training method. It turns out that just doing that, on this particular example we've been talking about, helped us improve scalability. So you
know, while these offline methods initially decreased in performance as you scaled up model capacity, if you add this regularizer to training and run the same offline algorithm, you can actually benefit from increased model capacity. This is the percentage increase in performance from scaling up model capacity, and you can clearly see it now looks very similar to how the performance of supervised learning scales these days. And not just that: if you look at the absolute performance of different methods — these include imitation learning; the best prior method based on Decision Transformers, which is a sort of filtered or fancy imitation; as well as just literally replaying trajectories from the data, that is, the average performance of the behavior policy — compared to all of these, we found that combining this regularization idea with conservative Q-learning, the offline approach we based this on, got much better performance on this particular benchmark. In fact, this was the first offline RL approach based on Q-learning to surpass the performance of the behavior policy using a large enough model on this particular task. The high-level intuition is that even simple-looking tasks like Atari games, which are so standard for RL, can be very hard when you combine them with scaling. It was very surprising to us that no existing method was able to train large enough models, even with all the best tweaks people have discovered in this domain, in the single-game setting with offline RL. But with just this regularizer and the same exact setup otherwise, we were able to get much better performance with large enough models in this setting. OK, any questions here? Yes? [AUDIENCE: Does this help with smaller models?] That's a good
question. We did try some small models. I would say it depends on how small you're going. If you take one single game and a regular-sized network, I think it does help, but if you take many games and a regular network — a regular network meaning a small enough model — it doesn't quite help. It's a little hard to say exactly, and the reason is that it's hard to quantify precisely when model capacity exceeds the intrinsic capacity of the data. This was much more reliably helpful when you have a large enough model, but if I were to do the counterfactual, it's hard for me to draw the line between where model capacity exceeded the data or not. [AUDIENCE: Can it ever hurt?] That's a good question. I don't think we ever found it to be harmful to performance. There's also one detail I did not cover: if you're adding this term back, you probably have a weighting coefficient, a hyperparameter, since it's a regularizer. You can also implement it as a normalization layer in your model, which I did not cover here and may discuss later. In that case you don't have a hyperparameter, and it won't hurt your performance — it's either neutral, where nothing happens, or it improves things when it works. OK, sounds good. So let's move on. The takeaways of this section are that large models, if you instantiate offline RL naively, can actually hurt; but simple regularization schemes — or, equivalently, schemes that undo the bad parts of, in this case, implicit regularization, or more generally anything that explains the difference between supervised learning and offline RL objectives and undoes the bad resulting terms
from such an analysis — can already give us methods that scale up pretty well, in this case. OK, so that's the end of this part. Let's go on to the next part. So far we talked about scaling, right? We were in a setting where we were thinking about Atari games; we still had all the data in the form of transitions, actions, and rewards — everything was given to us. But now I want to talk about a different question: what can you do when you have arbitrary data? What if your learning algorithm does not just see data organized in the form of transitions, but something else? For this, I'm going to talk about one special case: using human videos for robots. How can you pre-train robots with human videos? Before I go into human videos, let's look at what an offline-RL-based pre-training scheme for robots could even look like. If you only had, say, robot data, what would you do with offline RL for pre-training? A simple way would be to take a bunch of robot data — data already collected in various labs, which could come from human teleoperators collecting rollouts for you — and simply annotate the last state of every trajectory, or every rollout, as having a +1 reward, because these are demonstrations: a human actually showed how to solve the task. You can now take this data and run offline RL on it. This is just giant multi-task data: you throw it into a replay buffer, do task-ID conditioning — so you define, for each transition, which task it comes from as a one-hot vector — and run offline RL. Once you have this, you'll have a general network, potentially a general policy, such that when you're given some amount of data for a target domain, for
a target task that you care about — the pre-training tasks are probably not tasks you care about; they're just tasks that happen to exist in your dataset — when you have some limited amount of data for the task you do care about, you can mix batches from pre-training and this new data and simply fine-tune the model. So this is a very simple offline-RL-based recipe for pre-training robot policies: take some broad robot data, pre-train via offline RL on that data, and simply keep running the same offline RL algorithm on your task-specific data at test time, when you're supposed to utilize this pre-trained initialization. You can also do online fine-tuning if you want to, although that's not quite relevant for this section of the talk. This is a very general-purpose recipe for pre-training via offline RL, but so far it only uses robot data: data that shows you actions, that shows you rewards, organized in the form of trajectories. For this part, what we want to do is take these recipes and extend them to use human videos. Why do we want to use human videos? Well, if you think about all the data that exists out there, even if you include multi-task robot data — data from other robots — it's very tiny compared to the human video data that exists on YouTube or in well-curated datasets such as Ego4D. So it makes sense to utilize human video data, because it shows how humans interact with the real world, and that should be useful for robot control. But the challenge is that most of the video data out there has no actions in it. More so, humans are not robots — or robots are not humans: humans have five fingers, while typical robots mostly operate with parallel-jaw grippers. So there's a huge difference in capabilities and embodiment. What can we do? If I'm given such data, can I still keep running offline RL on
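(As an aside, the data preparation in the recipe just described — label the final state of each demonstration with a +1 reward and tag each transition with a one-hot task ID — can be sketched as follows. This is a hedged illustration; the function names and data layout are assumptions, not any lab's actual pipeline.)

```python
# Build a multi-task replay buffer from demonstration rollouts:
# reward is +1 only on the transition reaching the final state,
# and every transition carries a one-hot task-ID vector.

def one_hot(task_index, num_tasks):
    v = [0.0] * num_tasks
    v[task_index] = 1.0
    return v

def build_replay_buffer(demos, num_tasks):
    """demos: list of (task_index, [state_0, ..., state_T]) rollouts."""
    buffer = []
    for task_index, states in demos:
        task_id = one_hot(task_index, num_tasks)
        for t in range(len(states) - 1):
            # +1 reward only on the last transition of the demonstration.
            reward = 1.0 if t == len(states) - 2 else 0.0
            buffer.append({
                "obs": states[t],
                "next_obs": states[t + 1],
                "reward": reward,
                "task_id": task_id,
            })
    return buffer

demos = [(0, ["s0", "s1", "s2"]), (1, ["u0", "u1"])]
buf = build_replay_buffer(demos, num_tasks=2)
```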
this data? The answer is not quite clear, but in this line of work we showed that you can still keep running offline RL methods — not for learning policies at this point, but for learning useful representations or features. It turns out the very same Q-learning, value-learning algorithms we've talked about so far can still give you useful features or representations for robot control, if you're willing to pre-train on videos and then keep running the same scheme on robot data. So what does this look like? Here's a general pipeline you can follow. First, start with all the video data that exists out there and train some kind of value function on it — I'll talk about what value functions can be trained. This value function gives you a useful encoder of images, or frames of video, into representations: an encoder that compresses the information in images into useful vectors. You then take this encoder and use it to initialize an off-the-shelf offline RL algorithm for pre-training on robot data — you put Q-values and policies on top of this encoder — such that the policy you obtain by training on robot data can then be fine-tuned on your desired task. Remember the pre-training recipe I showed you: train on broad robot data, then fine-tune on target data. You just add a new phase before it, which trains the visual-encoder part of your network on videos. The loss function here simply trains a value function, which does not require actions, because a value function is just a function of your state, or your observation, and not the action — it's not a Q-function in this case. OK, so now the question is: why should this give you good features? Why should this give you useful encodings? Well, the answer is that if you train your visual encoder with value functions, you are implicitly accounting for the
dynamics of the world. When you train value functions with Bellman backups — with Bellman equations — they account for the dynamical, sequential structure of the world, which is presumably useful, because you're training a similar kind of object — a Q-function or a policy — in the second and third phases. That's why we would hope that training with value functions gives you useful representations, useful features, for the downstream policy, even though you don't have any actions in the video data. OK, before I go into how you can train these value functions, let me pause and see if there are any questions. Yes? [AUDIENCE question] Yeah, that's a good point — I'm going to cover that in just a second. OK, so now the point is: I talked about all these intuitions for value learning, but what kind of value functions can you even train on video? A simple value function is a function of the state that tells you the cumulative reward you would get by executing a given policy, pi in this case, for a given reward function — the value function sums up the cumulative reward for that reward function. Here I could choose to define some kind of reward function; I could just say my reward is whenever the human hand is close to a particular object, some specific function like that. But that's not quite good here, because it will still give me value functions that are very specific to the particular reward function I define. I can choose to define arbitrary reward functions, but they're going to be very specific to one particular capability, whereas downstream my robot will be required to do many other tasks that are not covered by that reward. So somehow I want to get away from this way of defining values — I don't want to have a reward function
at all. To do that, I can instead take a general formulation of value functions, such as goal-reaching values. I can say: I want to train a value function on video data that tells me the total value I would get by executing a given policy for reaching arbitrary frames — arbitrary goals — in the video. This is a very general reward function, because it's a goal-reaching reward function: I can define goals as any arbitrary frame in the videos I've seen, even frames not appearing in the same video clip at all. But that also ends up being quite specific, and the reason is that it only considers one policy. I'm only training the value function for one policy, which means I'm only learning features that can represent values for one particular policy pi, even though it's for multiple reward functions. So I don't want to do this either — it's too specific. The obvious next step is to say: what if I train these goal-reaching value functions to work for all policies? I want to train a network that can represent value functions for all possible policies pi and for all possible goals — for all possible goal-reaching reward functions. But this ends up being overly broad. The reason is that when you think about robots, they're not going to run arbitrary sequences of random actions out there; all they're going to do is run some very specific kinds of actions that accomplish tasks — not the arbitrary random actions that this formulation would still consider, because it trains value functions for all possible policies. So what we ended up doing was a slightly different formulation, where we model a value function
that only models the value — the total reward — you would get for reaching a particular goal g when your policies are optimal policies: optimal for some subset of tasks that you care about. Imagine you only had optimal goal-reaching policies, meaning that given a goal, these policies could reach that goal as efficiently as possible. You're only modeling value functions for those particular policies, but for all goal-reaching reward functions. So the choice of reward functions is every goal-reaching reward function, and the choice of policies is only the optimal policies, which are useful in some way, rather than arbitrary random policies. This strikes the right balance — or at least, empirically, a good balance — between breadth and specificity of the value function we're training. I'd encourage you to check out some more work: this is actually inspired by the first work here, which models this as an intent-conditioned value function; check that out for more detail, and there are many other related works out there as well. The high-level idea is that you want to choose a family of reward functions and a family of policies for which you model values, such that you get features that are useful for a breadth of downstream tasks, but not so broad that they're arbitrarily wide and not very useful. Does that answer your question? Yeah? OK. So what we did was simply use this value function — we put it into the pipeline. Your value function is now this value function for all the goal-reaching tasks, for certain optimal policies; you get your visual encoder and throw it into an offline RL pre-training scheme. You initialize your offline method with this visual encoder, you train a Q-function and policy — in this case it was conservative Q-learning — and now you're ready to take that initialization and fine-tune on
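(As an aside: for an optimal goal-reaching policy with a sparse goal-reaching reward and discount gamma, the value collapses to a simple form — under one common convention, gamma raised to the shortest-path distance from the state to the goal. This toy sketch, with an assumed 1-D chain of states, illustrates the quantity the video value function is asked to model for every frame-goal pair; it is an illustration of the general idea, not the paper's formulation.)

```python
# Toy model of the optimal goal-reaching value: states are integers on a
# line, one action moves by one step, reward is given on reaching the
# goal, so V*(s, g) = gamma ** distance(s, g) under this convention.

def optimal_goal_value(s, g, gamma=0.9):
    d = abs(g - s)          # shortest-path distance on the chain
    return gamma ** d

v_near = optimal_goal_value(4, 5)  # one step from the goal
v_far = optimal_goal_value(0, 5)   # five steps from the goal
# States closer to the goal get higher value, for every possible goal g.
```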
your target task, on which you will measure performance. OK, so before I go into some robot videos and quantitative evaluations: it turns out this scheme actually gives you useful features. Suppose you take trajectories in your dataset, and trajectories that are out of distribution for your dataset — if you have a robot dataset, you take some trajectories from it and some that are not in it — and you plot the learned values as a function of the time step at which a particular state appears in the trajectory. Think of the x-axis as the time step at which a state appears in a rollout in your robot data, and the y-axis as the value function, or Q-function, for that state. How do you read this plot? The ground-truth Q-values should increase as you go further into the trajectory — they should have a monotonically increasing trend as you increase the time step on the x-axis. And what we found was that our approach with videos — the first column, V-PTR — learned value functions that are very consistent with this monotonically increasing trend, even on OOD, out-of-distribution, data. There are some spikes here and there, but it's much better than not training on video data at all, which is this plot, and better than other methods for training on video data, which are these others. Here you see the values have a non-monotonic shape as you increase the x-axis value, but with our method they're only increasing. So it does learn useful features that are good for representing downstream values on the robot data. You can also measure this quantitatively: you can measure a mean squared error between the ground-truth value function and the values that
you learned, and again you find that using this approach with videos works and gives you the smallest mean squared error compared to the other approaches. Overall, these features are quite useful for learning downstream value functions. And when we actually ran experiments on real robots — when we took these policies fine-tuned on a given target task — we found that the policies generalize pretty well in terms of object and gripper variability. Let me play these videos. [VIDEO PLAYBACK] So, these other approaches here: the first three are different ways of training on video; PTR is not using video, just robot data; and this is our approach, which uses videos and robot data. You can see that our approach is in general much smoother and more robust than these other approaches, whether they train on video or don't use video data at all. The same thing is true with distractor objects in the scene. Quantitatively, we did a comprehensive evaluation, and we found that using videos with our approach — training these value functions — is better than either not using videos, which is this column, or other ways of using videos, which are all of these different prior methods. These other methods use self-supervised learning on videos: you train, say, a masked autoencoder to get a useful representation by compressing and reconstructing the images in your video frames, or you do some sort of contrastive learning. And all of those
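(As an aside, the monotonicity diagnostic described above can be summarized with a very simple metric — the fraction of consecutive time steps along a rollout where the learned value actually increases. This is a hedged sketch; the function name and example values are illustrative, not the paper's exact evaluation code.)

```python
# Diagnostic for learned values along a demonstration: ground-truth
# values should rise with the time step, so measure the fraction of
# consecutive pairs (v_t, v_{t+1}) where the value increases.

def fraction_monotone(values):
    """Fraction of consecutive pairs with v_{t+1} > v_t; 1.0 is ideal."""
    pairs = list(zip(values, values[1:]))
    if not pairs:
        return 1.0
    return sum(1 for a, b in pairs if b > a) / len(pairs)

good = fraction_monotone([0.1, 0.3, 0.5, 0.9])   # fully monotone
spiky = fraction_monotone([0.1, 0.6, 0.2, 0.9])  # one violation out of 3
```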
actually perform much worse than what this does. OK, any questions here? Great. OK, so I think I have nine minutes — let's see how much we can cover of this third section, which is about fine-tuning: utilizing these pre-trained initializations from offline RL. This is going to be a little more algorithmic, so I'm happy to talk about things at a high level, and I can take questions after six, because I think we need to leave the room at six. This part is based on a paper called Cal-QL, which is going to be presented at NeurIPS next month. OK, so the setting we care about here is online improvement of an offline RL policy. If I give you some data, you can run offline RL on it — this is giant pre-training data — to get a pre-trained policy initialization. Now I want to improve this policy via limited amounts of online interaction with the given task of interest: I want to specialize the policy on that task with limited amounts of actively collected online interaction. In this setting, if you were to take a first stab, a simple approach would be to take the same offline RL method and continue running it with new online, on-policy data coming into the replay buffer. If you do this, you'll find that most classes of methods show two different kinds of trends in their results. Just to explain what this plot means: on the x-axis I'm plotting the number of environment samples collected during online fine-tuning — the offline pre-training is done already, and now you're collecting new online data to improve, or refine, your policy. The y-axis is the performance — in this case score, or return; they're both the same. The performance at zero starts off quite high, because you've already done offline pre-training, and
your policy gets some nontrivial success — about 50% success. I'm plotting two algorithms: IQL, implicit Q-learning, and CQL, conservative Q-learning — I think both have been covered here. These algorithms are simply being run as you start collecting online data; the online data is put into the replay buffer of the offline algorithm, appended to the dataset it trains on. What you can see clearly is that two things are happening. First, with CQL, conservative Q-learning, there's a big dip in performance right when fine-tuning starts: your model was initially getting roughly 0.5, it suffers a big dip, and then recovers to get an overall improvement eventually. Whereas with IQL, the improvement is consistent, but it's much slower — it has a lower slope than CQL does eventually. Clearly something is not quite right here, because ideally I'd expect a curve that surpasses both of these: one that improves quite quickly, reaches the optimal performance of 1 as fast as possible, but is also not slow and has no dip. With this observation, we said something was wrong, and we wanted to dig deeper into understanding why this behavior happens for these different methods and what we can do about it to build a better fine-tuning algorithm. Specifically, restricting ourselves to the case of CQL, we tried to understand this unlearning phenomenon — why this performance dip happens right at the start of fine-tuning. We plotted a bunch of metrics — many, many metrics, not all of them shown here — and one metric we found quite correlated with the performance dip was the average Q-value. If you take your dataset, which includes the offline and online data, and plot the average Q-value of the learned Q-
network on that data, you find that this Q-value has a sharp correction phase, where it changes in magnitude precisely at the same time as the dip in return. This was an initial signal that the unlearning is related to shifts in the magnitudes of these Q-values — it's pretty much an adjustment in scale: your values were, say, -40, and now they become -10 as you train further. So we tried to understand why this happens. I want to first give you some intuition — it's a bit imprecise, but it does the job — and then show some concrete evidence. Think about what CQL does. I want to explain why these Q-values are super low and why they adjust in scale. First question: why are the values super low? CQL trains your Q-function to fit the target Q-values — that's the temporal-difference error term, the Bellman error — and it pushes down high Q-values on out-of-distribution actions. The TD term is a relative loss: all it says is that your Q-values must fit the target Q-values, which come from a previous snapshot of the same function. And this particular expectation is computed only over your dataset — not over all possible state-action pairs, which is what would make minimizing the loss give you the uniquely optimal Q-function; it's computed only over a limited set of samples. So if you plot the Q-value at a particular state as a function of actions — assume 1-D actions for now — and assume this is the ground-truth Q-function, you'll see that many different learned Q-functions, shown in red, can attain the same TD error. They all attain a similar relative loss, because relative to their own target values they have a similar difference. But CQL, because of its
absolute term, which minimizes the Q-value magnitude, is not going to find just any of these: it's going to find the Q-function with the smallest Q-values possible that attains the same TD error. And that's because the regularizer you're adding is an absolute loss — it simply minimizes the Q-values. So the high-level takeaway is that among many solutions that all have a similar shape — meaning the Q-values across actions look similar — CQL, or any pessimistic algorithm, will find the one with the smallest absolute value. That's why the Q-values are super low. Now, with this smallest-possible Q-function in hand, suppose you do online data collection: you query a new action and run it in the real world. When you collect data, you see the actual reward for that particular action, and when you update your Q-function on this data, you end up introducing erroneous peaks that look super high under your Q-function. This is because when you execute this action during exploration, you see an actual ground-truth reward value that you hadn't seen in your offline dataset — your offline dataset didn't show you any of that. With that Q-function, a bad action now appears very good, so if you run your policy optimization to find the peak of this Q-function, you're going to find a wrong action — a bad action that is actually not good — and degrade your policy's performance. Earlier, the policy was finding the peak of the red curve, which coincided with the peak of the blue curve, the ground-truth Q-function; now your policy finds the peak of the red curve, which is not the peak of the blue curve. So clearly some kind of unlearning happened: you were already pretty good, and now you're not going
to be as good in this case because you see an error I think I'm super short on time okay one more minute yeah okay any questions any questions here by the way um okay uh great so yeah so basically what we did our method was very simple it said well what if we can somehow prevent such peaks from occurring by simply making sure that my offline training procedure never finds such a low Q value function so never finds such a function which attains the smallest Q value as I saw here like if I were to find a function that looks like this particular curve like this first red curve then none of this will happen because my Q values are not that small so seeing the reward value here will not distort my Q function so that's precisely what we did we said well um rather than training with cql we can train with um something that prevents the Q values from being so low and the way we did this was to say that we took the same cql algorithm and imposed a constraint that said that the Q values that you're learning should never be lower than some reference Q function that you could specify exactly you just constrain you lower-bound your values to always be above that orange line in that case in which case now none of this issue happens because even when you update your Q values on those actions you're going to have a small little dip and come back up there but that's still going to be lower than the highest point in this Q function that you train which is shown in red here um so the bad action does not appear any more optimal um than it was in the earlier part and it turns out that a very simple choice for this reference Q function is simply the Q function of the behavior policy so the Q function of your data set or data generating policy which you can compute with relative ease without the need for any Q learning at all simply compute the return-to-go estimates and use them to lower bound your Q function in
that picture and that works pretty well um okay I think I just had results here um yeah so the upshot of the results is that this you know kind of alleviates this issue so now you get curves here that improve over the course of training that don't have this dip as much um and are better and faster than other methods including methods such as IQL that I described the first time um yeah and then you can utilize more gradient steps here um I guess I'm going to skip the details but that will further improve efficiency uh and in fact uh this gives you the smallest regret the smallest cumulative regret over the course of training which is a standard measure for measuring sample efficiency of different algorithms and if you're also interested you should check out some other recent work that came out from Stanford that takes this approach and uh uses it in conjunction with VMS and so on for uh you know actually doing this on a real robot we also did something on a real robot where we took this microwave door opening task and uh you know we tried to online fine-tune it so the offline policy is kind of okay it reaches with the gripper but then is unable to open it and if you now do things with Cal-QL in this case uh you can slowly get behavior that you know now reliably touches the door and then eventually sort of opens the door fully uh in 20K steps so this is fully online the policy is running online collecting its own data and improving over it but you start from the offline initialization so here unlearning matters a lot because if you unlearn then within 20K steps you'll not be able to recover and get back a good solution but now you can there's no unlearning because you're constantly improving and that kind of shows the point that this method was designed to address yeah uh okay so that's sort of the end so you know the challenge in fine tuning typically stems from slow policy improvement or this initial unlearning
um you know a simple way to address this and to get a curve that is not slow and does not unlearn but rather looks like this blue curve is to calibrate the scale of these Q values by simply saying they should not be as low as you know what you can get from cql um and that kind of gets you a best-of-all-worlds sort of algorithm um like this blue curve here yeah that's basically it um I think I was a bit fast but happy to take any questions um after this yeah doesn't seem like we're getting kicked out yet that means we have some time for questions yes the part of your talk about implicit regularization does that apply in general for stochastic optimizers and similarly is there any work on specific optimizers for RL that would perhaps better control the regularization that's a great question um so I'm not entirely sure about so does it apply in general to stochastic optimizers yes I personally am not very sure about exact work on stochastic optimizers for RL but I think that could be a great direction to work on yeah um I think the bigger question that I would have personally there would be um first of all I guess you need a characterization of what exact terms matter um and I think ours was kind of one of the very few first works probably which characterized something like that in any setting for RL it was still in a simpler setting there were some assumptions made um but I think you need that to be able to then derive the right optimizer but yeah I think there's some work now on characterizing implicit regularizers so you can start off by building new optimizers definitely yeah any other questions yes is it just a coincidence that the two parts of the implicit um regularization are weighted the same that's a good question so um yeah I think I probably glossed over that there so there's a gamma discounting there which probably makes sense now I think so um yeah so you know I didn't put it up here because I did not introduce gamma notation at all so there's
a gamma in there but like basically what ends up happening is um you know if you had a gamma here any other questions let's give a round of applause and see you all for our final guest speaker this Wednesday when D City from Stanford will be speaking about Interactive Learning |
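The calibration idea described in this lecture, lower-bounding the learned Q values by the behavior policy's return-to-go, can be sketched in a few lines. This is a minimal illustrative toy under stated assumptions, not the speaker's actual implementation; the function names, tabular setup, and discount value are all my own choices.

```python
# Sketch of the calibration idea from this lecture: the conservative TD
# target is clamped from below by a reference Q value, here the behavior
# policy's Monte Carlo return-to-go. Illustrative only.

def return_to_go(rewards, gamma=0.99):
    """Discounted return-to-go at every step of one behavior trajectory."""
    rtg = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg

def calibrated_target(td_target, reference_q):
    """Never let the learned Q target drop below the reference Q value,
    so offline training cannot drive the Q function arbitrarily low."""
    return max(td_target, reference_q)
```

For example, a very pessimistic target like -10 would be clamped up to the behavior return-to-go, which is the constraint shown as the orange line in the lecture's figure.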
CS_285_Deep_RL_2023 | CS_285_Lecture_6_Part_3.txt | all right next we're going to talk about some design decisions for actually implementing actor critic algorithms so we'll start with the discussion of neural network architectures in order to actually instantiate these algorithms as deep rl algorithms we have to pick how we're going to represent the value function and the policy so before in the last lecture we just had the policy to deal with now we have to represent both of these objects and there are a couple of choices we could make so one very reasonable starting choice and this is the one that i would recommend if you're just getting started is to have two completely separate networks so you have one network that maps a state to the value and then you have another really separate network that maps that same state to the distribution over actions and these networks have nothing in common this is a convenient choice because it's relatively simple to implement and it tends to be fairly stable to train the downside is it may be regarded as somewhat inefficient because there's no sharing of features between the actor and critic this could be a more important issue if for example you are learning directly from images and both these networks are convolutional neural nets maybe you would really want them to share their internal representations so that for example if the value function figures out good representations first the policy could benefit from them in that case you might opt for a shared network design where you have one trunk maybe this represents the convolutional layers and then you have separate heads one for the value and one for the policy action distribution this shared network design is a little bit harder to train it can be a little bit more unstable because those shared layers are getting hit with very different gradients the gradients from the value regression and the gradients from the policy gradient they'll be on different scales they'll have
different statistics and therefore it might require more hyperparameter tuning in order to stabilize this approach but it can in principle be more efficient because you have these shared representations now there is another important point that we have to discuss before we get an actual practical deep reinforcement learning actor critic method and that's the question of batch sizes so as described here this algorithm is fully online meaning that it learns one sample at a time so it takes an action gets a transition updates the value function on that transition and then updates the policy on that transition and both updates use just one sample now we know from the basics of deep learning that updating deep neural nets with stochastic gradient descent using just one sample is not going to work very well so those updates are going to have a little too much variance so these updates will all work best if we have a batch and one of the ways that we could get a batch is by using parallel workers so here's the idea the most basic kind of parallelized actor critic is a synchronized parallel actor critic instead of having just one data collection thread instead of just running one simulator you might run multiple simulators and each of them will choose an action in step one and generate a transition but they're going to use different random seeds so they'll do things that are a little bit different and then you will update in step two and step four using data from all of the threads together so the update is synchronous meaning that you take one step in step one for each of the threads then collect all the data into your batch and use it to update the value function and then use it to update the policy synchronously and then you repeat this process so this will give you a batch size equal to the number of worker threads it can be a little bit expensive right because if you want a batch size of like 32 then you need 32 worker threads but it does work decently well now it can
be made even faster if we make it into an asynchronous parallel actor critic meaning that we basically drop the synchronization point so now we have these different threads that are all running at their own speed and when it comes time to update what we're going to do is we're going to pull in the latest parameters and we're going to make an update for that thread but uh we will not actually uh synchronize all the threads together so just as soon as we accumulate some number of transitions let's say we get 32 transitions from all the workers we'll make an update now the problem with this approach of course is that the actual transitions might not have been collected by exactly the same parameters so if one of the threads is lagging behind maybe its transition was generated by an older actor and then you will basically not actually update until you get transitions from faster threads and those will be using a newer actor so in general all of the transitions that you're pulling together into your batch here may have been generated with slightly different actors now they're not going to be too different because these threads aren't going to be running at such egregiously different rates but there will be a little bit of lagging behind so an obvious question to ask here is well is this kind of update the asynchronous update mathematically equivalent uh to the standard synchronous update and the answer is that it isn't that you have this small amount of lag which is similar to what you get with asynchronous sgd but in practice it usually turns out that making the method asynchronous uh leads to gains in performance that outweigh the bias incurred from using slightly older actors the crucial thing here is slightly older right because the actors are not going to be too old if they're too old then of course this won't work but as long as none of the threads hang up then you'll be okay um but this might get us thinking about another question well in the asynchronous
actor critic algorithm the whole point was that we could use transitions that were generated by slightly older actors if we can somehow get away with using transitions that were generated by much older actors then maybe we don't even need multiple threads maybe we could use older transitions from the same actor basically maybe we could use a history and load in transitions from that history and not even bother with multiple threads and that's the principle behind off policy actor critic so the design of off policy actor critic is that now you're going to have one thread and you'll update with that one thread but when you update you're going to use a replay buffer of all transitions that you've seen and you will actually load your batch from that replay buffer so you're actually not going to necessarily use the latest transition you'll collect the transition store it in the replay buffer and then sample an entire batch from that replay buffer maybe 32 transitions rather than just one and update on that batch now at this point we have to modify the algorithm because doing this naively won't work this batch that we loaded in from the replay buffer definitely came from much older policies so it's not like the asynchronous actor critic before where the transitions came from just slightly older actors and we could just ignore that now it's coming from much older actors and we can't ignore that we have to actually change our algorithm okay so when i say replay buffer basically i just mean a buffer that contains transitions that we saw in prior time steps the most uh straightforward way to implement a replay buffer is to implement it as a ring buffer a first in first out buffer where you batch up let's say one million transitions i will say here that we will discuss replay buffers much much more in a subsequent lecture so don't get too caught up on this for now it's just a buffer that stores all the data all the experience you've seen so far and then of course we're going to
form a batch for each of these updates by using previously seen transitions okay so let's see what this might look like in an off policy actor critic algorithm we're going to take an action as usual from our latest policy get the corresponding transition but then instead of using that transition for learning we'll actually store it in our replay buffer then we will sample a batch from that replay buffer so this notation denotes a set of n transitions each of them indexed with i it might not even contain our latest transition so when we load this batch from the buffer it might not contain that latest transition that we sampled and that's okay and then we're going to update our value function using targets for each of these transitions in our batch so we have uh capital n transitions which means we have capital n targets so we're going to compute the gradient of our loss averaged over the batch so n here is the batch size it's not the total buffer size it's just the size of the batch so it might be 32 or 64.
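The first-in-first-out ring buffer described above can be sketched in a few lines. This is a hypothetical minimal version, not the course's reference implementation; the capacity is just the one-million figure mentioned in the lecture, and `collections.deque` evicts the oldest transition once full.

```python
import random
from collections import deque

# Minimal first-in-first-out replay buffer: a ring buffer of transitions
# with uniform minibatch sampling. Illustrative sketch only.

class ReplayBuffer:
    def __init__(self, capacity=1_000_000):
        self.storage = deque(maxlen=capacity)  # oldest transition evicted when full

    def add(self, s, a, r, s_next):
        self.storage.append((s, a, r, s_next))

    def sample(self, batch_size):
        # uniform sampling without replacement from the stored transitions
        return random.sample(list(self.storage), batch_size)
```

Sampling a batch of 32 or 64 transitions from this buffer is what produces the averaged critic and actor updates discussed above.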
then we'll evaluate our advantage again for each of the samples in our batch and then we'll update our policy gradient by using that batch so now the policy gradient is also averaged over n samples and then we'll apply the policy gradient like before so this algorithm is not going to work the way i described it's actually quite broken and we have to do a bunch of things to fix it one thing that i would recommend as an exercise here is to pause the video look at this algorithm and try to guess where it's broken i'll tell you right now it's broken in at least two places meaning that in at least two places in the pseudocode there's something that doesn't make sense try to pause the video and find it and then you can resume and i'll tell you what it is okay so the first problem is that when you load these transitions from the replay buffer remember that the actions in those transitions were taken by older actors so when you use those older actors to get the action and compute the target values that's not going to give you the right target value it'll give you the value of some other actor not your latest actor and that is not what you want so formally the issue is that a i did not come from the latest pi theta it came from some older pi theta and therefore s i prime also was not the result of taking an action with the latest actor and that's a problem the second issue is that for that same reason because ai didn't come from the latest policy pi theta you can't compute the policy gradient this way remember from the previous lecture it is very very important when computing the policy gradient that we actually get actions that were sampled from our policy because this needs to be an expected value under pi theta if that is not the case we need to employ some kind of correction such as importance sampling and we could actually do this with importance sampling but it turns out that there's actually a better way to do it for off policy actor critic which i will tell
you about next but first let's talk about fixing the value function so i'll first fix the problem in step three and then i'll fix the problem in step five so to fix the problem in step three instead of working with value functions let's instead think back to lecture four where we also introduced this notion of a q function if the value function tells you the expected reward you will get if you start in state st and then follow the policy pi the q function tells you the reward you'll get if you start in state st then take action a t and then follow the policy pi now notice here that there's no assumption that the action a t actually came from your policy so the q function is a valid function for any action it's just that in all subsequent steps you follow pi so what we're going to do to accommodate the fact that our transition s i a i s i prime did not come from our latest policy pi theta is that we will actually not learn v but we will instead learn q so we will not keep track of v hat pi phi we'll keep track of q hat pi phi it's going to be a different neural network it'll take in a state and an action and output a q value but otherwise the principle behind the update is the same so we're going to compute target values and then we will regress onto those target values it's just that now we'll give the action as an input to the q function another way to think about it is we can no longer assume that our action came from our latest policy pi theta so we'll instead learn a state action value function that is valid for any action so that we can train it even using actions that didn't come from pi theta but then query it using actions from pi theta okay now those of you that are paying attention might notice that there's a little bit of an issue here because before i was learning v-hat and i was using v-hat in my targets and that's okay because i'm learning v-hat so i have it available to me to use as my targets but now i'm learning q-hat but i still need v-hat for my
target values so where do i get that well remember that the value function can also be expressed as the expected value of the q function where the expectation is taken under your policy so what we can do is we can replace the v in our target value with q evaluated at the action a i prime except that a i prime now is not the action from our replay buffer ai prime is actually the action that your current policy pi theta would have taken if it had found itself in si prime so you'll actually sample s-i-a-i-s-i prime from your replay buffer but then you will sample ai prime by actually running your latest policy and you can do that because your policy is just a neural network you don't have to actually interact with a simulator to ask the policy what action it would have taken so it's a little trick that we're pulling here we're actually exploiting the fact that we have functional access to our policy so we can ask our policy what it would have done if it had found itself in this old state si prime even though it never actually happened so then we get this action ai prime and we plug it into the q value and that gets us a target value that actually represents the value of the latest policy at this old state si prime that's really cool okay so we've resolved our issue with the value function instead of learning v we're going to learn q and we're going to exploit the fact that we can evaluate the value function as just the expected value of the q function under the policy now how are we going to deal with step 5 how are we going to deal with the policy gradient well all we're going to do is we're going to use the same trick but this time we're going to use it for ai instead of ai prime so in order to evaluate the policy gradient we need to figure out an action sampled from the latest policy pi theta at the state s i but of course we can do that we can just ask our policy what it would have done at the state si if it had had the option to act there and we'll
call this action ai pi to differentiate it from ai so ai was actually from the buffer and ai pi is what the policy would have done if it had been in the buffer state si and now we'll just plug in this ai pi into our policy gradient equation and that's now correct because ai pi did in fact come from pi theta so this is in fact an unbiased estimator of expectations under pi theta so remember ai pi here is not the action from the replay buffer it's the action sampled from your policy at the state from the replay buffer now in practice when we do this kind of off policy actor critic we don't actually use the advantage values we just plug in our q hat directly into this equation we don't have to do it we could actually calculate advantages there's nobody stopping us from doing that but it turns out that it's very convenient to just plug in q values they have higher variance because they're not being baselined but higher variance is actually okay here why is that well it's because we don't need to interact with a simulator to sample these actions so it's actually very easy to lower our variance just by generating more samples of the actions without actually generating more sampled states so it doesn't require any simulation it just requires running the network a few more times so in practice we're actually okay with a higher variance here because in exchange we get a larger batch size and it's all good and it spares us the complexity of computing the advantage in step four so we're actually going to completely drop step four for off policy actor critic algorithms and we'll use q-hat instead of a-hat which is still unbiased it just doesn't have the baseline so that gives us the more or less complete algorithm for off policy actor critic what else is left well there is still a little bit of an issue because s i the state that we're actually using didn't come from the state marginal of the latest policy it came from the state marginal of an old policy
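As a toy illustration of these two fixes, here is a sketch in a discrete setting: the critic target resamples a' from the current policy at the buffer state s', and the actor term queries Q at a freshly sampled a_pi, so neither reuses the stale buffer action in those roles. The dictionary-based policy and Q function are made-up stand-ins for neural networks, not the course's actual implementation.

```python
import random

# Toy sketch of the two off-policy actor-critic fixes described above.
# policy maps a state to a list of candidate actions (a stand-in for
# sampling from a stochastic policy network); q maps (state, action)
# pairs to values. Both are illustrative assumptions.

def q_target(r, s_next, q, policy, gamma=0.99, rng=random):
    a_next = rng.choice(policy[s_next])      # a' ~ pi(. | s'), the CURRENT policy
    return r + gamma * q[(s_next, a_next)]   # target uses Q(s', a'), not the buffer action

def actor_signal(s, q, policy, rng=random):
    a_pi = rng.choice(policy[s])             # a_pi ~ pi(. | s), freshly sampled
    return q[(s, a_pi)]                      # plugged in where the advantage would go
```

Because sampling `a_pi` only requires running the policy, not the simulator, drawing extra action samples to cut variance is cheap, which is the point made above about tolerating the un-baselined Q values.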
unfortunately there's basically nothing we can do here so this is going to be a source of bias in this procedure and we'll just have to accept it the intuition for why it's not so bad is because ultimately we want the optimal policy on p theta of s but we get the optimal policy on a broader distribution so our replay buffer will contain samples from the latest policy as well as many samples from other older policies so the distribution is sort of broader than the one we want so we're not going to miss out on the states from our latest policy we just also have to be good on a bunch of other states which we might never visit so we're doing kind of extra work but we're not missing out on important stuff and that's the intuition for why this basically tends to work okay so a few details here if you actually read some papers and i'll reference a paper here shortly that implement this procedure one of the things you'll notice is that oftentimes there are much fancier things we can do for step four for example one thing we could use is something called the reparameterization trick which i'll discuss in the second half of the course much later so don't worry about it for now but that can be a better way to estimate this integral there are also many fancier ways to fit the q function and we'll discuss this in the next two lectures when we talk about q learning so i described a very naive way to fit the q function but there are actually better ways to do it if you want an example of a practical algorithm that builds on this idea check out the algorithm called soft actor critic this is actually one of the most widely used actor critic methods today so although the online value based actor critic methods are more classical the off policy q value-based actor critic methods are more commonly used and we'll also learn about algorithms that do this kind of thing with deterministic policies later so this is for a stochastic actor later on when we talk about q-learning
we'll actually revisit off-policy actor-critic methods also with deterministic actors |
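The synchronized parallel data collection described earlier in this lecture can also be sketched in a few lines: several workers with different random seeds each take one step, and their transitions are pooled into one batch for the update. The trivial simulated environment below is purely an illustrative assumption, not anything from the course codebase.

```python
import random

# Toy sketch of synchronized parallel actor-critic data collection.
# Each worker owns a copy of a made-up one-dimensional environment
# seeded differently, so the workers diverge over time.

class ToyEnv:
    def __init__(self, seed):
        self.rng = random.Random(seed)   # different seed per worker
        self.state = 0.0

    def step(self, action):
        reward = -abs(self.state - action)
        next_state = self.state + self.rng.uniform(-0.1, 0.1)
        transition = (self.state, action, reward, next_state)
        self.state = next_state
        return transition

def collect_synchronous_batch(workers, policy):
    # one synchronized step: every worker acts once, then all transitions
    # are pooled into a single batch for the critic and actor updates
    return [env.step(policy(env.state)) for env in workers]
```

With 32 workers this yields a batch of 32 per synchronized step, matching the batch-size-equals-thread-count point made in the lecture.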
CS_285_Deep_RL_2023 | CS_285_Lecture_18_Variational_Inference_Part_1.txt | all right welcome to lecture 18 of cs285 in today's lecture we're going to do something a little different than usual instead of covering any new reinforcement learning algorithms we're actually going to talk about variational inference and generative models this is a little bit of a break from our usual material because we won't cover any reinforcement learning algorithms today but i wanted to have an entire lecture on variational inference in this class because the concepts of variational inference come up again and again in a variety of reinforcement learning topics including model based reinforcement learning inverse reinforcement learning exploration and others and more generally variational inference has a very deep connection to reinforcement learning and learning based control and we'll learn about this next week so in today's lecture we're going to talk about probabilistic latent variable models what they are and what they're for we'll talk about how variational inference can allow us to tractably approximate training of probabilistic latent variable models and then we'll talk about something called amortized variational inference which is a very useful and powerful tool to utilize variational inference together with function approximators like deep neural networks and then we'll conclude with a discussion of some example models that we could train with amortized vi including variational auto encoders and various sequence level models that are useful in model based rl so the goals for today's lecture will be to understand latent variable models in deep learning and understand how to use amortized variational inference in order to train them all right so let's start with a very basic kind of overview those of you that are already familiar with this material may want to skip ahead but i wanted to make sure to start at the very beginning to make sure that everyone is kind of at the 
same level in terms of notation terminology and so forth so what is a probabilistic model a probabilistic model is a very general term for a model that represents a probability distribution so if you have some random variable x then p of x can be represented by probabilistic model take a moment to think about the kind of probabilistic models that we've already encountered in this course what are some examples that we've already seen so if you just have a random variable x and you want to model p of x maybe you have some data so those orange dots represent x's that you've actually observed modeling p of x means fitting some distribution to them for instance you might fit a multivariate normal distribution to try to represent this data now probabilistic models could also be conditional models so for instance you could have a model p of y given x in this case maybe you don't care about modeling the distribution over x but you care about modeling the conditional distribution over y given x so if you have some inputs x on the x axis and some outputs y on the y-axis you might fit a conditional gaussian model a model that represents y as in this case a linear function of x with some additive gaussian noise now we've definitely seen models like this before take a moment to think back to when in this class we've seen conditional probabilistic models so one example of this of course is policies policies are conditional probabilistic models they give us a conditional distribution over a given s all right so now the main topic of today's lecture is actually going to be something called latent variable models latent variable models are a particular type of probabilistic model formally a latent variable model is just a model where there are some variables other than the variables that are the evidence or the query so in p of x there is no evidence and the query is x in p of y given x the evidence is x and the query is y if you have a latent variable model that means that there 
are some other variables in the model that are neither the evidence nor the query and therefore need to get integrated out in order to evaluate the probability that you want a very classic example of a latent variable model that people use to represent p of x is a mixture model so in this picture we have data uh that is organized into three very clear clusters now a priori we're not told what these clusters are so the clusters here are color coded but the data is not actually color coded the data is just a collection of points and you want to represent a distribution that accurately fits that data now here it turns out to be very convenient to represent this distribution with a mixture model consisting of three multivariate normals this is a type of latent variable model take a moment to think about what the latent variable here is so in this case the latent variable is actually a categorical discrete variable that can take on one of three values corresponding to the three cluster identities and we can represent this latent variable model as a sum over all the possible values of the latent variable of the conditional distribution over the variable that we're modeling which is x given the latent variable z times the probability of that z so here p of x is given by sum over z of p of x given z times p of z z is a categorical variable that takes on one of three values corresponding to the identity of the cluster and x is a two dimensional continuous variable corresponding to the actual location of the point we could do the same exact thing for conditional models we could say that p of y given x is given by a sum over our latent variable z of p of y given x comma z times p of z now there are other ways to create this decomposition you could for example say that p of z also depends on x so you could have p of y given x comma z times p of z given x you could even have the conditional over y not depend on x so you could have p of y given z times p of z given x those
are all valid decompositions and that's a design choice that we make if we want to stick with a discrete categorical variable z one example of a model like this that we've actually already seen before is the mixture density network which is the model that we discussed when we talked about imitation learning and how we might want to do multimodal imitation learning in order to deal with multimodal situations like driving around the tree so back in the second lecture of the course we learned about how we could have neural networks that output distributions represented by mixtures of gaussians so the neural network outputs multiple mus and sigmas one for each mixture element and multiple w's okay let's say the input to the network is x and the output is y and the latent variable again is the identity of the cluster take a moment to think about what the probabilistic model corresponding to this picture on the right side of the slide actually is so it's representing p of y given x as the sum over z of p of y given something times p of z given something what is the sum well in this case our neural network is actually outputting the means and covariances of the gaussians and is also outputting the w's the probabilities of each mixture element so in fact this model is given by sum over z of p of y given x comma z times p of z given x so it's actually a little different than the equation i've written here in the picture right there the p of z actually depends on x so it's a design choice that we make all right so in general if you have a latent variable model you could think about it like this you have some complicated distribution over x represented by this picture so p of x is some complicated thing you have some prior over z p of z typically we would choose this prior to be a simpler distribution maybe z is categorical so p of z is a discrete distribution or maybe z is continuous but p of z is some very simple class of distributions like a gaussian
distribution and then we might represent the mapping from z to x the p of x given z as some simple conditional distribution so p of x given z maybe could be a neural network where the mean is given by a neural net function of z and the variance is given by a neural net function of z now those functions might be very complicated but the actual distribution x given z could be very simple it could be for example a gaussian a normal distribution so p of z is a simple distribution p of x given z is also a simple distribution the parameters of that simple distribution might be complicated but the distribution is simple for example uh something that can be parameterized explicitly this is a very important point to understand especially what i mean by the word simple here p of x is not simple because it is very hard to find a single parametrization like a gaussian distribution or a beta distribution that perfectly captures p of x p of z is simple because a simple distribution like a gaussian captures it perfectly p of x given z is also simple because you could represent it with a gaussian distribution although the parameters of that gaussian distribution may be very complex now of course i'm using gaussian distributions as an example here these could be different kinds of distributions different parametrizations they could be discrete or continuous uh so this is just an example but in general p of x would be given by some sum or integral over all possible values of z of p of x given z times p of z so what's going on here is that both p of z and p of x given z are simple but their product when you integrate out z could be some very complex distribution and this is a very powerful idea because it allows us to represent complicated distributions as products of simple distributions that we can learn and parameterize all right so we have two easy distributions multiply them and integrate out z uh the same exact thing can happen in the conditional case so uh
in the example i had before conditional latent variable models for multi-modal policies uh you could have a gaussian mixture on the output or more generally you could have some latent variable let's call it z that serves as an additional input into the model and you have some prior p of z and you have your conditional p of y given x comma z and uh the same exact logic as on the previous slide would apply so p of z would be simple p of y given x comma z would be simple but the result of integrating out z meaning the resulting distribution p of y given x could be extremely complex another case where these kinds of things come up is model based reinforcement learning so you could have latent variable models in model-based reinforcement learning we already saw an example of this when we talked about model-based rl with images so we saw these examples of latent state models where you observe images o and you want to learn a latent state x that depends on actions u and here we actually have a more complex latent space so we have our observation distribution p of o given x and our prior p of x uh actually models uh the dynamics it actually models p of x t plus one given x t and p of x one so the prior on x is much more structured and more complex the latent space for these models has structure and we'll revisit this at the end of the lecture so if this part is not entirely clear don't worry about that we'll come back to it all right so now we've learned about what latent variable models are what they're for why we want to have them we'll see latent variable models in other places so next week we'll also talk about how we can use reinforcement learning together with variational inference to actually model human behavior so instead of saying given a reward function what is the optimal way to act you can instead say given data of a person doing something can we sort of reverse engineer what the person is trying to do can we infer their objective function infer
something about their thought processes and this is common both in imitation learning domains and also in the study of human behavior in neuroscience and motor control we also see latent variable models and generative models in exploration so when we talked about exploration we actually briefly alluded to this we discussed how we can use variational inference techniques for things like information gain calculations and how we use generative models and density models to assign pseudo counts and count-based bonuses so these kinds of generative models and latent variable models come up all the time in the study of reinforcement learning by the way when i use the term generative model just to clarify the terminology a little bit a generative model is a model that generates x so p of x is a generative model a latent variable model is a model that has latent variables not all generative models are latent variable models and not all latent variable models are generative models however usually it is much much easier to represent generative models as latent variable models because a generative model typically needs to represent a very complex probability distribution and it is much much easier to represent a complex probability distribution as a product of multiple simple probability distributions so for that reason while generative models do not need to be latent variable models oftentimes it's very convenient when we have a complex generative model to model it as a latent variable model all right so now let's get to the meat of the lecture let's talk about how it is that we can train latent variable models and why this is difficult so let's say we have our model p theta of x so theta here represents the parameters of a model and we have data x1 x2 x3 etc through xn when we want to fit the data what we typically want is a maximum likelihood fit so the most natural general modeling objective is to set theta to be the argmax of one over n times the sum over all of your data
points of log p theta x i so if you find theta that maximizes the log probability of all of your data points you will have found what is called the maximum likelihood fit which in some sense is kind of the best model you could have for your data and your p of x is given by the integral over z of p of x given z times p of z so if i substitute this equation for p of x into the maximum likelihood fit i get this training objective now of course the first thing that you might notice is that this training objective is quite difficult to calculate if z is a continuous variable calculating this integral every time you want to take a gradient step gets to be pretty intractable so we can't really do this directly in some very simple cases we could for example if we have a gaussian mixture model we can actually sum over all the mixture elements it turns out that algorithm is still not very good because it ends up having very poor numerical properties so even in cases where we can estimate this integral we oftentimes don't want to because the resulting optimization landscape is very nasty but with continuous variables we might not even have that choice we might not be able to estimate that integral accurately even if we wanted to all right so how can we estimate the log likelihood and the gradient of that log likelihood in a tractable way that's really the key to training these latent variable models well one alternative is to use an objective called the expected log likelihood i'm going to just state the objective here i'm not going to justify it but when we talk about variational inference later we'll see why this objective is reasonable so for now just kind of take it at face value this is the objective we're going to use but later on we'll see the justification for why this objective is a principled choice so the expected log likelihood intuitively amounts to sort of guessing what the latent variables are so you could think of the latent variables as basically being
partial observations of the data so the data really consists of x's and z's but you observe the x's but not the z's so what you could do is you could essentially guess what the z's were you could say well given that the data point is over here it probably belongs to this cluster and then construct a kind of fake label that says this x i actually has this value of z and then do maximum likelihood on that value of x and z now of course in reality you don't know the z that goes with a particular x i exactly but you might have a distribution over it so instead of just taking the one value of z that is the most likely you would take the entire distribution over z's and average the likelihood weighted by the probability of that z and that's what gives us the expected log likelihood calculation so the objective we're going to use is the sum over all of our data points of the expectation over z given x i of the log p theta x i comma z so the intuition is if you guess the most likely z given x i and pretend it's the right one although of course in reality you don't actually guess just one z you actually sum over all z's weighted by their probability of being the right one so there are many possible values of z so you use the distribution p of z given x i all right so first of all why is this objective more tractable well because expected values can be estimated with samples right so that expected value if you want to get an unbiased estimate of the expected value you don't need to actually consider all z's you can simply sample from the posterior p of z given x i and then average together the log probabilities of those samples you can't do that trick on the previous slide you can't do that trick if you have the log of the integral because the log of an integral or sum doesn't decompose linearly but the sum does so you can estimate it with samples so the tractable algorithm for doing this just like we saw with policy gradient is to just sample
from z given x i and average together the log probabilities and you can do of course the same thing for the gradient so if you can get this posterior p of z given x i then you can calculate the expected log likelihood in a tractable way and calculate its gradient in a tractable way but then the big challenge becomes how do we calculate p of z given x i if we could just calculate that quantity then we could estimate the expected log likelihood so this is going to be the topic for the next part of the lecture so when you want to estimate p of z given x i what you're really saying is given some point in x map it back to a distribution over z which might be some fairly complex distribution and then calculate the expected log likelihood under that distribution all right so that's what we're going to talk about in the next part of the lecture |
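To make the preceding discussion concrete, here is a small self-contained numpy sketch of a latent variable model with a discrete z, i.e. a mixture of gaussians. All names and numbers here are illustrative assumptions, not taken from the lecture slides: a simple prior p(z), a simple conditional p(x|z), the marginal log p(x) computed as a (numerically stable) log of a sum over z, the posterior p(z|x), and the tractable expected log likelihood sum_z p(z|x) log p(x, z).

```python
import numpy as np

# toy latent variable model with a discrete latent z:
#   p(z = k)     = w[k]                      (simple prior)
#   p(x | z = k) = N(mu[k], sigma[k]**2)     (simple conditional)
# the marginal p(x) = sum_k w[k] N(x; mu[k], sigma[k]**2) can be multimodal
w = np.array([0.3, 0.7])       # hypothetical prior probabilities p(z)
mu = np.array([-2.0, 2.0])     # hypothetical per-cluster means
sigma = np.array([0.5, 1.0])   # hypothetical per-cluster std devs

def log_normal(x, m, s):
    # elementwise log density of N(m, s**2) evaluated at x
    return -0.5 * np.log(2 * np.pi * s ** 2) - 0.5 * ((x - m) / s) ** 2

def log_joint(x):
    # log p(x, z) for every z, shape (n, num_clusters)
    return np.log(w) + log_normal(x[:, None], mu[None, :], sigma[None, :])

def log_p_x(x):
    # log p(x) = log sum_z p(x|z) p(z), via log-sum-exp for stability
    lj = log_joint(x)
    m = lj.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(lj - m).sum(axis=1, keepdims=True))).ravel()

def posterior_z(x):
    # p(z|x) proportional to p(x|z) p(z): the "guess" for the latent variable
    lj = log_joint(x)
    p = np.exp(lj - lj.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)

def expected_log_likelihood(x):
    # sum_z p(z|x) log p(x, z): tractable because the expectation
    # decomposes linearly, unlike the log of a sum
    return (posterior_z(x) * log_joint(x)).sum(axis=1)

# ancestral sampling: draw z from p(z), then x from p(x|z)
rng = np.random.default_rng(0)
z = rng.choice(2, size=1000, p=w)
x = rng.normal(mu[z], sigma[z])

print(posterior_z(np.array([-2.0, 2.0])))  # points near each mode
print(expected_log_likelihood(x[:3]), log_p_x(x[:3]))
```

With a discrete z the sum over z is exact, which is what makes this toy case easy; the lecture's point is that with a continuous z the analogous integral is intractable, while the expected log likelihood can still be estimated by sampling z from the posterior p(z|x).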
CS_285_Deep_RL_2023 | CS_285_Lecture_23_Part_1_Challenges_Open_Problems.txt | all right welcome to the final lecture of CS 285 today we're going to talk about challenges and open problems so first let's have a brief review of the material that we covered in the course there were a lot of things that we covered and a lot of different methods so I'm going to try to draw a map uh to try to illustrate the different principles and how they relate to one another so at the root of it we have learning based control so basically our goal was to cover learning based control methods very broadly and learning based control methods include imitation learning methods which are learning from demonstration supervision and reinforcement learning methods which are learning from rewards reinforcement learning methods include classic model-free RL algorithms uh which are things like policy gradients and value based methods value based methods and policy gradients combined result in actor critic methods we covered deep Q learning as an example of a specific value based method Q function actor critic methods like SAC uh and advanced policy gradient methods like TRPO and PPO there's also model-based control and model-based control does not have to be learning based so those planning and control methods we discussed like LQR don't by themselves necessarily have anything to do with learning but they can be combined with learning to produce model-based RL methods and in their purest form model-based RL methods that do not use a policy that simply train a model and plan through that model don't actually make use of all these RL concepts we discussed in the model free portion but we can of course put them together and use learned models in combination with reinforcement learning algorithms like policy gradients uh or value based methods to get more effective model-based RL algorithms and then there are a bunch of other concepts that kind of apply longitudinally across the range of different RL methods
that are sort of orthogonal to the particular choice of algorithm like for example the choice of exploration strategy uh use of unsupervised RL objectives like skill discovery and so on there are also other tools that lie outside of the learning based control framework but that are very useful like for example the tools of probabilistic inference and variational inference which give us the control-as-inference perspective on RL and that together with imitation learning allow us to derive things like inverse RL methods now this doesn't fully cover every single thing we discussed uh we also discussed things like sequence models POMDPs etc but this hopefully gives you a rough overview of the particular parts of this course but what I'd like to talk about today are some of the challenges with deep reinforcement learning methods basically the things that are open problems that we have not yet addressed and then also some perspectives about how deep reinforcement learning should be used so let's start with the challenges now some of you might already be familiar with quite a few of the challenges of deep RL from having for example done the homework in this course and experienced which things are easy and which things are hard in the homeworks but let's go over them a little bit so some of the challenges in deep RL are really challenges with the core algorithms uh for example stability does your algorithm actually converge uh do you have to tune your hyperparameters very very carefully or is the same hyperparameter setting going to work across the board for a variety of different problem types efficiency how long does it take to converge meaning how many samples do you need how many trials also potentially how much compute generalization after your algorithm converges does it actually generalize to new problem settings and what does that actually mean in your domain but there are also some challenges with RL methods that really have to do with the assumptions of
RL and these challenges become much more pronounced when we try to apply RL algorithms to real world settings and we find that certain assumptions that RL algorithms make are a little difficult to satisfy for some real world problems so is RL actually the right problem formulation perhaps you'd like to solve a learning based control problem but some of the things that RL assumes don't fit very cleanly for example maybe you don't have access to a ground truth reward function what is the source of supervision for your real world problem essentially somewhere somebody needs to supervise the algorithm some forms of supervision have to do with telling the algorithm what you want it to do like the reward some of it serves to make the learning process easier like for example access to demonstrations some of the things you provide are a combination of both for example providing a more well-shaped reward a reward that is not sparse might serve to both specify what you want and how you want the method to do it so the assumptions often present major challenges so let's start with the challenges with core algorithms one big one is stability and hyperparameter tuning reinforcement learning algorithms in a sense solve a significantly harder problem than supervised learning methods because they have to get their own data they don't get to assume the data is IID they have to optimize an objective rather than being given ground truth optimal actions and all of these additional challenges mean that these methods are more sensitive to the particular setting of parameters parameters like exploration rates learning rates and so on now devising stable RL algorithms stable in the sense that small changes in hyperparameters don't lead to large degradation in performance tends to be very hard and this shows up in different ways for different classes of methods so for example for q-learning or value based methods part of the problem is that fitted value methods
with deep network function approximation are generally not contractions and hence in the most general case don't have guarantees of convergence so we have a lot of tricks that we learned about that can make them converge in practice uh but the core theoretical issue is still there and this theoretical issue shows up uh in a number of ways first it means that there are lots of parameters that you need to select carefully for stability like the delay for the target network the replay buffer size whether you're going to do gradient clipping uh how you're going to choose your learning rate etc and part of the intuition for why these choices are quite sensitive is that the core algorithmic framework in general might not be convergent and we kind of put these things in as fixes to make it converge now of course there's quite a bit of research on trying to make these algorithms more stable and easier to use and you know one thing I will say here is that a lot of sort of bread and butter deep learning improvements do tend to help for example using large networks tends to help if done right using the appropriate choice of normalization tends to help using data augmentation tends to help uh for those of you that are interested in data augmentation there's some very nice work uh in a paper called DrQ which explains how data augmentation can actually greatly facilitate stability of Q learning but there might also be some open problems here that are a bit more fundamental for example it's still a mystery to us why supervised deep learning works so well conventional machine learning theory would hold that supervised deep learning should lead to uh pretty catastrophic overfitting because you are using a model with many more parameters than you have data points insofar as the catastrophic overfitting does not happen with classical deep learning there must be some sort of regularizing effect from the use of large neural networks with stochastic gradient descent
that makes this problem not so severe so there's some kind of magic in a sense that makes deep learning work and there's a lot of active research trying to understand that magic well value-based methods are not gradient descent uh they work on slightly different principles and it's actually a very open question as to whether the same kind of magic that makes supervised deep learning work still applies to value based methods perhaps the regularizing effect of using large models with stochastic gradient descent doesn't work the same way in value based methods um so this is very much kind of at the frontier of current research and is perhaps the deeper manifestation of some of these challenges so I don't have an answer here this is something that is an active area of research but is a challenge to keep in mind okay what about policy gradient methods likelihood ratio REINFORCE TRPO PPO all that sort of stuff well arguably these methods are somewhat better understood in the sense that uh we do have convergent policy gradient algorithms in a sense the story with policy gradients is that they trade off a lot of the nastiness in value based methods and model-based RL for much higher variance so the common theme is that all of the other RL methods have bias from function approximation policy gradient methods generally do not have bias but they do have variance of course once you start using value functions as critics for advantage estimation you introduce that same bias right back in but in their purest form they have high variance but no bias which means that they are a bit easier to understand but the variance is no picnic it's still a major challenge um and what that variance implies is that you might need lots of samples and while this might at first seem like kind of an esoteric problem like well if you need lots of samples just have a faster simulator in practice that increase in variance may be catastrophically large it may be that you don't just need 10 times
more samples maybe you need exponentially more samples in the worst case the increase is in fact exponential those worst cases are a bit pathological um and they can be avoided but in the general case it does seem like this can be a challenge and in particular it can be an unpredictable challenge in the sense that we might have a hard time predicting for a new problem whether the catastrophically high variance of that problem might make policy gradients hard to use or not um so the kinds of parameters that we then end up being careful with to address the challenge are things like batch size learning rate and the design of the baseline for policy gradients which is a very crucial choice model-based RL algorithms are an interesting one on the surface model-based RL might seem like a particularly convenient and stable choice because the model learning process in the end just boils down to supervised learning and that is true for a given batch of data training the model is a regular supervised learning problem however model-based RL methods are still iterative procedures meaning that the model changes over the course of training and the model-based RL method is still collecting its own data this raises a number of major issues the model class and the method by which the model is fitted to the data ends up being very sensitive the trouble is that more accurate models do not necessarily translate directly into better policies if the model is perfect of course that will give you the best policy but if the model simply becomes more accurate it could be that the model might become more accurate in a way that doesn't actually improve the policy at the cost of slightly lower accuracy somewhere else which turns out to be catastrophic for the policy so basically not all errors in the model are made equal and that should be pretty straightforward right if you're flying an airplane having a slightly incorrect model about how the airplane flies when it's at 30,000 ft is clearly not as
catastrophic as having an error in the model when the plane is landing and every inch to the ground counts so optimizing the policy with respect to the model uh is generally also non-trivial due to this backpropagation through time issue so we end up using all sorts of other methods including running those same model free algorithms through the model which of course incurs all the challenges associated with model-free RL and there's this more subtle issue which is that the policy can exploit the model essentially even if the model is very good in most places your policy might discover a way to take just that one action where the model makes a mistake that causes it to erroneously predict that something good will happen in a sense model-based RL is a very kind of adversarial process and that presents major additional challenges so all these approaches have challenges those challenges fundamentally actually stem from the same core uh issues the issues having to do with the fact that you have to discover optimal behaviors without ground truth supervision often by collecting your own data but the way that these issues manifest for each class of methods is a bit different now in regard to efficiency we can um create a kind of a hierarchy of different methods to try to uh gauge roughly how efficient they are this slide uh is actually pretty old at this point this was created uh maybe about five years ago so some of this is a little bit out of date but I think that the overall trends still hold so we're going to start with the least efficient methods and then progress towards the most efficient methods and I'll say at the end why we might actually prefer less efficient methods in practice but the least efficient methods in the sense that they require the most samples are gradient free methods which we didn't actually cover in this course but these are methods like CMA-ES or natural evolutionary strategies which actually don't use gradients
through neural nets at all the next step towards more efficient methods are fully online on policy methods like A2C and A3C so these are methods that uh run on policy and make policy gradient updates um with fully online updates policy gradient methods like TRPO which are batch-mode policy gradient methods tend to be a bit more efficient so the fully online methods basically uh don't store any trials they just update as they collect data uh batch policy gradient methods collect a batch of data and make batchwise updates then we have uh a big step up in efficiency with methods that use replay buffers and off policy learning so these are Q learning methods Q function actor critic methods and so on these are all the methods that have replay buffers then we have model based methods and then we have shallow model based methods which are generally the most efficient but also often the most limiting and interestingly enough the step up in efficiency is about an order of magnitude each time so here's an example of a classic paper on gradient free methods for RL using evolutionary strategies and the reported results in that paper are about 10 times less efficient than fully online updates uh with an algorithm like A3C so this is an example of a 2017 paper with A3C showing for a half cheetah style task about 100 million time steps to learn the task to asymptotic performance which is the equivalent of about 15 days of real time if we use a method like TRPO or PPO then we get something on the order of 10 million transitions the equivalent of about 1.5 days of real time if we use off policy algorithms with replay buffers then we can learn tasks like this in about 1 million uh time steps which is about 3 hours in real time and these methods have gotten a lot more efficient in recent years there's been a 10 to 100x improvement actually in the speed of these methods so this result is actually out of date these things might be even more efficient but I think it's still
actually a reasonable rough ballpark estimate that for realistic tasks a few hours of real time is about what it takes to learn policies from low dimensional state and that actually holds even for real world robotic tasks so I think if you want like a rule of thumb a reasonable rule of thumb is that off policy replay buffer methods can if done right learn tasks in something on the order of single-digit hours um of course that's not accounting for things like perception learning from pixels and so on model-based RL methods can be another order of magnitude faster uh so we're talking about less than an hour and then shallow methods like PILCO can be really fast they can actually learn in seconds but they require using non-scalable models like gaussian processes that might simply be impractical to apply to higher dimensional systems now this is a very rough guide and of course an obvious question this might raise is if this is the um kind of hierarchy of sample complexity why would we ever prefer the less efficient methods well the reason is that actually things like policy gradient methods are often more parallelizable which means that if we can run multiple simulations in parallel they can actually be faster in terms of wall clock time and the cost of samples is not the only cost that you pay so if you have access to lots of simulation and generating interaction with the environment is cheap and the cost has more to do with the compute for training your model then maybe you might prefer methods that are less efficient in samples but that require less compute and in fact model-based RL methods are often the most compute hungry because you might take many gradient updates on the model for every simulation step so for this reason you might actually prefer methods that are less efficient but have other benefits like better parallelism or requiring fewer gradient updates on the policy or model okay so uh but that said why do we care about sample complexity well an obvious one is
that if you have bad sample complexity you have to wait for a long time for your homework to finish but the other thing is that if you actually want to use deep RL in the real world poor sample complexity means that real world learning can become very difficult or even impractical it also precludes the use of very expensive high-fidelity simulators maybe you'd like to use some sort of finite element analysis to simulate a really complex system that might even be slower than real time uh so if your algorithm requires hundreds of millions of trials that might simply not be feasible and it generally limits applicability to real world problems so developing more efficient RL methods is a major open problem uh I do think that current deep RL methods have improved in efficiency tremendously to the point where real world training often is actually very feasible but in many domains especially as you increase the breadth of the problem meaning that you want systems to generalize more broadly by training in a greater range of scenarios this becomes an even bigger issue so speaking of breadth in general generalization and scaling up are major challenges in deep RL when it comes to supervised learning like training on ImageNet or training on Common Crawl or large NLP data sets the state-of-the-art in supervised deep learning is large scale emphasizes diversity and is evaluated on generalization so nobody cares how well your language model can memorize a particular piece of text it's trained on everyone cares how well your language model can generalize to some new prompt whereas in RL we seem to be often evaluating methods on small scale tasks that emphasize mastery and are evaluated on performance meaning that what we really measure is how well do we optimize a given objective function in the particular environment where it's optimized and that is often a very reasonable thing to measure so if you're trying to improve the optimization performance of your method that's
the thing you should be measuring but besides optimization performance in the real world we also care about generalization performance diversity and breadth and that starts implicating a lot of topics that go beyond the core questions in basic RL methods and have more to do with the ability to apply RL methods at larger scale to multitask problems and settings that require using large amounts of data rather than just large amounts of simulation so where is the generalization going to come from there are a number of issues with this so first we could say off the bat well what if we just scale up deep RL what if we just run uh massive simulation with lots and lots of different settings and try to get more generalizable and performant policies this was basically the path towards game playing systems like AlphaGo but it's quite challenging so with supervised machine learning what we do is we interact with the world and we collect a data set and this is typically done once for supervised learning systems and then we run a learning algorithm on that data set for many epochs and get some solution and if we're not happy with that solution we don't recollect the data we just rerun the training after changing something about our method in reinforcement learning we typically learn through continuous interaction with the world so that means that if we want to change something about our method we would typically rerun the interactive learning process but the reality is that actual reinforcement learning has an outer loop and that outer loop is you if you are not happy with how well your method did you would change something about your method and then rerun the training process again and that's fine if your training process involves training the half cheetah to run faster but if your training process involves sort of internet scale training on a huge range of different settings to achieve real world generalization this outer loop quickly becomes impractical so for
this reason it's very important to think about improvements to RL methods that don't just address the core challenges of optimization but also address workflows that are more suitable for large scale machine learning research this problem is pretty bad right so here's a a video from uh trpo plus G this video is quite old at this point but it's still I think pretty impressive it shows a humanoid learning to run and while it takes a little while and Falls a few times after training is concluded it can run on this infinitely large flat plane essentially perpetually so this is pretty cool it takes about 6 days of uh of real time if this was a real robot of course in simulation goes a lot faster now since then these algorthm have gotten a lot a lot quicker so maybe now it wouldn't take 6 days it might even be as little as 6 hours but it's still a pretty non-t amount of time the problem though is not just that the problem is like if we could just run a robot for like a few days and get a robot that can run anywhere we' be pretty happy with that but that's not actually what we're getting what we're getting is uh something that can run an infinitely large flat plane the real world presents a wide range of different scenarios um the real world is diverse and if you want a practical system that does a task like this in the real world it has to handle all sorts of terrains all sorts of situations and maybe even all sorts of behaviors in service to Locomotion so not just running but climbing over things and so on and an approach that people have taken with some success is to Simply simulate a greater range of situations but this does quickly start presenting major challenges um now those challenges include the challenge of figuring out what all of those scenarios are so you might want real world data to figure out the range of scenarios you have to cover and also actually diving the algorithms that can handle such a broad range of scenarios so in terms of utilizing data 
perhaps off or offline RL methods can be more effective maybe we could collect a big data set from past interactions to figure out that an effective robot needs to run on Sand and uh in cities and all sorts of other situations and perhaps that we can use this data set with an offline orl procedure where if we're not happy with the solution we could go out and get more data simply added to our data set rather than repeating the process and then if we have to tune something about the method maybe we can do so without having to discard all of our data perhaps we could also approach Us by building simulations and adapting those simulations to the real world if that's the approach you want to take and that's also something that perhaps merits more research the multitask setting uh also presents challenges that are often not uh at the core of reinforced learning research but I think are tremendously important so generalization comes from training in many different settings we talked about a variety of ways to set up multitask learning problems for example you could say that you have multiple mdps and you model them as one multitask mdp different mdps chosen at the first time step uh and maybe this doesn't require any new assumption uh but it might Merit additional treatment to develop algorithms that are effective in these scenarios so while standard RL methods can handle multitask learning it does exacerbate certain challenges that already uh are problematic in RL challenges like variance right if you have many different mdps your variance is going to be even higher because now there's more variability in the isal state challenges like uh sample complexity if you have more mdps then you need more samples uh to train so the existing sample complexity challenges are exacerbated it may be that we can make progress on this problem simply by addressing those four challenges or it may be that we can devise better methods that Target multitask learning in particular this is an 
important thing to keep in mind but now let's also talk about those assumptions so outside of the capabilities of the core method the assumptions of RL that we have access to a reward function that we have in interaction with an environment these are all assumptions that can be problematic in the real world where does supervision come from for RL uh if you want to learn from many different tasks you need to get those tasks somewhere and it might be in some cases very natural for humans to specify those tasks so if you want a robot to travel to different locations it might be not it might not be that hard for a person to Simply write down oh like like these are the GPS coordinates I want you to travel to practice doing that but in other cases simply specifying what you want the RL algorithm to learn to do can be very hard so if you play a game it's pretty easy to get reward because the game has a score and winning the game can be the reward but let's say that you want to pour a glass of water now this is something that any child could do but if you want a robot to learn how to pour a glass of water simply understanding whether the glass is full of water itself requires a complex perception system this problem has actually come to the Forefront recently because with internet uh chat Bots like chat GPT it's actually a major challenge to figure out whether you are interacting with users in ways that make those users happy ways that satisfy those users and traditional ways of specifying reward tend to fail in those cases so there's all sorts of other things that could be done for example we could learn objectives or rewards from demonstration which is inverse reinforcement learning we can generate objectives automatically with automated skill Discovery to produce a wide variety of different tasks so that we can then generalize to new tasks uh but we can also explore other sources of supervision uh so besides demonstrations uh this is something that's of course been very 
widely used uh we can think about methods that leverage language uh to figure out uh what the the robot should be doing and perhaps auxiliary supervision from models that combin language and perception that can offer reward signals through generalization from internet scale training we could also Imagine methods that learn from Human preferences pairwise comparisons of different behaviors this was pioneered for reinforced learning Benchmark tasks but has recently gained a lot of attention as the preferred method for training language models to satisfy user preferences so these are all alternative sources of supervision that change the core assumptions of reinforced smoting algorithms and I think it's important to think about the kind of supervision that your particular domain requires and I think there's also a fairly fundamental question when it comes to supervision which is should we be supervising RL agents by telling them what we want them to do or how we want them to do it so demonstrations provide both the what and the how reward functions in principle provide only the what but if the reward functions are more well shaped then they're also providing some of a how so there's kind of a balance of these things and we have to strike that balance carefully because on the one hand the strength of RL methods is that they can discover new and novel Solutions so we don't want to supervise them too closely but at the same time if the supervision is too high level like for example your supervision for your language model chapot is make my company lots of money then it might be just a very difficult learning problem so we have to strike the right balance there and and we might want to rethink the problem formulation other ways like how do we Define the control problem what is the data in some cases it's easier to define a control problem with data that specifies what might happen in the world than to Define it with a simulator or access to an interactive agent so offline 
RL supports that kind of setting online RL methods do not what is the goal what is the RL agent trying to achieve is the goal specified by reward by demonstration or by preferences and what is the supervision is that the same as the goal sometimes we want to provide the agent with hints that help it learn the task without biasing the solution that it finds and there's been uh research on using demonstrations as guidance rather than necessarily as goal specification but it's an open area of research in general there is no one answer here and that's why this is part of the open challenges lecture but I would encourage all of you to think about the assumptions that fit your problem setting and sometimes the right thing to do is to CSE your problem into the standard RL assumptions but sometimes the right thing to do is to invent a new problem don't assume that the basic RL problem is set in stone and think about how it could be adjusted to fit your setting |
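The pairwise-preference idea mentioned above — learning a reward function from comparisons between behaviors — can be sketched in a few lines. This is a minimal illustration, not the method from any particular paper: the linear reward model, the toy 2-D "behavior features", and all variable names here are hypothetical, and real preference-based RL (and RLHF for language models) uses neural reward models over trajectories or responses. The core is the Bradley-Terry likelihood, P(i preferred over j) = sigmoid(r(x_i) - r(x_j)).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each behavior is a feature vector; a hidden "true" reward
# generates the preferences. We only ever observe pairwise comparisons.
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(200, 2))

# Generate preference pairs (i preferred over j) from the true reward.
pairs = []
for _ in range(500):
    i, j = rng.integers(0, len(X), size=2)
    if X[i] @ true_w > X[j] @ true_w:
        pairs.append((i, j))
    else:
        pairs.append((j, i))

# Fit a linear reward r(x) = w @ x by gradient ascent on the
# Bradley-Terry log-likelihood: log sigmoid(r(x_i) - r(x_j)).
w = np.zeros(2)
lr = 0.1
for _ in range(200):
    grad = np.zeros(2)
    for i, j in pairs:
        diff = (X[i] - X[j]) @ w
        p = 1.0 / (1.0 + np.exp(-diff))      # P(observed preference)
        grad += (1.0 - p) * (X[i] - X[j])    # gradient of log-likelihood
    w += lr * grad / len(pairs)

# The learned reward should rank behaviors the way the true reward does.
agreement = np.mean([(X[i] @ w) > (X[j] @ w) for i, j in pairs])
print(agreement)
```

The learned reward never sees numeric scores, only which of two behaviors was preferred — which is exactly why this form of supervision is attractive when writing down a reward (for pouring water, or for satisfying chat users) is hard.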
CS_285_Deep_RL_2023 | CS_285_Lecture_8_Part_4.txt

In the next section of today's lecture I'm going to discuss some practical considerations that we need to take into account when actually implementing Q-learning algorithms, and then some improvements that can make them work a bit better. One question we can start with: are our Q-values actually accurate? We can think of Q-values as an abstract object that we use inside reinforcement learning to help us improve our policy and get that argmax, but a Q-function is also a prediction — a prediction about the total reward you will get in the future if you start in a particular state and action and then follow your policy. So it makes sense to ask whether these predictions are accurate: do they match up with what you actually get when you run the policy? If we look at a basic learning curve, where the x-axis is the number of iterations of Q-learning and the y-axis is the average reward per episode, and we look at a bunch of, say, Atari games, we'll see that for all of these games the average reward per episode is going up — things are getting better. If we look at the average Q-values being predicted — the two plots on the right — we'll see that the Q-function predicts larger and larger Q-values as training progresses, and that intuitively makes sense: as training progresses, our policy gets better and gets higher rewards, so our Q-function should also predict higher Q-values. So the predicted Q increases, and so does the return.

We can also look at whether the Q-values, or value function values, occur in places that anticipate future rewards. This is the game of Breakout. For those of you not familiar with Breakout, the goal is to use the little orange paddle at the bottom to hit a ball; the ball is reflected by the paddle, bounces up, and hits these rainbow-colored blocks, and every block you break gets you a point. A particularly cool thing you can do in Breakout is, if you break through all of the blocks on one side — which is happening there in panel number three — you can get the ball to bounce all the way up, where it ricochets off the ceiling and off the top blocks, and that gets you lots of points, because it's just bouncing back and forth up there, breaking all the blocks from the top. It's quite a cool strategy if you can break through to the top like that. The graph at the bottom shows the value function's value — essentially the Q-value for the best action — at different points in time, with the particular frames one, two, three, and four labeled on the graph. Some of these values actually make a lot of sense. At frame one, you're about to break a block, and the value is high. After you break that block, you bounce back down, and your value dips, because you know you're not going to get any points for a little while, while the ball flies down and needs to be bounced by the paddle. At frame three, you're about to break through to the top, so your value becomes quite large — but at frame three you don't quite make it: you break that last red block, but then bounce back down, so your value goes down for a while and then rises right back up. At frame four, your value is at its largest, even though you haven't broken any blocks for a while. At frame four, you've just bounced off the paddle; you haven't broken any blocks, but you're about to ricochet off the ceiling and get those mad points — that's why the value function is largest there. It actually goes down from there, because once you actually receive the points, the value function drops: it knows you've received your points, and fewer points are left over. That all seems reasonable.

We can also look at the relative Q-values for different actions. These are frames from the game Pong. In Pong, you use your paddle — the green one on the right side — to hit the ball so that it ricochets to the other side, and your opponent, with the orange paddle, needs to hit it back. Your goal, kind of like in tennis, is to hit the ball so your opponent can't return it. If your opponent can't return it, you get a point; if they can, they might get a point on you, because you might fail to hit it back. What we're seeing in frame one is that all actions have about the same Q-value. Take a moment to think about why this might be. The reason it makes sense is that when the ball is quite far away, many different actions will still allow you to catch the ball later. So even though it might seem like moving the paddle up is the right thing to do, the Q-function here is very good, and it understands that even if it fails to move the paddle up now, it can move it up at the next time step — which means the Q-values of different actions at this time step are about equal. It's a little counterintuitive at first, but it really makes sense. At time step two, the ball is getting pretty close to the zone where you have to return it, and here the up action has a much larger Q-value: the Q-function understands that it still has a split-second chance to return the ball, but only if it moves up right now. So the Q-value for moving up is very large, and the Q-value for moving down or staying still is very negative. And of course at step four, once you've actually returned the ball, it again says that the values of different actions don't really matter. So again this basically agrees with our intuition: in terms of their relative values, the Q-values make sense with respect to actions, and they make sense with respect to states.

But there's a problem. While the relative Q-values of different states and actions seem to make sense on a successful training run, their absolute values will in practice not be very predictive of real values. You can verify this yourself in homework 3: when you implement Q-learning, you can measure the numerical Q-value you're getting, then measure the actual return you get, and compare those two numbers. You'll find they don't agree very well. There are a few details to get right. One is that when you calculate the true value, you must use a discounted estimator: take the trajectories you actually executed and compute the reward at time step one, plus gamma times the reward at step two, plus gamma squared times the reward at step three, and so on, and compare that to the Q-value at step one, because the Q-value at step one is trying to predict the expected sum of discounted rewards. If your Q-value is a good predictor, comparing it to the discounted sum of rewards you actually got should show that they're similar — and what you'll actually see is that they're not. That's what these graphs are showing. Look at the red lines; don't worry about the blue lines, we'll talk about those later. The spiky red line, the one that's usually higher, represents your Q-function's estimate — what your Q-function thinks the total discounted reward will be. The flatter line represents the actual sum of discounted rewards you get when you run that policy. What you're seeing is that the Q-function estimates are always much, much larger than the actual sums of discounted rewards.

That seems strange: why does the Q-function systematically think it's going to get larger rewards than it actually gets? This is not a fluke. It's not just that the Q-function is wrong and can be above or below the true value; it is systematically larger, and this is a very consistent pattern — when you try this in homework 3, you'll see it too. So why? This problem is referred to as overestimation in Q-learning, and it has a fairly straightforward and intuitive reason. Look at how we compute our target values: you take your current target Q-function, Q_phi', and you take the max of that Q-function with respect to the action a'. It's really this max that's the problem. Here's how we can think about why a max causes overestimation. Forget about Q-values for a minute and imagine you have two random variables, X1 and X2 — maybe normally distributed, so some true value plus some noise. You can prove that E[max(X1, X2)] >= max(E[X1], E[X2]). The intuition is that when you take the max of X1 and X2, you're essentially picking the value that has the larger noise, and even though the noise on X1 and X2 might be zero-mean, the max of two zero-mean noises is not in general zero-mean. Imagine each noise is positive or negative with fifty percent probability. When you take the max, if either of them is positive, you get a positive answer, and the probability that at least one of the two is positive is high: both being negative has only 25 percent probability, so at least one being positive has 75 percent probability. So the expected value of their max is positive, whereas the max of their expected values is zero, because each expected value is zero.

What does this have to do with Q-learning? If your Q-function is not perfect — if it looks like the true Q-function plus some noise — then when you take the max in the target value, you're doing exactly this. Imagine Q_phi' for each action is the true Q-value of that action plus some noise, and those errors are unbiased: just as likely to be positive as negative. When you take the max in the target value, you are selecting the positive errors, and for the same reason that E[max(X1, X2)] >= max(E[X1], E[X2]), the max over actions systematically selects errors in the positive direction — which means you systematically overestimate the true Q-values, even if your Q-function's errors are not initially biased in either direction. So max over a' of Q_phi'(s', a') systematically overestimates the next value; it preferentially selects errors in the positive direction.

So how can we fix this? One way to think about it is to note, thinking back to fitted Q-iteration, that we got this max by modifying the policy iteration procedure: we had our greedy policy, the argmax over a', and then we fed that argmax back into our Q-function to get its value. So the max over a' of Q_phi' is just Q_phi' evaluated at the argmax, and this is the observation we're going to use to mitigate the problem. The trouble is that we select our action according to Q_phi', so if Q_phi' erroneously thinks some action is a little better because of noise, that's the action we select — and the value we then use for our target is the value of that same action, with that same noise. But if we can somehow decorrelate the noise in the action-selection mechanism from the noise in the value-evaluation mechanism, then maybe the problem goes away. The problem is precisely that the value comes from the same Q_phi' that carries the same noise as the rule we used to select the action.

One way to mitigate this is something called double Q-learning. If the function that gives us the value is decorrelated from the function that selects the action, then in principle the problem should go away. The idea is simply not to use the same network to choose the action as the network that evaluates its value. Double Q-learning uses two networks, one called phi_A and another called phi_B. Phi_A's update uses the values from phi_B for the target but selects the action according to phi_A. If phi_A and phi_B are decorrelated, then the action phi_A selects for the argmax will be corrupted by some noise, but that noise is different from phi_B's noise — so when phi_B evaluates that action, if the action was selected because it had a positive noise, phi_B will give it a lower value. The system is in a sense self-correcting. Analogously, phi_B is updated using phi_A to evaluate the values while using phi_B as the action-selection rule. That's the essence of double Q-learning: decorrelate the way you select the action from the way you evaluate its value. If the two Q-networks are noisy in different ways, then in principle the problem should go away.

In practice, we can implement double Q-learning without adding another Q-function, by using the two Q-functions we already have: phi and phi' are already different networks, so we use those in place of phi_A and phi_B. In standard Q-learning, written in the argmax form (which is exactly equivalent), the target value is Q_phi' evaluated at the argmax from Q_phi'. In double Q-learning, we select the action using Q_phi but evaluate it using Q_phi'. As long as phi' and phi are not too similar, these will be decorrelated. So this is the only difference: we use phi to select the action instead of phi', and we still use the target network to evaluate the value, to avoid the moving-targets problem. You could say we still have a little of the moving-targets problem, because as phi changes, so does the selected action — but presumably the change in the argmax is a sudden, discrete change that doesn't happen all the time; with, say, three actions, the argmax isn't going to change that often. Something I should mention — and many of you might already be thinking about this — is that phi' and phi are of course not totally separate, because periodically you set phi' equal to phi. So this solution is far from perfect; it doesn't totally decorrelate phi' and phi, but in practice it tends to work pretty well and mitigates a large fraction of the overestimation problem — though of course not all of it.

There's another trick I should mention for improving Q-learning algorithms, similar to something we saw in the actor-critic lecture: multi-step returns. Our Q-learning target — and here I'm intentionally writing it out with time steps — is r_{j,t} plus gamma times the max over a' of Q_phi' at time t+1. Where does the signal in this learning process come from? If your initial Q-function is very bad — essentially random — then almost all of your learning has to come from the r. If Q_phi' is good, the target values do most of the heavy lifting; if the Q_phi' values are bad, the only thing that really matters is the reward, and the second term is essentially just contributing noise. Early in training, your Q-function is pretty bad, so almost all of your learning signal comes from the reward. Later in training, your Q-function becomes better, and the Q-values become much larger in magnitude than the rewards, so the Q-values dominate — but your initial learning period can be very slow if your Q-function is bad, because the target value is mostly dominated by the Q-value. This is quite similar to what we saw in actor-critic: the actor-critic style update that uses the reward plus the next value has lower variance but is not unbiased, because if the value function is wrong, your advantage values are completely messed up. Q-learning is the same way: if the Q-function is wrong, your target values are messed up, and you won't make much learning progress. The alternative in the actor-critic lecture was a Monte Carlo sum of rewards, because the rewards are always the truth — just higher variance, because they represent a single-sample estimate. We can use the same basic idea in Q-learning. Q-learning by default does a one-step backup, which has maximum bias and minimum variance, but you could construct a multi-step target, just as in actor-critic. Take a moment to imagine what this multi-step target would look like — if you have a piece of paper in front of you, consider writing it down, and then you can check what you wrote against what I'm going to tell you on the next slide. Sorry — actually, it's on this slide.

The way you construct a multi-step target is exactly analogous to actor-critic: instead of using one reward, you make a sum from t' = t to t + N - 1, and for each term you take r_{j,t'} multiplied by gamma to the t' - t; then you add the target network's value at the t + N step, multiplied by gamma to the N. You can verify that for N = 1 this recovers exactly the standard Q-learning rule, but for N larger than 1, you sum together multiple reward values before bootstrapping. This is called an N-step return estimator, because instead of summing the reward for one step, you sum it for N steps. Just as with actor-critic, the trade-off is that the N-step return estimator gives you higher variance, because of the single-sample estimate of the rewards, but lower bias, because even if your Q-function is incorrect, it's now multiplied by gamma to the N, and for large N, gamma to the N might be a very small number.

So, Q-learning with N-step returns: it's less biased, because the Q-values are multiplied by a small number, and it's typically faster early on, because when the target values are bad, those sums of rewards give you a lot of useful learning signal. Unfortunately, once you use N-step returns, this is only a correct estimate of the Q-value when you have an on-policy sample. The reason is that if the sample was collected with a different policy, then the action at step t + 1 might differ from what your new policy would do, which invalidates the N-step return. So N-step returns are technically no longer correct with off-policy data; with off-policy data you're technically only allowed to use N = 1. With N = 1 everything is straightforward, because you never assume the transition came from your policy: the Q-function is conditioned on the action, so it's valid for any policy, and at the second time step — where this would matter — you take the max with respect to the action rather than using the action that was actually sampled. So for N = 1, off-policy learning is valid, but for N greater than 1 it's not, basically because your new policy might never have landed in the state s_{j,t+N}. Why? Because you end up using the sampled actions for the intermediate steps, which are not the actions your new policy would have taken. As an interesting thought exercise — something you can think about at home after the lecture — imagine how to use the same trick that makes Q-learning off-policy to make this N-step version off-policy. As a hint: to make the N-step version off-policy, you can't learn a Q-function anymore; you have to learn some other object, conditioned on some other information. Thinking about how to do this might give you a better intuitive understanding of how it is that Q-learning can be off-policy. So, as a homework exercise, maybe take a moment to think about how to make N-step returns off-policy and what kind of object you would need to learn to make that possible. The estimate we get from regular N-step returns is an estimate of Q^pi for the policy pi, but for that you need transitions from pi for all the intermediate steps — and this is not an issue when N equals 1.
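The N-step target described above can be written out in a few lines. This is a minimal sketch, assuming we already have a stored reward sequence and a bootstrap value; the function name and `q_target_at_t_plus_n` (a stand-in for gamma^N max_a' Q_phi'(s_{t+N}, a')) are hypothetical names for illustration.

```python
def n_step_target(rewards, t, n, gamma, q_target_at_t_plus_n):
    """y_t = sum_{t'=t}^{t+n-1} gamma^(t'-t) * r_{t'} + gamma^n * max_a' Q_phi'(s_{t+n}, a')."""
    ret = 0.0
    for k in range(n):
        ret += (gamma ** k) * rewards[t + k]          # discounted reward sum
    return ret + (gamma ** n) * q_target_at_t_plus_n  # discounted bootstrap

rewards = [1.0, 1.0, 1.0, 1.0]

# n = 1 recovers the standard one-step Q-learning target: r_t + gamma * bootstrap.
y1 = n_step_target(rewards, t=0, n=1, gamma=0.9, q_target_at_t_plus_n=5.0)

# n = 3 sums three rewards and discounts the bootstrap by gamma^3,
# shrinking the contribution of a possibly-wrong Q estimate.
y3 = n_step_target(rewards, t=0, n=3, gamma=0.9, q_target_at_t_plus_n=5.0)
print(y1, y3)
```

Note how for larger N the bootstrap term is multiplied by gamma^N, which is exactly why the bias from an inaccurate Q-function shrinks as N grows.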
So how can we fix it? Well, we can ignore the problem, which often works very well. Another thing we can do is dynamically cut the trace: choose N dynamically so that we only use on-policy data. Essentially, we look at what our deterministic greedy policy would do, look at what we actually did in the sample, and choose N to be the largest value such that all of the actions exactly match what our policy would have done — and that also removes the bias. This works well when the data is mostly on-policy and the action space is pretty small. Another option is importance sampling: we can construct a stochastic policy and importance-weight these N-step return estimators. I won't talk about this in detail, but if you want to learn more, check out the paper "Safe and Efficient Off-Policy Reinforcement Learning" by Munos et al. And then there's the mystery solution I haven't told you about, where you don't do any of this, but instead condition the Q-function on some additional information that allows you to make it off-policy — that's the solution you can think about on your own time after the lecture.
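The trace-cutting idea just described — choose the largest N such that every sampled action matches the greedy policy — can be sketched directly. This is a minimal illustration, assuming discrete actions; `cut_trace` and the toy arrays are hypothetical names, and `q_values[t]` stands in for the current Q(s_t, ·) over actions.

```python
import numpy as np

def cut_trace(q_values, sampled_actions, t, n_max):
    """Largest n <= n_max such that sampled actions at t+1..t+n-1 are greedy."""
    n = 1  # n = 1 is always valid off-policy
    for k in range(1, n_max):
        greedy = int(np.argmax(q_values[t + k]))
        if sampled_actions[t + k] != greedy:
            break  # the sample diverges from the greedy policy here
        n = k + 1
    return n

q_values = [np.array([1.0, 2.0]),   # greedy action at t=0: 1
            np.array([3.0, 0.0]),   # greedy action at t=1: 0
            np.array([0.0, 5.0])]   # greedy action at t=2: 1
sampled_actions = [1, 0, 0]         # matches at t=1, diverges at t=2

n = cut_trace(q_values, sampled_actions, t=0, n_max=3)
print(n)
```

Here the sampled action agrees with the greedy policy at step 1 but not step 2, so the trace is cut at n = 2 — the N-step return then only sums rewards over steps the current greedy policy would actually have taken.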
CS_285_Deep_RL_2023 | CS_285_Lecture_22_Part_4_Transfer_Learning_MetaLearning.txt | okay next I'm going to talk about gradient-based meta reinforcement learning let's kind of rewind a little bit and think back to what we discussed before about pre-training and fine-tuning so a very standard way to use pre-training in regular supervised learning is to Simply learn some representations and then fine-tune from those representatives for a new task so a particular question we could ask is is pre-training and fine-tuning really just a type of metal learning in some way and if that's so can we make this actually precise can we actually meta train in such a way that pre-training and fine tuning works well and that's basically the idea behind gradient-based metal learning essentially if we have better features then we can do faster learning of a new task and we can actually optimize our features so that learning of the new task is faster so here's how we could fit this into the framework of metal learning that we've developed so far so this is that view of metal learning meta reinforced learning from before where you are meta training F Theta so that F Theta produces phi's that lead to high reward on The Meta training tasks and in order for this to work F Theta needs to be able to use the experience seen so far in the mdpmi to produce this Phi I so what if F Theta is itself an RL algorithm so it's not some kind of RNN that just reads in all the experience it's actually like a reinforcement going out on like a policy grain algorithm and the parametrization of f Theta is really just the initial parameters of the policy that is fed into it so standard RL would take the object of what's called J Theta and maybe we'll do standard row with policy gradient so it will take the current task compute the gradient of J with respect to the parameters and then update the parameters so let's say that F Theta does the same thing F Theta takes the parameters Theta and adds to them the 
gradient of J_i, the objective for the MDP M_i, evaluated at theta. So that's f_theta; this equation is the definition of f_theta, and you could extend this of course to several gradient steps, but for now let's just say one gradient step. Okay, so f_theta updates the parameters theta with one gradient step. Can we find a theta so that this achieves high reward on all the meta-training tasks? Now keep in mind that computing the gradient of J_i(theta) requires interacting with MDP M_i. It turns out that we can actually do this optimization; this is called model-agnostic meta-learning. So model-agnostic meta-learning is basically just a kind of meta-learning where f_theta has this funny parameterization that matches the structure of a reinforcement learning algorithm, or if you're doing supervised learning, it matches the structure of a supervised learning algorithm, basically a gradient update. So let's think about this visually, in pictures. Let's say that you have your neural network that reads in the state and outputs the action; let's just think about policy gradient for now, to keep it simple. Instead of training on a single task and updating with policy gradient on that, we would have a variety of different tasks, so maybe for each task this ant needs to run in a different direction, and then for every task we will update the policy parameters to theta plus the gradient of that task's objective evaluated at theta. So we're essentially trying to optimize theta so that applying a gradient step on this task increases the reward on this task as much as possible. It's a kind of second-order thing: find the theta so that applying a gradient step increases the reward maximally. And if you do this for one gradient step, you can visually think of it like this: you have your space of parameters theta, and you're finding a point in that space where the optimal solution for each task, theta_1 star, theta_2 star, theta_3 star, etc., is one
gradient step away. Now of course you don't have to do one gradient step; you can do multiple gradient steps, and it's a little bit more cumbersome to write out the algebra, but it's quite doable. The calculation requires second derivatives, which is a little tricky to implement for policy gradients, but it's quite possible, so I'll have some references at the end that have the math. I don't want to bombard you with a wall of math for this, but it's quite possible to do that calculation, and this is basically the idea. But let's unpack a little bit what this method does, and we'll unpack it using the same tools for studying meta-learning that we discussed before. So supervised learning maps X to Y; supervised meta-learning maps D_train and X to Y, where X is a test point. Model-agnostic meta-learning, at least in the supervised setting, can also be viewed as a function of D_train and X, except that the function has a special structure: f_MAML applied to D_train and X is just f_theta_prime of X, where theta prime is obtained by taking a gradient step. What this makes clear is that this is really just another computation graph, just another architecture for this function f. Even though it has gradient descent inside of it, you can just think of that gradient descent as part of the neural network, and you can implement it with automatic differentiation packages. For policy gradients it's a little bit more complicated; you do need to be careful, because computing second derivatives of policy gradients requires some care, and regular autodiff like TensorFlow and PyTorch won't do it for you, but for supervised learning it's pretty straightforward. You could also ask, though: why do we want to do this, then, if it's just another architecture? The reason you might want to do this is that it does carry a favorable inductive bias, in the sense that insofar as gradient-based methods like policy gradients are good learning algorithms, you would
expect this to lead to good adaptation procedures. In fact, in practice one of the things that people tend to find with model-agnostic meta-learning is that you can take many more gradient steps at meta-test time than you actually meta-trained for, so the network tends to generalize and allow you to take more gradient steps, which is not something you can do with an RNN-based meta-learner, because with an RNN or a Transformer it just reads in the training set, produces some answer, and that's it; there's no notion of training it for longer on the test task, because the learning process there is just a forward pass through the network. All right, to give you a little bit of intuition for what model-agnostic meta-learning does in practice, let's say that we have this distribution of tasks, which is for the ant to run either forward or backward or left or right. If we visualize the policy for the meta-trained parameters, so these are the parameters before any adaptation, we'll see that the ant runs in place, but if we then give it one gradient step with a reward for going forward, it'll go forward, and if we give it one gradient step with a reward for going backward, then it'll happily go backward. Okay, so if you want to read more about gradient-based meta-learning: these are papers that describe various policy gradient estimators, these are papers that talk about improving exploration with model-agnostic meta-learning, and these are a few papers that describe hybrid algorithms that are not necessarily gradient-based but have a similar kind of structure, where they optimize for initializations such that some other optimizer can make good progress. So these are good references to check out if you want to learn more about this topic. |
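To make the "find a theta so that one gradient step helps as much as possible" idea concrete, here is a deliberately tiny scalar version of MAML, not the RL formulation from the lecture: each task i is a quadratic loss L_i(theta) = (theta - c_i)^2, the inner loop is one gradient step, and the outer loop differentiates through that step, so the second-order term shows up as the factor (1 - 2*alpha). All names, constants, and the task family are illustrative.

```python
def maml_meta_train(task_targets, alpha=0.1, beta=0.05, meta_steps=500):
    """Toy MAML: tasks are quadratics L_i(theta) = (theta - c_i)**2.

    Inner loop: theta_i' = theta - alpha * dL_i/dtheta (one adaptation step).
    Outer loop: gradient descent on sum_i L_i(theta_i'), differentiating
    *through* the inner update, hence the factor d theta_i'/d theta = 1 - 2*alpha.
    """
    theta = 3.0  # arbitrary initialization
    for _ in range(meta_steps):
        meta_grad = 0.0
        for c in task_targets:
            grad_inner = 2.0 * (theta - c)              # dL_i/dtheta
            theta_adapted = theta - alpha * grad_inner  # f_theta: one inner step
            # chain rule through the inner gradient step:
            meta_grad += 2.0 * (theta_adapted - c) * (1.0 - 2.0 * alpha)
        theta -= beta * meta_grad / len(task_targets)
    return theta

# Two symmetric tasks with optima at -1 and +1: the meta-learned
# initialization settles between them, one gradient step from either optimum.
theta_meta = maml_meta_train([-1.0, 1.0])
```

In the real RL version, J_i replaces the toy loss, the inner gradient is a policy gradient estimated from samples in MDP M_i, and the second derivatives require the extra care mentioned above.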
CS_285_Deep_RL_2023 | CS_285_Lecture_4_Part_4.txt | In the next part of today's lecture I'm going to give you a kind of whirlwind tour through different types of reinforcement learning algorithms. We'll talk about each of these types in much more detail in the next few lectures, but for now we'll just discuss what these types are, so they don't come as a surprise later. The RL algorithms we'll cover will generally be optimizing the RL objective that I defined before. Policy gradient algorithms attempt to directly calculate a derivative of this objective with respect to theta and then perform a gradient ascent procedure using that derivative. Value-based methods estimate value functions or Q functions for the optimal policy, and then use those value functions or Q functions, which are typically themselves represented by a function approximator like a neural network, to improve the policy. Oftentimes pure value-based methods don't even represent the policy directly, but rather represent it implicitly as something like an argmax of a Q function. Actor-critic methods are a kind of hybrid between the two: actor-critic methods learn a Q function or value function and then use it to improve the policy, typically by using it to calculate a better policy gradient. And then model-based reinforcement learning algorithms will estimate a transition model; they'll estimate some model of the transition probabilities T, and then they will either use the transition model for planning directly, without any explicit policy, or use the transition model to improve the policy, and there are actually many variants in model-based RL for how the transition model can be used. All right, let's start our conversation with model-based RL algorithms. For model-based RL algorithms, the green box will typically consist of learning some model for p of s t plus one given s t comma a t. So this could be a neural net that takes in s t comma a t and either outputs a probability distribution over s t plus one
or, if it's a deterministic model, just attempts to predict s t plus one directly. And then the blue box has a number of different options, so let's focus in on that blue box, since model-based RL algorithms differ greatly in terms of how they implement this part. One option for model-based RL algorithms is to simply use the learned model directly to plan. You could, for example, learn how the rules of a chess game work and then use your favorite discrete planning algorithm, like Monte Carlo tree search, to play chess, or you can learn the physics of a continuous environment for a robot and then use some optimal control or trajectory optimization procedure through that learned physics model to control the robot. Another option is to use the learned model to compute derivatives of the reward function with respect to the policy, essentially through backpropagation. This is a very simple idea, but it actually requires quite a few tricks to make it work well, typically in order to account for numerical stability; for example, second-order methods tend to work a lot better than first-order methods for backpropagating through the policy. Another common use of a model is to use the model to learn a separate value function or Q function, and then use that value function or Q function to improve the policy. The value function or Q function would be learned using some type of dynamic programming method. And it's also fairly common to extend option number three to essentially use the model to generate additional data for a model-free reinforcement learning algorithm, and that can often work very well. All right, value function based algorithms. For value function based algorithms, the green box involves fitting some estimate of V of s or Q of s comma a, usually using a neural network to represent V of s or Q of s comma a, where the network takes in s or s comma a as input and outputs a real-valued number. And then the blue box, if it's a pure value-based method, would simply choose the
policy to be the argmax of Q of s comma a. So in a pure value-based method, we wouldn't actually represent the policy explicitly as a neural net; we would just represent it implicitly as an argmax over a neural net representing Q of s comma a. Direct policy gradient methods would implement the blue box simply by taking a gradient ascent step on theta using the gradient of the expected value of the reward. We'll talk about how this gradient can be estimated in the next lecture, but the green box for policy gradient algorithms is very, very simple: it just involves computing the total reward along each trajectory, simply by adding up the rewards that were obtained during the rollout. By the way, when I use the term rollout, that simply means a sample from your policy; it means run your policy one step at a time, and the reason we call it a rollout is because you're unrolling your policy one step at a time. Actor-critic algorithms are a kind of hybrid between value-based methods and policy gradient methods. Actor-critic algorithms also fit a value function or a Q function in the green box, just like value-based methods, but then in the blue box they actually take a gradient ascent step on the policy, just like policy gradient methods, utilizing the value function or Q function to obtain a better estimate of the gradient, a more accurate gradient. |
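As a small illustration of the green-box/blue-box split just described (illustrative code, not from the course materials): for a policy gradient method the green box just sums up rewards along each rollout, while a pure value-based method never stores a policy at all, only an implicit argmax over a Q estimate, here a plain dict-of-lists Q table.

```python
def rollout_returns(rewards, gamma=0.99):
    """Green box of a policy gradient method: discounted reward-to-go
    at every step of one rollout, computed by a single backward pass."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

def greedy_policy(q_table, state):
    """Blue box of a pure value-based method: the policy is represented
    implicitly as argmax_a Q(s, a), never as its own network."""
    return max(range(len(q_table[state])), key=lambda a: q_table[state][a])

print(rollout_returns([1.0, 1.0, 1.0], gamma=1.0))  # [3.0, 2.0, 1.0]
print(greedy_policy({0: [0.1, 0.5]}, 0))            # action 1
```

An actor-critic method would use both pieces: a fitted value estimate in the green box, and a gradient ascent step, rather than an argmax, in the blue box.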
CS_285_Deep_RL_2023 | CS_285_Lecture_15_Part_2_Offline_Reinforcement_Learning.txt | All right, in the next two portions of the lecture we're going to talk about classic offline reinforcement learning methods that predate the deep RL techniques. Generally these are not the methods that you would start with if you wanted to use offline RL today; you would start with methods that we'll cover a bit later, but I think it helps to discuss these to give everybody the perspective of where a lot of these ideas come from and how people have thought about offline RL and batch RL in the past. And by the way, in terms of terminology, the term batch reinforcement learning was popularized around the early 2000s, and somewhere in the last few years the term offline RL became a little bit more prevalent, because it's a bit more self-explanatory, it better captures what's really going on, and the term batch is kind of overloaded in current machine learning parlance. But they mean exactly the same thing, so if you see a paper that says batch RL, it means exactly the same thing as offline RL. The topic that I'll discuss in the next portion of the lecture is batch RL, or offline RL, by importance sampling. Most of the methods that we'll talk about in this course for offline RL are value-based methods, dynamic programming based methods, but we will discuss importance sampling based methods a bit, because they do occupy a significant portion of the classic literature on these kinds of techniques. Now, a lot of this you will already be familiar with from our discussion of importance sampled policy gradients, and that forms the basic idea for importance sampling techniques for offline and batch RL. So we have our RL objective, we have our policy gradient, just like what we covered before, grad log pi times Q, and the problem that you're all hopefully familiar with by this point is that estimating the
policy gradient requires samples from pi theta. So if you only have samples from pi beta, what you would do is importance sampling, and we've learned all about importance sampling so far: we multiply our policy gradient by an importance weight, which is the ratio of the probabilities of a trajectory under pi theta and pi beta, and as we saw before, when we write out those probabilities, all the terms that don't depend on the policies cancel out. So this is just a recap of the policy gradient lecture. The ratio of the trajectory probabilities under the two policies consists of a product of initial states, transitions, and policies, but because the initial state and transition probabilities are exactly the same for both pi theta and pi beta, it's only the ratio of the action probabilities that shows up. And as we discussed before, this is a perfectly reasonable, unbiased way to construct an estimator for the policy gradient using only samples from pi beta, but it has a big problem, because you are multiplying together these action probabilities, and the number of probabilities that you're multiplying is in general O of capital T, which means that the weights are exponential in capital T, which means that the weights are likely to become degenerate as capital T becomes large. By degenerate I mean that one weight will become very big and all the other weights will become vanishingly small, which means that effectively, as T increases, you're estimating your policy gradient using one fairly arbitrary sample. Mathematically, what this means is that although the policy gradient with these importance weights is unbiased, meaning that if you were to generate many, many different samples, or run the estimator many, many times with independent samples, on average it would in fact give you the right answer, the variance of the estimator is very large; in fact, it is exponentially large, which of course means that you need exponentially many samples to get an
accurate estimate. Can we fix this? Well, before I actually break down this equation, one comment I would make here is that we did see, in our discussion of advanced policy gradients, that a common way to use importance sampling in practical modern policy gradient methods is to simply drop the pi of a given s terms in those weights for all the time steps prior to t, and in the advanced policy gradients lecture we learned how this is reasonable to do if pi theta and the policy that generated the data, which in our case is pi beta, are similar. That doesn't really apply to offline RL, because in offline RL the whole point is to get a much better policy. So the short answer to "can we fix this" is no, we can't, but we can ruminate on this point a little bit more, and to do that we can separate those weights into two parts. You can think of the weight as a product of all the action probabilities for the whole trajectory, and you can break that into two halves, the product from 0 to little t and the product from little t to capital T, and you can put one of those halves in front of grad log pi and one of them after. Now, I didn't actually change anything; because multiplication commutes, this is just exploiting the commutative property to write the same exact importance weight in a different way. But writing it in this way makes it a little bit more apparent that there are really two parts to the importance weight: the part that multiplies all the actions prior to little t, which you can think of as essentially accounting for the fact that pi theta has a different probability of reaching the state s t than pi beta does, and then all the stuff afterwards, which basically accounts for the fact that the value q hat that you estimate by summing up the rewards from pi beta might be different from the value that you would get from pi theta. So the second part of the weight accounts for the difference in reward
to go, and the first part accounts for the difference in probability of landing in s t, because we have states sampled from d pi beta and we want states from d pi theta, and the second part accounts for having the incorrect q hat, because q hat here, in the classic Monte Carlo policy gradient, is formed by just summing up the rewards that you saw from pi beta, and instead what you want is the rewards from pi theta. So you could disregard the first term; that's what classic on-policy techniques with multiple gradient steps do, and that's what, for example, PPO does. These are methods that do collect active samples, but then they take many gradient steps with an importance sampled estimator and then collect some more samples, and the justification for dropping that term is basically that if the policy that collected the data is close enough to your latest policy, then it's okay to disregard it, because you have a bound, as we discussed in the advanced policy gradients lecture. So that could be a reasonable approximation, but only if you are willing to not have pi theta deviate too much from pi beta. So we could talk about just the other term. For the other term, naively you would estimate q hat by just summing up the rewards from pi beta, but what you want is to sum up the rewards from pi theta, so you could think about breaking up these importance weights even further. You could say that the sum q hat is really a sum from t prime equals t to capital T of the reward that you actually saw at t prime, multiplied by the whole importance weight. I didn't change anything here; that's just the distributive property. But you know that actions in the future don't affect rewards in the past, so one of the things you could do is, for any time step t prime, sum up only the action probabilities from t to t prime, not from t to capital T. So essentially I have the reward at the current time step, and that thing
doesn't affect, or rather is not affected by, the action two time steps from now, so I can exclude that action from the importance weights. The importance weights are still going to be multiplying O of capital T terms, so in terms of big O this didn't actually get any better, but it does mean that we have lower variance for time steps closer to the current one. So it's still exponential, but it's a little bit better. In fact, it turns out that to avoid exponentially exploding importance weights, we must use value function estimation; there's actually no way to avoid the exponential problem altogether without using value function estimation. But that hasn't stopped various techniques in the literature from trying to still make this not as bad. None of them avoid the exponential problem altogether, but there are many ways to still reduce the variance to make it more manageable. One of them is the one on the slide, which is to only multiply together the action probabilities from t to t prime, rather than from t to capital T, for the reward at time step t prime. But there are better ways to do it. Later on we'll talk about how this would work if you knew q pi theta, or if you had to learn q pi theta, but first let's conclude our discussion of importance sampling with a few other ideas that have been explored in the literature. One idea that is worth discussing, because it has served to inspire quite a few more recent techniques, is the idea of the doubly robust estimator. You can think of the doubly robust estimator as a little bit like a baseline, but for importance sampling. So this is just the importance sampled value estimate from the previous slide, and it's still exponential in the time horizon; notice that it's multiplying together the action probabilities from little t to little t prime, and little t prime goes all the way up to capital T, so at the very last time step you're still multiplying together O of
capital T probabilities. For simplicity I'll just drop the indices, and I will turn my t primes into t, just to keep it simple. So where before I was writing v of s t, now I'll just write it as v of s 0; I'll just write it out for the initial time step, mostly to declutter my notation, but you can basically replace the zero with t and you would get all the stuff from before. And what I'm going to try to do is reduce the variance of these importance weights further. I'm going to introduce a little bit of notation: rho t prime, where rho t prime will denote pi theta of a t prime given s t prime, divided by pi beta of a t prime given s t prime. So I'll just condense that ratio into rho t prime. Now we can see that this importance sampled value estimator is a sum over all the time steps of a product of all the rhos up until that time step, times gamma to the t times r t. Okay, and it's that product of rhos that we're concerned about. If we were to actually expand the sum, you would get the first term, which is rho 0 times r 0; then you get the second term, which is rho 0 times gamma times rho 1 times r 1; and then you would get rho 0 times gamma rho 1 gamma rho 2, et cetera, times r 2. And I wrote it in a slightly counterintuitive way intentionally, where I actually interleave the gammas with the rhos. I could also just collect all the gammas and write gamma to the t, but I intentionally wrote it this way so that you get this alternating pattern of rho gamma rho gamma rho gamma, and this is going to be important later. So what we can then do is put some parentheses around things: you'll notice that every term in the sum starts with rho 0, so you can just take all the rho 0 out and collect all the other terms in parentheses. Now you have rho 0 times, in parentheses, a big sum which consists of all the other stuff, r 0 plus gamma plus all the future
stuff. And then you can repeat the process: you can collect all the terms that have rho 0 and rho 1, and that's the second set of parentheses; that's why you have gamma rho 1 and then, in parentheses, r 1 plus all the other stuff, and you can just keep going like this and you get all these nested summations. We're not actually changing anything; we're just using the distributive and commutative properties of multiplication and addition to group these terms together. Okay, so just a little bit of arithmetic, a little bit of algebra. Let's call this v bar superscript T; v bar superscript T is an importance sampling estimator of v pi theta of s 0. And now you can notice that there's a recursion here: v bar capital T plus 1 minus little t is equal to rho t times, in parentheses, r t plus gamma times v bar capital T minus little t. So essentially v bar to the capital T minus little t, that's the stuff in parentheses after the gamma. If this is not completely obvious, you may want to pause the video here; you could consider even getting out a little sheet of paper and working this out to convince yourself that it's true. This is a little subtle: we're introducing a little bit of notation to induce a recursion that allows us to describe this importance sampling estimator in a recursive way. Our goal is to ultimately get v bar capital T, and this recursion describes a way to essentially bootstrap our way to v bar capital T. So now let's talk about doubly robust estimation. Doubly robust estimation is a little bit easier to derive first in the case of a bandit problem. In the bandit case there's only one time step, and all you're trying to do is estimate the reward. Now, you can still do this with importance sampling; doing a bandit with importance sampling is a little weird, but it works, and it gives us the intuition for the multi-step case. So a regular importance sampled bandit estimator would just be rho of s, a times r of s, a. We have rewards from some other
distribution, and we're going to multiply them by an importance weight, and that'll give us the value of our bandit. And this is a contextual bandit, so this is a bandit that has a state. But now let's say that we have some guess as to the value function. This guess doesn't have to be very accurate. So we have some guess v hat of s and some guess q hat of s, a. And how do we get this guess? Well, maybe we just train a neural network to regress onto the values. One thing we need here is that v hat of s actually be the expected value of q hat of s, a with respect to the actions, but that's pretty easy to get: you could just estimate q hat of s, a and then estimate v hat of s with samples from your policy. Okay, then the doubly robust estimator basically takes the importance weighted rewards, subtracts your estimated function approximation, and adds back in its expected value. So this is a lot like the control variates or baselines that we learned about before, and the doubly robust estimator is going to be unbiased in expectation, just like the baseline, regardless of the choice of q hat, so long as v hat is in fact the expected value of q hat. But of course the closer q hat is to the true Q values, the lower the variance of this will be, because in the best case, if q hat perfectly cancels off r of s, a here, then the second term, the high-variance importance sampled term, goes to zero, and the first term, which has very low variance, dominates. So just like the baseline allowed us to reduce variance, this function approximator allows us to reduce the variance of this importance sampled estimator. Now, this is the bandit case. The real trick with the doubly robust estimator is to extend it to the multi-step case, and what we're going to do is take this thing in the blue box and apply the same idea to these v bars. So let's do that. We're going to define, in the same way that we defined v bar capital T
plus 1 minus little t, a doubly robust version, v bar DR capital T plus 1 minus little t, and the way we're going to do that is to directly substitute the bandit case into this. In the bandit case we were doing an importance sampled estimate of r of s, a; now we're doing an importance sampled estimate of r t plus gamma times v bar capital T minus little t. So essentially this equation that I have, the v bar DR, is exactly the bandit case in the blue box, but with r replaced by r plus gamma times v bar capital T minus little t; it's essentially a recursive version of the bandit case for the multi-step problem. In order to do this, you need to construct an estimate q hat of s t, a t, and q hat of s t, a t could be some neural net, your favorite function approximator, and you need to get its expected value with respect to the actions distributed according to your policy pi theta, and that gives you v hat. And then, just like the recursion v bar capital T plus 1 minus little t can be used to obtain the importance sampled estimate, the doubly robust recursion can be used to obtain a doubly robust version of the importance sampled estimate. Okay, so that's the idea behind doubly robust off-policy value evaluation. Now, this is an off-policy evaluation method, an OPE method; it is not a reinforcement learning method. So this will give you estimates of values, and you could use those values just to evaluate which policy is better, or you can plug them into an importance sampled policy gradient estimator. There is one more topic that I want to very briefly cover. I'm not going to go into the technical details, but I want to describe it so that all of you are aware it exists, which is marginalized importance sampling. So far, when we talked about importance sampling, we always talked about importance sampling for the case where you're computing importance weights as ratios of action probabilities, but it is actually possible to do importance sampling with state
probabilities. The main idea in what is called marginalized importance sampling is that instead of using a product of action probabilities like we did before, we're going to estimate importance weights that are ratios of state probabilities or state-action probabilities. Now, the difference between states and state-actions is not actually that big, because once you have a ratio of state probabilities, it's very easy to turn it into a ratio of state-action probabilities, since you know the policy's probability of a given s. But if you can recover these state-action importance weights, then it's very easy to estimate the value of some policy, just by summing over all of your samples and averaging together the weighted rewards. So doing off-policy evaluation is trivial if you can recover these w of s, a, and of course if you can do off-policy evaluation, then you could also plug those value estimates into policy gradients as well if you prefer, but typically marginalized importance sampling in the literature has been used just for off-policy evaluation; I haven't seen it used very much for policy learning, although I think that should be possible. The biggest challenge with this is of course how to determine w of s, a, and typically, since we don't know the state marginals of either our policy or the behavior policy, what we would do is write down some kind of consistency condition on w and then solve that consistency condition. You can think of this consistency condition as kind of the equivalent of the Bellman equation, but for importance weights. So the Bellman equation describes a consistency condition on value functions, for example that Q of s, a should be equal to r of s, a plus gamma Q of s prime, a prime. That's a consistency condition, and if you can make that equality hold true everywhere, then you will recover a valid Q function. In the same way, you can write down a consistency condition for w, and if you can make that condition hold true everywhere, then you will have recovered the true
state or state-action importance weights. So here is one such consistency condition; this is from a paper by Zhang et al. called GenDICE. I won't go through this in detail, because the derivation is kind of involved and I've already spent quite a bit of time on importance sampling, but I want to give you a taste of the general gist. If you look at this consistency condition, you can see that on the left-hand side you have the probability of seeing a state and an action, s prime, a prime, under the behavior policy pi beta, times the weight. Now, what's going on here? Well, if you multiply d pi beta by the weight, you get d pi theta, because the weight is d pi theta over d pi beta. So what this is really describing is a condition that state-action marginals need to obey. When it comes time to actually optimize this in practice, of course, we're going to subtract the right-hand side from the left-hand side, and as long as we get a multiplier of d pi beta in front of everything, then we can approximate it with samples. So in reality we never actually have to estimate these d pi beta terms directly; we always use samples from our dataset as samples from d pi beta. The left-hand side of this is just the probability of seeing s prime, a prime under the policy pi theta. The probability of seeing s prime, a prime is basically equal to the probability that you start in s prime, a prime, and that's what the first term captures, so it's the probability that you start in s prime times the probability that you take the action a prime, plus the probability that you transition into s prime, a prime from another state. The probability that you transition into s prime, a prime from another state, in this case s, a, is given by the probability that you are in that state, which is d pi theta of s, a, and that's exactly what the product of those last two terms gives you, times the probability that you make that transition, which is what you get by multiplying by p of s
prime given sa times pi theta a prime given s prime so this is the probability of starting in s prime a prime and this is the probability of transitioning into it from another state multiplied by the probability that you are actually in that state which is what the last two terms account for so solving for wsa typically involves some kind of fixed point problem so it involves subtracting the right hand side from the left hand side and minimizing the square difference and the trick in deriving these algorithms is to basically turn that difference into an expected value under d pi beta because once you can express it as an expected value under d pi beta then you can use samples from d pi beta to estimate it and that means that you never have to explicitly approximate you never need to have a neural net that approximates d pi beta or d pi theta you only need a neural net that approximates w okay so this is the basic idea of marginalized importance sampling write down a relationship between the w's at future states and current states turn that relationship into an error and then estimate that error using only the samples in your data set and then typically you would represent the w's with some kind of neural net okay so that was kind of a quick whirlwind tour of important sampling for off policy evaluation and batch rl if you want to learn more about this uh classic work on important sample policy gradients and return estimation uh by doing a pre-cup as well as by pension and shelton doubly robust estimators uh very interesting if you want to learn about ope with importance weights so certainly for bandits and for small mdps doubly robust estimators are typically the methods of choice uh if you're doing things like ad placement uh stuff like that but there are better techniques for actual learning policies these days some analysis and theory so this paper by philip thomas high confidence of policy evaluation provides a lot of analysis of these types of estimators and 
if you want to learn about marginalized importance sampling consider checking out these two papers as well as the zhang adult paper that i referenced on the previous slide |
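that consistency condition is easy to sanity-check numerically. below is a small illustrative sketch (not from the lecture, and using the discounted-occupancy convention with a (1 - gamma) initial-state term) that builds a random tabular MDP, computes d pi beta and d pi theta in closed form, sets w = d pi theta / d pi beta, and verifies that the left- and right-hand sides of the condition match exactly:

```python
import numpy as np

def occupancy(P, pi, p0, gamma):
    """Discounted state-action occupancy d^pi(s, a) for a tabular MDP."""
    n_s = p0.shape[0]
    # state-to-state transition matrix under pi: T[s, s'] = sum_a pi(a|s) P(s'|s, a)
    T = np.einsum('sa,sap->sp', pi, P)
    # d(s') = (1 - gamma) p0(s') + gamma * sum_s d(s) T[s, s']
    d_s = np.linalg.solve(np.eye(n_s) - gamma * T.T, (1 - gamma) * p0)
    return d_s[:, None] * pi  # d(s, a) = d(s) pi(a|s)

rng = np.random.default_rng(0)
n_s, n_a, gamma = 3, 2, 0.9
P = rng.random((n_s, n_a, n_s)); P /= P.sum(-1, keepdims=True)   # P[s, a, s']
p0 = np.ones(n_s) / n_s
pi_beta = rng.random((n_s, n_a)); pi_beta /= pi_beta.sum(-1, keepdims=True)
pi_theta = rng.random((n_s, n_a)); pi_theta /= pi_theta.sum(-1, keepdims=True)

d_beta = occupancy(P, pi_beta, p0, gamma)
d_theta = occupancy(P, pi_theta, p0, gamma)
w = d_theta / d_beta  # the marginalized importance weight

# left-hand side: d^{pi_beta}(s', a') * w(s', a'), which equals d^{pi_theta}(s', a')
lhs = d_beta * w
# right-hand side: (1 - gamma) p0(s') pi_theta(a'|s')
#   + gamma * sum_{s,a} d^{pi_beta}(s, a) w(s, a) P(s'|s, a) pi_theta(a'|s')
inflow = np.einsum('sa,sap->p', d_beta * w, P)
rhs = ((1 - gamma) * p0 + gamma * inflow)[:, None] * pi_theta

assert np.allclose(lhs, rhs)  # the consistency condition holds for the true w
```

in practice, of course, you never have d pi beta or d pi theta in closed form; this is only a check that the fixed-point relationship the lecture describes is the right one, with the actual algorithms estimating the residual from samples.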
CS_285_Deep_RL_2023 | CS_285_Guest_Lecture_Dorsa_Sadigh.txt |
here at Berkeley and she works on all sorts of things that broadly speaking involve human interaction and robots and learning, I guess some combination, some Venn diagram involving those three things, and today she'll tell us what interactive learning is right thank you so much thanks for the introduction Sergey and thanks for inviting me excited to be here for the last lecture of this course actually first time that I'm in this building so it's a fancy building I really like it um so yeah so today I want to talk a little bit about some of the work that we have been doing in my lab around this idea of interactive learning and Sergey was just asking what is interactive learning and when I think of interactive learning basically I think about this idea of learning from various sources of human data you could learn a robot policy you could learn a reward function you could learn a representation but the idea is that there's a human that is providing that data you could interact with that human or you could collect offline data from that human and you could try to actually learn from that and that is a topic that we have been thinking about in my lab for a period of time so let's start the talk with that idea and I'm going to talk about a recent journey that we have had in the age of large language models and how some of these things have changed over the past year and how our views have changed about this idea of interactive learning now that we have large language models and vision language models and how we should think about that um I know I have until 6 pm, might not be able to get through all of this, which is fine, feel free to stop me at any point in time, we can chat about things, I don't really need to go through all the slides okay so let's start with one of the tasks that we are pretty interested in in the lab just to motivate some of the
problems that I think are difficult in robotics and one of these problems is the problem of assisted feeding so we have been looking at this problem of picking up food and thinking about transferring food to a person's mouth in a way that is safe and comfortable and this is a very interesting robotics problem because you have to pick up things like food that's deformable and you also need to transfer that to a person so you have human-robot interaction and you need to make sure that that is safe and comfortable and it started with a baseline policy that is doing visual servoing so it has a camera, it figures out where your mouth is, and it tries to go to a fixed offset from your mouth, and it's extremely uncomfortable, looking at this video makes me sad, and what we wanted to do was think about how we could go about improving some of these policies and the question is maybe we could use some level of learning here maybe we could try reinforcement learning right one of the things that you guys study in this class um and it turns out that reinforcement learning is really difficult here because first I don't really have a good simulator of how the food deforms and how the mouth interacts with the carrots and the fork and also if I were to do reinforcement learning in the real world I really don't want to hit the person's nose and get a negative reward that also wouldn't be that ideal um so there's also the other extreme which is where we could collect some amount of data we could try to do imitation learning right we could collect some amount of expert data and from that expert data maybe I can figure out at least what the right parameters of this food transfer are and that's something that we have been working on in general but it turns out that even when you're looking at data collection and doing imitation learning, collecting data is honestly not that easy right when you start going that route of let me do supervised learning, let me just collect a lot of data from humans and just imitate humans, you start to see that well first off you need specific types of devices to collect that data, you need to have a VR system, or there's that really nice ALOHA system that Sergey and Chelsea have been looking at right, the bimanual setup where you could actually teleoperate things really nicely, but again you need four arms, they're cheap I understand, but you still need to have a full-on setup to actually collect that data and that tends to put some constraints on data collection and then beyond that when you start to collect that data there's a question of what that data looks like, human data tends to be pretty different from scripted data if you look at human data um we tend to call it suboptimal, I don't know if suboptimal is necessarily the right word for human data, but humans for example have pauses in their demonstrations and the very first thing you do is you remove all the pauses because you want to have clean data I think a good question to ask is why is it that humans have pauses in their data and I'd argue that one reason is when you're looking at humans teleoperating and picking up an item they're not just picking up the item, they're thinking about other things, thinking about dinner right, they're doing other cognitive processes in their head so you can't just restrict them to picking up the cup and that makes the data maybe not as clean as scripted data and there are other types of biases or suboptimalities that are present in human data and again and again we see this when we collect human data and we try to learn from that human data so in practice it turns out that demonstrations are a good source of data but they're not the only thing that we could tap into and for a period of time something that we have been excited about, and we are still looking at demonstrations, is that maybe we could tap into other sources of data so if you're looking at humans they tend to leak information all the time and there are other ways of interacting with humans beyond expert demonstrations that we could actually tap into and try to learn from and one of these sources of data is pairwise comparisons so what I mean by that is instead of asking a person to teleoperate the robot I could show two different trajectories, or I don't know, five different trajectories, and I could ask a person what their preferences are or ask a person to provide a full-on ranking and from that I could learn something about what it is that a person actually wants and we've been looking at this idea of learning from pairwise comparisons for a period of time but there are also other sources of data like if you have a robot it does have an embodiment, you can physically move it, and when you're physically moving it there's quite a bit of information in that movement that you could actually learn from and also like I mentioned suboptimality and there are various ways of handling suboptimality so I'm sure in this class you have looked at offline RL, that's potentially one way of going about learning from suboptimal demonstrations, but you could also take more imitation-learning-type approaches and think about weighted approaches that downweight the suboptimal parts and then do imitation learning on suboptimal demonstrations and that is also a perfectly fine way of going about things so we have been looking at all of these different domains but in the first section of the talk what I'd like to talk a little bit about is mainly
that idea of learning from pairwise comparisons and then after that I want to switch to this fact that we are now living in the world of large language models and how things evolve in that setting so let me start discussing interactive learning, learning from preferences, first and then after that we can talk about LLMs all right so what does it mean when you're trying to learn human preferences from pairwise comparisons so in practice when we're asking these types of questions of people we could try to figure out how they act in the world, like build a model of the person, but you could also try to figure out how the person wants the robot to actually act in the environment for example how a person wants an autonomous car to drive or how a person wants, I don't know, a robot to open a drawer, or maybe how an exoskeleton should help you walk, and the question is what can we learn from these pairwise comparisons so in this body of work what we're looking at is reward functions as a representation that you could actually learn from pairwise comparisons they seem to be an okay and compact representation that you could learn and then after that you could optimize, say via reinforcement learning, and that tends to be a powerful way of going about learning models of humans I want to point out that reward functions are not the only thing that you could learn there's actually recent work from Brad Knox's group where they are looking at advantage functions and it seems that human preferences, like these pairwise comparisons, are actually more representative of advantage functions as opposed to reward functions um but for this first section let's imagine we're trying to learn reward functions okay so then the setup is that I'm going to show two different trajectories, or n different trajectories, to a person and then I'm going to ask the person well what do you prefer and then based on the response, whether they like one over two or two over one, that is going to give me some information about the underlying reward function and for a second imagine that this reward function is simple, it doesn't need to be this simple, but for a second imagine that the reward function is simply a linear combination of a set of nonlinear features, some weight vector w times some feature vector phi, okay so for instance you can imagine w lies in a three-dimensional space right w1 w2 w3 and then you could really imagine that your w lies on the unit ball because the thing you care about in your reward function is the relationship between these different features and you can sample from this unit ball, your true w is somewhere here, okay then every question, every query that you're asking a person, do you like A or do you like B, corresponds to a separating hyperplane in this space, corresponds to the plane of w dot phi being equal to zero, that phi being the difference between the features of trajectory one versus trajectory two okay and the human telling me well I like A over B or I like B over A basically tells me which side of the hyperplane is preferred, do I like the right side of the hyperplane or do I like the left side, and from that response what I could do is realize that the true w lies on the right side of the hyperplane as opposed to the left side so in practice from one question that I ask a person I could cut down my search space, I could almost remove everything from the wrong side of the hyperplane, I'm not going to do that because humans are noisy, so I'm actually going to use a Boltzmann-rational model of humans, assume that they are noisy, so I'm going to resample my points and put higher weights on the w's that are on the right side of the hyperplane because the person told me they like A over B, and that tells me the true w is somewhere on that side okay so that was one question so then the interesting research question here is what is the sequence of informative diverse questions that I could be asking a person so that I could quickly converge to that true reward function and that very much sounds like an active learning problem right like the same active learning problem that we have seen in machine learning from way back it is very much a similar type of question you might have seen this in movie recommendations, if you look at the Netflix challenge from, I don't know, 2007 or something, they were also looking at a very similar question right, what is a pair of movies that I can ask you your preferences about so that I could quickly figure out what your preferences are, and that was movie preferences, this is about your reward preferences about how a robot should do a task, it could be a very functional thing
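as a side note, that soft Boltzmann-rational belief update over sampled w's is only a few lines of code. here's an illustrative sketch (my own toy dimensions and constants, and with random rather than actively chosen queries):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# hypothesis samples: candidate reward weights w on the unit sphere
n, d = 10000, 3
W = rng.normal(size=(n, d)); W /= np.linalg.norm(W, axis=1, keepdims=True)
belief = np.ones(n) / n  # uniform belief over the sampled w's

w_true = np.array([0.6, -0.64, 0.48]); w_true /= np.linalg.norm(w_true)

for _ in range(30):
    phi = rng.normal(size=d)  # feature difference Phi(A) - Phi(B) of a query
    # Boltzmann-rational human: answers "+1" (A over B) with prob sigmoid(w_true . phi)
    ans = 1 if rng.random() < sigmoid(w_true @ phi) else -1
    # soft update: upweight w's on the answer's side of the hyperplane w . phi = 0
    belief *= sigmoid(ans * (W @ phi))
    belief /= belief.sum()

w_hat = belief @ W             # posterior mean direction
w_hat /= np.linalg.norm(w_hat)
```

after a few dozen binary answers the posterior mean `w_hat` aligns closely with `w_true`; choosing the queries actively, as described next in the talk, gets there in fewer questions.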
like how do I open a drawer, or it could be a very stylistic thing, like your preferences about being gentle when you're trying to, I don't know, open a drawer, so nowhere here are we constraining what that reward function actually represents so when I say human preferences what I mean is truly a reward function, a functional reward function that you could go and optimize and then get a policy for your robot and actually execute that policy so how do you solve this, if this is just like movie recommendations then it should be easy right, we should just run the usual active learning methods and that should generally work and that is true to some extent, I think there are a number of challenges that are worth mentioning, which is that we are trying to bring this idea to the field of robotics where we're looking at a continuous space, it is not like I have a library of movies that I can pull from, I don't have a library of trajectories that I can pull from, what I would like to do here is actively generate questions, I would like to actively synthesize two trajectories and ask a person well which one do you prefer, and that seems to be a little bit more complicated than just optimizing for information when I simply have a library of, I don't know, movies to pull from so um I just have one slide on this, I'm simplifying this a little too much, um Erdem was a PhD student in our group, his thesis is basically on this, I'm summarizing his thesis in one slide, but basically the idea here is what we would like to do is find a set of trajectories, like two trajectories, that you'd like to generate, and the phi here corresponds to those two trajectories, the phi is the difference between the feature vectors of those two trajectories, and what we'd like to do is find trajectories that are information gathering so what we're going to optimize is some sort of information-theoretic metric and we have explored a number of information-theoretic metrics, you could use information gain, you could use determinantal point processes for a measure of diversity, or here in this case what we are really optimizing is the volume that would be removed from the hypothesis space after asking that question, because when I ask a question that's a hyperplane, you're going to answer it, and based on that answer I'm going to remove something from the hypothesis space over the true w, and what I would like to do is find a question that's informative, that is going to remove a lot of volume from that hypothesis space, and this objective is simply saying maximize that volume, what is that volume, I said we're looking at a unit ball so the volume of the unit ball is one, if you tell me you like A over B or if you tell me you like B over A, in either case there's going to be some volume that would be removed from the hypothesis space based on some human update function that is noisily rational, and what I'm going to do is maximize the minimum over the two answers of the volume that would be removed, so I'm maximizing the minimum volume that would be removed from the hypothesis space okay so that is the metric of information that we're optimizing here, this is a submodular objective so you could use the same sort of theory that submodular optimization methods usually use and then you can get some convergence results here because of that, but there's one extra thing that I want to mention, which is that again we are in a robotics setting so this is not an unconstrained optimization, in this case we are looking at a robot that needs to satisfy things like dynamics, and since I'm generating these trajectories, synthesizing these trajectories from the continuous space, I need to make sure that
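the max-min volume removal objective just described can be sketched in a few lines. this is an illustrative toy (my own constants, ignoring the robot's dynamics constraints): under the soft Boltzmann update, the worst-case volume removed over the two possible answers is largest for the query that splits the current belief closest to 50/50:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# sampled hypotheses w on the unit sphere, with a belief weight on each
n, d = 5000, 3
W = rng.normal(size=(n, d)); W /= np.linalg.norm(W, axis=1, keepdims=True)
belief = np.ones(n) / n
w_true = np.array([1.0, 0.0, 0.0])

# warm-start the belief with a few answered queries so it isn't perfectly symmetric
for _ in range(5):
    phi = rng.normal(size=d)
    ans = 1 if rng.random() < sigmoid(w_true @ phi) else -1
    belief *= sigmoid(ans * (W @ phi)); belief /= belief.sum()

def worst_case_volume_removed(phi):
    # if the human answers "+", belief mass r survives (softly) and 1 - r is removed;
    # if they answer "-", mass r is removed instead
    r = belief @ sigmoid(W @ phi)
    return min(r, 1.0 - r)  # max-min volume removal favors queries near r = 0.5

# stand-ins for feature differences of candidate trajectory pairs
candidates = rng.normal(size=(200, d))
best = max(candidates, key=worst_case_volume_removed)
r_best = belief @ sigmoid(W @ best)  # the chosen query splits the belief nearly in half
```

the real method synthesizes the two trajectories directly, subject to the robot's dynamics, rather than scoring a fixed candidate pool as this toy does.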
those trajectories satisfy a bunch of constraints, so again I need to really solve a constrained optimization here to be able to generate the next most informative question that I could be asking okay so that was the extent of math that I want to go into here, again simplifying it to some level, but if you do that in a very simple setting you're going to be able to learn reward functions for simple tasks, like in a driving simulator, after zero questions you don't really know what to do, but after 30 questions you kind of learn how to drive, and then you learn how to keep heading, and then finally after 70 questions the car that we are driving, the orange car, generally learns how to do collision avoidance and drive and avoid obstacles and things like that and I do think this is interesting because this is 70 binary questions, I didn't collect any demonstrations, I just asked 70 binary pairwise questions and from that I'm able to learn something that I had a hard time tuning myself, getting this autonomous car to drive in this very simple driving simulator one other point that I want to mention here is that the simplest form of this was with that linear reward so the common complaint here is that oh that was a linear reward, at the end of the day I have a neural reward, I'm not going to write a linear reward function, and that's very fair right, in practice we're going to write nonlinear reward functions, neural reward functions, and in extensions of that work we have been looking at this idea of active learning where you lose a lot of the theory and math around it but you could still optimize for the most uncertain set of questions and still try to learn what the reward function is and one of the settings that I actually looked at this in is a setting where you're wearing an exoskeleton and you're trying to learn human walking preferences when they're wearing these exoskeletons so this was in collaboration with folks at Caltech and usually people wear these exoskeletons in rehab when they're trying to learn how to walk again and different people have different gait preferences so what you'd like to do is query people to figure out what those gait preferences are so you could quickly converge and get a sense of how to help this person walk in these settings one other point that I want to mention is that every time I talk about active learning to a machine learning crowd there's usually skepticism and it's a fair skepticism because if you look at active learning, any active learning paper has two curves that kind of look like that, one is active learning, the other one is random sampling, they look close and eventually converge, and is it worth it to do active learning, and I think it's a very fair question, and I'd argue that in a lot of machine learning settings it's actually not worth it to do active learning, random sampling is simpler and it's just fine, and you have the compute to actually run random sampling, but I'd argue that in this robotics domain the difference between those two curves is actually important, that difference is going to be the difference between sitting in that system for three hours versus half an hour, like if you're running a user study, if you're interacting with actual people with a real robotic system, the difference between three hours and half an hour does matter so I do think there's quite a bit of room for active learning in the space of robotics and interacting with humans because that time complexity and sample complexity starts to matter way more than in a lot of
machine learning settings all right so um so we have talked about this idea of learning human preferences and asking pairwise comparison questions of a single human and in general I think that's an okay idea that you could use in robotics but an interesting thing is that it applies outside of robotics too right, it doesn't need to be robotics, so we decided to use a very similar idea in a different domain, a negotiation domain so this is a negotiation domain where we have a bunch of items, so we have one book, two hats, two balls, a bunch of shared items, and then the idea is that we have Alice and Bob here and Alice and Bob are trying to negotiate over the shared items okay and Alice and Bob individually have their own utility so they can see their own utility and the idea here is that Bob can come in, and we have a bunch of actions, so proposing, or accepting a proposal, or rejecting a proposal, and so on, and Bob here comes in and says well based on my utility what I'm going to propose is that I take zero books, two hats, and two balls, and the question is what should Alice do, so can I build an AI agent for Alice so that Alice can actually negotiate with Bob, what should I do, any suggestions? what's that, definitely ask for the book? so you're telling me what the policy is right, so you're saying okay a good policy for Alice is to ask for the book, right, that is what Alice says, how do you come up with that policy, what do you guys do in this class? infer a reward function? and then what do you do with that reward function? well right, yeah, so you're saying try to infer a reward function, but Alice knows her reward function, it's right here, Alice has access to her utility, she kind of has a reward function, and if I have access to my reward function I could optimize that, I could run RL right, I could do reinforcement learning with that. I guess does Alice have access to Bob's? Alice doesn't have access to Bob's, so I would suggest... uh-huh, so you could explore and take actions and see how Bob is actually going to respond, you could do a model-based approach and try to actually learn Bob's reward function and use that, or you could take a more model-free approach and just propose a bunch of things, and you have your own reward function but take exploratory actions and see how Bob responds to that, and then based on that decide what to do, okay, that's a perfectly fine way of doing things so you could do reinforcement learning like that, you could also solve the game right, like you could take a game-theoretic approach, you know your utility, or you could solve this problem with reinforcement learning, and the problem with reinforcement learning is if you do reinforcement learning in this setting your Alice agent is going to be a little too aggressive, your Alice agent is going to insist on getting the same thing, badgering and being aggressive, and it is kind of fair for this reinforcement learning agent to be aggressive because nowhere in the reward function did I say well when you're trying to do this maybe try to be polite, or maybe try to be fair, or don't be too aggressive, so there are a bunch of objectives that I never put in the reward function and there's no reason for Alice to optimize for them, but it doesn't sound very human-like okay there's also the other extreme which is I could collect a data set and just do supervised learning, I could do imitation learning, and this kind of game is actually coming from a paper called Deal or No Deal, there's a data set that comes with the paper, and you could actually train on that data set, and it turns out that for some reason that data set is extremely nice, so the supervised learning agent is on the other end of the spectrum and is very agreeable, so your Alice ends up just agreeing with whatever Bob says, and that is also not very human-like so in some sense this is the usual value alignment question, this is the usual reward design question, I don't really have access to the true reward function, this utility is not really capturing the true reward function that we are actually after, and one way of getting at that is active learning, so exactly that algorithm that I mentioned earlier, you could basically apply that algorithm and try to identify novel scenarios, and from those novel scenarios you could ask an expert how you would act in this scenario, and try to identify a better version of this reinforcement learning agent that is more aware of some of these properties that you're actually after so that is a perfectly fine way of doing things and in fact Minae kind of did this at ICML in 2021 with a targeted data acquisition approach where you actively ask questions and you end up with a better negotiation agent, better than reinforcement learning plus imitation learning at capturing human preferences, but the interesting thing is you could do something else, instead of asking a single person, instead of doing this active learning from a single person, another thing that you could do is query a large language model and just ask a large language model what it thinks and treat the large language model as a proxy reward function for this task and at the time when we were thinking about this we were extremely worried about this idea because it sounded kind of crazy to use the LLM as a reward function, but actually since then
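to make the shape of that loop concrete, here's a toy, fully stubbed sketch: `llm_judge` is a hypothetical placeholder for prompting a real model with a rollout and reading off its yes/no judgment, and the two action names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in for prompting an LLM with a rollout and asking
# "was Alice polite here?". A real version would format the dialogue into a
# prompt and read off the model's yes/no (log-)probability; this stub just
# scores one invented action style as polite.
def llm_judge(action):
    return 1.0 if action == "offer_compromise" else 0.0

actions = ["offer_compromise", "repeat_demand"]
logits = np.zeros(2)  # a tiny tabular "policy" over negotiation moves

for _ in range(500):  # REINFORCE, with the judge's score as the proxy reward
    probs = np.exp(logits) / np.exp(logits).sum()
    a = rng.choice(2, p=probs)
    r = llm_judge(actions[a])            # reward signal from the (stubbed) LLM
    grad = -probs.copy(); grad[a] += 1.0 # gradient of log pi(a)
    logits += 0.1 * r * grad             # no baseline; fine for this toy bandit

probs = np.exp(logits) / np.exp(logits).sum()
# after training, the policy concentrates on the behavior the judge rewards
```

the real pipeline is richer, with prompts rebuilt from each new rollout and the score used to shape a full RL objective, but the control flow of querying a judge inside the training loop is the same.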
there are a number of works that are using LLMs and VLMs as reward functions or as success detectors and this was very much trying to do the same thing in this negotiation game right, because it's a text-based game I have a lot of information on the internet about negotiations and about text, and because of that an interesting thing I could do is simply ask an LLM was this negotiation okay or not right, I don't need to ask an LLM to write down the reward function for me, but I can ask an LLM to assess was this okay or not, was this polite, was this fair, and the way we actually do this is we create a prompt where we say Alice and Bob are negotiating, let's say trying to split a set of books, hats, and balls, and then what we're going to do is give it an example, so we have Alice and Bob negotiating, maybe you're looking at the property of versatility here, so we say well this was a versatile negotiation, and then after that we have a policy, our initial policy is a random policy, we're going to roll out that random policy and get that policy to play with the other agents, and then at the end you're going to ask well was this policy versatile, was Alice versatile in this last rollout, and this is how a prompt looks okay so then you ask an LLM what do you think and from the output of the LLM we can get log probabilities or we could simply get a yes/no answer, but that output is a signal about the reward function right, it's not writing down a reward function for me, but it is some signal that I could actually use, and I could go and use that and further train my agent based on that signal, and once I take that signal I could actually train a reinforcement learning agent with that signal and continue training that and generate a new policy and go and change the pink part of this prompt and call the LLM again and go through this loop so in some sense this is the opposite of RLHF, because I'm training an RL agent by calling an LLM within the training loop of the reinforcement learning, and by doing so I'm getting signals, I'm getting a reward regularizer or a full-on reward function, you could use this in different ways, you could use it to shape your Q function or you could use it to directly shape your reward function, but either way it is acting as a reward shaping strategy to get your policy to actually do the right thing and it turns out that in this negotiation setting it actually works really well so we looked at a number of properties like versatility, being a pushover, being competitive, being stubborn, and so on, and across all of these properties, these are properties where we have access to the ground-truth reward, what we could do is show that using a large language model acts as an okay proxy, it actually matches the ground-truth reward across these different settings, and it outperforms a supervised learning baseline, which honestly is not a fair baseline because like how much data would you give your supervised learning agent, but I think the more important point is that it is actually very close to the ground-truth reward and also in settings where we don't have access to a ground-truth reward we can run a user study and we can ask users well is this agent that is using the LLM as a proxy reward matching your expectations, and it turns out that in general people liked it and think that it matches the correct style that they were after okay so that was all great, that was a negotiation setting, it's kind of expected that an LLM would be good at negotiations and assessing negotiations, what does that mean for robotics, does that tell me how I could use an LLM for robotics, it sounds a little questionable when we are thinking about that and I
And I think a big part of that is the grounding problem. [Audience question] Sorry to interrupt, but the LLM you used — did it involve RLHF? [Speaker] Actually we played around with GPT-J and GPT-2 and some of the earlier models, so no — but it is true that today, if you're using more heavily tuned models, you do need to worry about that compounding factor. So, going back to the robotics setting: there's a question of whether the same ideas transfer, and there are a number of follow-up works to this one. There's a work in collaboration with folks at Google looking at a very similar idea of using a large language model as a reward designer. The idea is that you start with language instructions — very high-level language instructions. It's not about fairness or versatility or those properties of negotiations; it really starts with high-level, semantically meaningful language, like "it is late in the afternoon, make the robot face towards the sunset." You start with this and get your large language model to output the weights of a reward function, and then you go and optimize that reward. In that work they're not doing RL — they're doing model predictive control with that reward, parallel model predictive control. It turns out that with some level of prompt tuning — which is important — you can actually get the robot to do these types of behaviors: you can get it to sit down like a dog, or lift its front paw. One of my favorite examples was doing a moonwalk: you can tell the robot "do a moonwalk" and it can actually generate the weights that correspond to doing a moonwalk, and I think that is actually pretty impressive.
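The split of labor just described — the LLM only picks the weights of a hand-defined reward, and a separate optimizer does the control — can be illustrated with a minimal sketch. Everything here is hypothetical: the JSON string stands in for an LLM response to an instruction like "make the robot face the sunset (due west)", the reward terms and weight names are made up, and the "planner" is a trivial grid search standing in for sampling-based MPC.

```python
import json
import math

# Hypothetical LLM output: weights for a hand-defined reward function.
llm_response = '{"target_heading_rad": 3.1416, "w_heading": 1.0, "w_effort": 0.1}'
weights = json.loads(llm_response)

def reward(heading: float, effort: float, w: dict) -> float:
    # Weighted sum of fixed reward terms; the LLM only chose the weights.
    # Wrap the heading error into [-pi, pi] before penalizing it.
    err = math.atan2(math.sin(heading - w["target_heading_rad"]),
                     math.cos(heading - w["target_heading_rad"]))
    return -w["w_heading"] * abs(err) - w["w_effort"] * effort

# Trivial "planner": evaluate a grid of candidate headings (a stand-in for
# sampling-based model predictive control) and keep the best one.
candidates = [i * 2 * math.pi / 64 for i in range(64)]
best = max(candidates, key=lambda h: reward(h, effort=0.0, w=weights))
```

The point of the sketch is the interface: natural language goes into the LLM, structured reward parameters come out, and a conventional optimizer — not the LLM — produces the actual motion.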
You can also look at a bunch of other tasks in simulation. But one point I want to mention here is grounding, again: a lot of these are in simulation, where you have ground-truth access to the state information, or — like the quadruped example — the robot is not really interacting with the world; it's just moving its paw, so it's much simpler. When it comes to actually doing proper state estimation and interacting with the world, it becomes much more difficult to use these models to output correct reward functions, or to use vision-language models as reward functions. Even in the past couple of months there have been a bunch of works using VLMs as success detectors or reward functions, but in our experience it is generally hard to get reliable results with these models. They're getting better, which is exciting, but at the moment the results one can get from them are a little questionable. So, to summarize some of the key takeaways: what I've talked about so far is this idea of learning human preferences — reward functions. One way of doing that, maybe the more traditional way, is by actively asking questions of humans; that is a way of tapping into informative human feedback. In addition to that, you can tap into the knowledge of large language models and try to leverage that knowledge to get at human preferences, by really asking an LLM or a VLM to give you some feedback and information. Cool — I do have a short section that I'm debating whether to skip; honestly, I'm going to skip it. Let me skip this section real quick and come back up here. Hopefully that
didn't mess up Zoom or anything. [BRIEF SCREEN-SHARING LOGISTICS] Okay, all right. So we talked about this idea of learning human preferences, and that's great — it helps us tap into a source of data beyond demonstrations. I've been talking about preferences we can tap into beyond demonstrations, but another interesting source of data, which we started seeing a little bit in this last section, is the fact that we can tap into large pretrained models — we can tap into LLMs — and that tends to be a useful source of information about what it is that people want. In practice, if you look at the past couple of years, I think we're all realizing that large language models are a thing now, and a good question to ask is what that means for robotics — how are we going to move forward knowing that? I feel like in practice there are two takes on this, and I want to talk about both. The first take is this grand vision of building something that resembles a large language model for robotics. I remember when GPT-3 came out, we were all thinking about the right ways of using GPT-3, and there were some immediate ways of using it within robotics as-is, but that didn't seem that exciting — the exciting thing was: what would be the analog of that for robotics? It's a very grand vision, a wonderful thing to shoot for. Under this take, the idea is that instead of tapping into preference queries or demonstrations, the question is: can we tap into large offline data
sets? The promise is that there are large offline datasets out there, and from these datasets we could pretrain a model, tap into that information, and use it in downstream settings — and that would be really wonderful. If you think about a robotics foundation model, there's a question of the right way to look at it. If you look at a foundation-model paper, the idea is that you have many different data sources: maybe some amount of robot interaction data, human videos, natural language, simulation. An interesting question is, if we have that data — and that's a big if — what is it that we are pretraining? What is the right representation to get from that data? And what does fine-tuning look like — what does adaptation look like? How could I use that model for downstream tasks? If you look at an LLM, you can use it for many different downstream tasks in language; do we have many different downstream tasks for robotics, or is it just imitation, just control? I do think there are a number of interesting downstream tasks in robotics, and we should think about how we use these models in downstream settings and how we think about fine-tuning. So we started thinking about this, and we started looking at it from the perspective of learning visual representations — partly because, initially, we don't have anything else: we only have human videos to tap into. There are many efforts trying to collect large robot datasets and train models on robot data, but for the moment let's say we only have human videos — say, YouTube videos — and there's a
question of whether we can learn visual representations that are useful. If you think about visual representation learning, there are two extremes at the moment in the field of vision. One end of the spectrum is things like masked autoencoding, where you take an image, mask it out, and try to reproduce the masked-out image. That's really great because it gives you local spatial features — if you want to grasp an object, this is really good, because it gives you all the details of the object; you can actually hope to grasp with that model, and it gives you the "syntax" of what you're after. But the problem with masked autoencoding is that it destroys the semantics. Say you want to pick up a jar of orange juice versus a jar of milk: both are about pouring from a jar of liquid, so you should perform the same task, but you wouldn't really get that similarity, because the pixels look different. Then we have the other end of the spectrum, with models like CLIP or R3M, where we're really trying to capture semantics: we use contrastive objectives to match language with images. These are great because they give us generalizable concepts, but the contrastive objective turns out to destroy the local spatial features, so it's really hard to expect CLIP representations to go and do fine-grained grasping — they wouldn't really capture any of that. What we were after was the best of both worlds: could we learn a visual representation that connects syntax and semantics? And the idea we had was that maybe we could use language as a bridge
between syntax and semantics. Instead of just doing reconstruction without any language, or just doing captioning that only generates language, we could do grounded reconstruction: start with a masked-autoencoding backbone, but condition on language, so we don't lose the semantics. This was the key idea behind this model, which we trained on human videos. In addition to syntax and semantics, we also need to capture things like context and pragmatics: if you look at robotics tasks, there's quite a bit of dynamic interaction going on, and we need to capture those dynamic interactions too. So there's a question of how to capture dynamics and pragmatics in addition to syntax and semantics, and these were the three key factors for building this model, which is called Voltron. It's a language-driven representation learning model, a collaboration with a number of folks; Sidd has been the main person leading this effort. The idea of the Voltron model is to start with masked autoencoding — that gives us all the details we're after — and then, on top of that, do language captioning. Say you have an image of peeling a carrot with a peeler: do language captioning on top of that, so you have "peeling the carrot" as a caption, and that gets at both syntax and semantics. To get the pragmatics — that extra bit — the idea is that you can also generate language: language generation tries to get at an understanding of what the task really is, peeling a carrot with a peeler. And if you do multi-frame conditioning — two-frame conditioning — that also gets you a little bit of dynamics information.
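To make the masked-autoencoding backbone concrete, here is a minimal, library-free sketch of just the masking step — how an image is split into patches and which patches the reconstruction loss would apply to. This is only an illustration of the MAE-style setup described above, not Voltron's actual code; the 16×16 "image", 4×4 patches, and 75% mask ratio are illustrative choices.

```python
import random

def split_into_patches(image, patch):
    """image: 2D list (H x W); returns the (row, col) origin of each patch."""
    h, w = len(image), len(image[0])
    return [(r, c) for r in range(0, h, patch) for c in range(0, w, patch)]

def sample_mask(patches, mask_ratio, rng):
    """Choose which patches to hide from the encoder (MAE-style)."""
    n_masked = int(len(patches) * mask_ratio)
    masked = set(rng.sample(patches, n_masked))
    visible = [p for p in patches if p not in masked]
    return visible, masked

rng = random.Random(0)
image = [[0] * 16 for _ in range(16)]           # toy 16x16 "image"
patches = split_into_patches(image, patch=4)     # 4x4 grid -> 16 patches
visible, masked = sample_mask(patches, mask_ratio=0.75, rng=rng)

# Conceptually, the grounded-reconstruction model encodes the *visible*
# patches together with the caption tokens and is trained to reconstruct
# the pixels of `masked` — so the caption can inform what the hidden
# patches should contain.
caption = "peeling the carrot with a peeler"
```

The language conditioning is what distinguishes this from plain masked autoencoding: the decoder sees the caption as well as the visible patches, so semantics survive the reconstruction objective.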
With all of these pieces together, you can train a model that is pretrained on large-scale human videos, and that model is going to be more grounded than the usual suspects in this domain. Then you can fine-tune that model on a number of downstream tasks — we had an evaluation suite with five different tasks, and I'm just showing a couple of them here. The thing people care about at the end of the day is usually control and imitation learning; in this case we're looking at language-conditioned imitation learning. These are the tasks, and there's a robot that does them. The performance is low — the reason you don't see a video is that the performance is low across all of these models — but taking this representation (again, it's a visual representation) and fine-tuning it on 20 demonstrations, you end up with this Voltron model — different versions of it shown in orange — that tends to outperform things like R3M or masked visual pretraining. I would argue, again, that the reason is that it's more grounded: it captures semantics and pragmatics a little more than the existing models. One other interesting result with the Voltron model that I found pretty exciting: if you give it a video — say, of a person opening a faucet — and you just look at what the representation outputs, you can do zero-shot intent inference. If you look at the representations, the model is pretty well aligned in terms of figuring out when the faucet is being opened. It actually figures out the intent of the person from this video, zero-shot, without any fine-tuning, and I thought that was very interesting. And the
more interesting thing is, if you give it a robot video — even though it hasn't seen any robots; it's trained only on human videos — it is able to do something similar: zero-shot intent inference on robot videos, without any fine-tuning or anything of that form. I think that's pretty exciting, because again it shows the model is a lot more grounded in what we're after. This is open source — you can pip install voltron-robotics; anywhere you'd call a ResNet you can call Voltron — please use it and let us know how it goes. But I think the main point of this section was that as we train these large models — and the thing we were training here was a visual representation; we were trying to tap into large offline datasets by training a visual representation — we can be careful about the pretraining objective, and we can shape that pretraining objective so it is actually useful for the downstream robotics tasks we're interested in. In this case, language and multi-frame conditioning were really the key differences for bringing the syntax, the semantics, and the dynamics of the task together, so that these representations are useful for robotics. Building on that idea — I mentioned we don't have robot data; the reason we ran Voltron on human videos was that we don't have robot data — I'm pretty excited about all the different efforts going on across multiple labs to collect large offline robot datasets that you could actually train these pretrained models on. This is the RT-X effort, which Sergey and folks at Berkeley, folks at Google, and a number of labs have been contributing to, and the idea is really to train a cross-embodiment model, trained on robots of many different embodiments, many different
skills, many different datasets, and have a single model that can actually absorb all of this diversity of robot data and act as that foundation model. Then there's a question of what it should output. What I showed in Voltron was a visual representation — it wasn't an action — but this RT-X model could output an action; it could be a vision-language-action model. I think it's an interesting question what level of abstraction we want as we pretrain these models: is action the right representation? Is a visual representation the right representation? How should we go about it? Building on top of that, there are also a number of efforts — this is the R2D2 effort, again a collaboration across multiple labs: Sergey's lab, Chelsea's lab, a number of labs outside of Stanford — where we're trying to collect really diverse data on the same platform, in the wild. "In the wild" meaning the dorms of students — this is one of the Stanford dorms; it's very common in the dataset — but you can actually take the robot and teleoperate it out in the wild, and then try to train a model on this type of data. I think that's also really exciting when we think about training these models. All right, so that was my first take — and again, it's a very active area of research; everyone is interested in these foundation models. I know I have five minutes, but in the last five minutes I do want to briefly talk about the second take. The second take was something I was very skeptical of initially, so let me tell you what it is. The second take says: okay, foundation models exist; I don't want to go and build a robotics foundation model — that sounds really difficult. LLMs and VLMs exist, and the question is: can I use them in creative
ways for robotics? So the idea is: instead of tapping into preference queries and demonstrations, or instead of tapping into large offline datasets to build robotics foundation models, can I tap into the existing knowledge of large language models and vision-language models? The reason I wasn't that excited about this initially was that I thought, okay, what can I do with an LLM? And the initial efforts were things like using a large language model as a task planner. If you look at works like SayCan — SayCan is this work by folks at Google — what it tries to do is use a large language model to come up with task plans, among other things, but the key part of SayCan is that it's coming up with task plans. The initial reaction of myself and a number of people was: was that the problem in robotics? Did we really suffer from task planning? I don't think that's the case — but I don't think that was the point of SayCan either. Over time I'm becoming less and less skeptical of this idea of using LLMs and VLMs. I think thinking about large language models and vision-language models opens up a number of other ways to think about robotics that we wouldn't have thought about before. If you look at works like Code as Policies, where you use a large language model to generate robot code — I wasn't thinking of that as the approach for scaling up robot learning two years ago, but now I wonder if that is the right way to go. I do think this take — using large models for various downstream tasks — opens up a number of interesting downstream things to look into, and that is something we've been looking at over the past year. I'm not going to go into too much detail about any of them — maybe just briefly about
one of them — but just to pinpoint a bunch: we've already talked about reward design, using LLMs and VLMs as reward designers — I do think that's a very interesting use of existing models. You can also fine-tune these large language models and vision-language models to be more aligned with what you're after: some of the work we're looking at is fine-tuning VLMs to be more physically grounded or more spatially grounded, and that is a way of getting them more aligned with the reward functions you're actually after. You can use them for things like common-sense reasoning. For example, you can ask a VLM: should I clean up the Legos on the right, or the Legos on the left? If you have a scene like this, a human immediately knows you shouldn't clean this table — I spent a lot of time building these Legos — but a human would also know it's okay to clean that other table. For the longest time this was one of the problems value-alignment people were interested in: this common-sense reasoning — how do you get a robot or AI agent to have the same knowledge? That is kind of solved now with vision-language models and LLMs. If you take a picture of these two tables and describe them to a large language model, it knows the answer; it understands that you should not throw away these Legos. And it's interesting that we can do a lot of common-sense and social reasoning now just by tapping into the knowledge of large language models and vision-language models. We can look at other things like semantic manipulation — referring to object parts like laces or heels — and again I can do semantic manipulation using LLMs along with training a very simple keypoint-based model that can respond to that. You can look at
teaching humans — this is something we were discussing earlier, this idea of using LLMs to actually teach humans, to give corrective feedback when looking at exercises. And maybe the last thing I want to spend a couple of slides on goes beyond some of these applications: using LLMs as pattern-recognition machines. Every application I've described so far really leverages large language models and vision-language models because they have rich context — they have access to internet-scale data. That's great: we can tap into internet-scale data, into social reasoning, semantic reasoning, common-sense reasoning. But one interesting observation we recently had is that you can go even beyond that: using LLMs and VLMs actually lets us go beyond semantics and context. Specifically, these models can simply act as really good pattern machines — they can find very abstract patterns, not even semantically meaningful ones. This is a collaboration with folks at Google; Suvir has been leading this work. I'm just going to show three examples and end the talk there. One example is sequence transformation: you take an image, and you have an input-output pair — the red cup goes on the green plate — and then you have a test example, and you know what the output should be: the red cup should go on the green plate. That's what we're asking. Say I have access to a large language model — not a vision-language model — for a second. What I can do is discretize the image; once I discretize it, I can put it in numbers, and put those numbers in the context of the large language model: input,
output, input — and then the LLM will output a set of numbers which, if I project back to high resolution, end up being the thing I'm actually after. I'm not proposing to use an LLM for this task, or to use an LLM to solve vision, but it's pretty interesting that I can get these types of patterns — and it's absolutely token-invariant: the tokens have no meaning, but because there are patterns, it's able to capture those patterns and predict what comes next. It can continue patterns: you can simply give it the x-y locations of a sine wave and it can continue those x-y locations. You can try this out tonight — give ChatGPT the x-y locations of any sine wave you want, and it's actually able to continue that behavior. That could be useful for things like robotics and data collection. Maybe that's a stretch, but I think it's interesting that you can do this. You can give it the end-effector location of a robot — x, y, z, and the gripper open/close command of that end-effector — put that motion in the context of the LLM (x, y, z, open/close; x, y, z, open/close; and so on), and the robot is able to continue that: it actually puts out control that continues the behavior. I think that's kind of cool. And I think the most interesting one is that it can do some level of optimization. For example, take the inverted-pendulum problem: if you give it the control for the inverted pendulum along with the reward, it's able to stabilize it. You might say, okay, the inverted pendulum exists on the internet, so it probably has quite a bit of knowledge — that's true — but again, the thing we're inputting to the LLM is just the coordinates of the end-effector and the reward.
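The serialization is the whole trick in these pattern-machine examples, so here is a small sketch of it for the sine-wave case: discretize a trajectory into small integers and lay it out as a text prompt an LLM could continue. No LLM is called; the bin count and prompt layout are illustrative choices, not the paper's exact format.

```python
import math

def discretize(x: float, lo: float = -1.0, hi: float = 1.0, bins: int = 100) -> int:
    """Map a float in [lo, hi] to an integer token in [0, bins-1]."""
    x = min(max(x, lo), hi)
    return int((x - lo) / (hi - lo) * (bins - 1))

# 20 (t, sin t) samples -> "t,token" lines; the LLM would be asked to
# continue the pattern by filling in the value after "20,".
samples = [(t, math.sin(t * 0.5)) for t in range(20)]
lines = [f"{t},{discretize(y)}" for t, y in samples]
prompt = "\n".join(lines) + "\n20,"
```

The same serialization works for robot trajectories: replace the (t, sin t) pairs with (x, y, z, gripper open/close) tuples per time step. Since the effect is token-invariant, the integers carry no semantics — it is purely the pattern the model continues.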
In this case I'm looking at a robot trying to reach a cup, and I'm sorting what goes in the context by the reward: I put in reward and trajectory, reward and trajectory, sorted, and the robot is able to continue the pattern and output high-reward trajectories — which is also kind of cool. That actually resembles things like clicker training: you can do clicker training with the robot, where you provide a click as it gets closer and closer to the object — that's the thing that gives it high reward, and that's what you put in the context of the LLM — and eventually it's able to reach the object. All right, let me end here. The key takeaway of this last part is that LLMs and VLMs are wonderful, and there are two takes on how we should think about them for robotics. There's the grand vision of trying to build something similar for robotics — that's wonderful; data is missing, and what to pretrain on is still a question we should be thinking about. But there's also the second view: maybe you can tap into existing LLMs and VLMs, because they have a lot of context — they can do social reasoning, semantic manipulation, they can teach humans, they can tap into internet-scale data — and that's really wonderful. Even beyond that, they can act as pattern machines: they can find patterns, and that is kind of surprising. Again, I'm not proposing to use that for vision or control, but it is surprising, and maybe it tells us how we should continue — fine-tuning these models on patterns, or what some future applications might be for these large pretrained models in robotics. I didn't talk about feeding — that is something we're looking at — so I'm just going to end with some videos.
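As an aside, the reward-sorted prompting just described can be sketched in a few lines. This is only an illustration of the context layout under my reading of the idea — trajectories here are arbitrary token strings, and the prompt format is a made-up stand-in, not the paper's exact one.

```python
# Past (reward, trajectory) pairs from interaction with the environment.
episodes = [
    (3, "0,0 1,0 2,0"),
    (9, "0,0 1,1 2,2"),
    (6, "0,0 1,1 2,1"),
]

# Ascending sort: the best trajectory appears last, right before the slot
# the LLM is asked to fill — so "continuing the pattern" means emitting a
# trajectory whose reward exceeds everything in context.
ordered = sorted(episodes, key=lambda e: e[0])
context = "\n".join(f"reward: {r} trajectory: {traj}" for r, traj in ordered)
prompt = context + "\nreward: 10 trajectory:"
```

Each new rollout scored by the environment (or a clicker-style signal) gets appended to `episodes`, and the loop repeats — an in-context analogue of iterative policy improvement.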
We're actively looking at this; we're using some learning-based models that are trying to pick up various types of food items — like twirling spaghetti. They're actually using an LLM to decide what type of food to pick up — you have meatballs and spaghetti — and the goal is to be able to take any DoorDash noodles and feed people. Here we're feeding people. This requires a lot of good engineering too — it's not just a learned policy; there's a reactive controller with different levels of reactivity, but it enters the mouth and exits. I think it's better than the first video I showed; some argue the fork goes in a little too far. So with that, I'm going to end here and take any questions. All right, I think we have a few minutes for questions. [Question about the reward-design examples] That one is few-shot: you provide a few examples. For instance, the initial example is that there's a robot standing still on the ground, and here is the reward function for standing straight — that reward function has a number of parameters, a number of weights. At that point you can say "lift your paw," and it generates the behavior: it figures out what weights correspond to lifting the paw. You can run this with parallel MPC, so you can see what behavior is actually working, and if it doesn't get it zero-shot, it can interactively make it better and better. In that work they're also looking at corrective language and improving the behavior over a number of interactions — actually, I think the moonwalk example I gave was over a number of interactions; I don't think that was zero-shot. Any other questions? [Comment: very nice — this perspective sounds a lot like RLHF] Yeah, that's
very true — it's very similar to the RLHF work. We are looking at a bunch of works around using RLHF for robotics specifically. I think one difference is that a lot of the preference-based learning literature did not care as much about the reward model in the way RLHF does — the work people usually cite for RLHF is Christiano et al., the NeurIPS 2017 work, and that work was trying to use a neural reward and didn't really care about a mathematical model of uncertainty or anything like that; it just used an ensemble model to capture uncertainty, and that is really where things are right now. There are a lot of parallels. There's a question of how much active learning matters in these settings — I'd argue that in robotics and sequential decision-making settings it matters more. We actually have some work that tries to translate the RLHF problem into contrastive learning and simplify it. So yes, we are looking at that. [Question] I think you mentioned in your earlier work you assumed a noisily rational model of the human — is that included at all with the LLMs? [Speaker] With the LLMs, the input we get from the human could be noisy — humans are noisy — so how much should we trust the thing they're telling us? [Follow-up] So for the reward shaping, does the model account for that possibility? [Speaker] No, it's not, and you could do that — we're just not doing it right now. Since the model continues training and the LLM reward is not the only reward it uses — it's just a regularizer — it doesn't hurt it that much, but you could potentially account for that. [Question] Could you explain your examples in relation to in-context learning — giving it some behavior, that's in-context learning?
They're exactly in-context learning. And I guess the point I was trying to make is that it doesn't need to be semantically meaningful tokens, which is kind of surprising, because a lot of in-context learning work really attempts to give you tokens or actions that are semantically meaningful. It is true that if you have semantically meaningful actions — for example, for the inverted pendulum, if you say "left," "left twice," "left a third time," if you actually attach language to the actions you're taking — it converges faster. But if you give it arbitrary tokens, it can identify the pattern too. So I think the interesting and surprising point there is that it's token-invariant: it's the patterns it's picking up, rather than some semantics from the internet that it understands. All right, let's give our speaker another round of applause.
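The "noisily rational" human model raised in the Q&A above has a standard form worth writing down: under a Boltzmann (Bradley-Terry-style) model, the probability a human prefers trajectory A over B grows with the reward gap, governed by a rationality coefficient beta (beta near 0: random choices; large beta: nearly perfectly rational). This is the same likelihood used in preference-based reward learning such as Christiano et al. 2017; the sketch below is a minimal instance, not any specific system's code.

```python
import math

def pref_prob(r_a: float, r_b: float, beta: float = 1.0) -> float:
    """P(human prefers A over B | rewards r_a, r_b), Boltzmann model.

    Equivalent to a logistic function of the scaled reward gap:
    exp(beta*r_a) / (exp(beta*r_a) + exp(beta*r_b)).
    """
    return 1.0 / (1.0 + math.exp(-beta * (r_a - r_b)))
```

In preference-based reward learning, a reward model is fit by maximizing this likelihood over a dataset of human comparisons — which is exactly how noisy human answers get weighted by how confident the model should be in them.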
CS_285_Deep_RL_2023 | CS_285_Lecture_5_Part_5.txt | in the next portion of today's lecture we're going to talk about implementing policy gradients in practice in deep rl algorithms one of the main challenges with implementing policy gradients is that we would like to implement them in such a way that automatic differentiation tools like tensorflow or pytorch can calculate the policy gradient for us with reasonable computational and memory requirements if we want to implement policy gradients naively we could simply calculate grad log pi a i t given s i t for every single state action tuple that we sampled however typically this is very inefficient because neural networks can have a very large number of parameters in fact the number of parameters is usually much larger than the number of samples that we've produced so let's say that we have n parameters where n might be on the order of a million and we have 100 trajectories each with 100 time steps so we have 10 000 total state action pairs which means that we're going to need to calculate 10 000 of these 1 million length vectors that's going to be very expensive in terms of memory storage and also computationally typically when we want to calculate derivatives of neural networks efficiently we want to utilize the back propagation algorithm so instead of calculating the derivative of the neural net's output with respect to its input and then multiplying that by the derivative of the loss we do the opposite we first calculate the derivative of the loss and then back propagate it through the neural network using the back propagation algorithm which is what our automatic differentiation tools will do for us in order to do that we need to set up a graph such that the derivative of that graph gives us the policy gradient all right so how do we compute policy gradients with automatic differentiation well we need a graph such that its gradient is the policy gradient the way that we're going to figure this out is by
starting with the gradients that we already know how to compute which are maximum likelihood gradients so if we want to compute maximum likelihood gradients then what we would do is we would implement the maximum likelihood objective using something like a cross entropy loss and then call dot backward or dot gradients on it depending on your automatic differentiation package and obtain your gradients so the way that we're going to implement policy gradients to get our autodiff package to calculate them efficiently is by implementing a kind of pseudo loss as a weighted maximum likelihood so instead of implementing j maximum likelihood we'll implement this thing called j tilde which will just be the sum of the log probabilities of all of our sampled actions multiplied by the rewards to go q hat now critically this equation is not the reinforcement learning objective in fact this equation is not anything it's just a quantity chosen such that its derivatives come out to be the policy gradient of course a critical portion of this is that our automatic differentiation package doesn't realize that those q hat numbers are themselves affected by our policy so it's just dealing with the graph that we provided it so in a sense we're almost trying to trick our autodiff package into giving us the gradient that we want okay so here log pi would be for example our cross entropy loss if we have discrete actions or squared error if we have normally distributed continuous actions all right so i have some pseudo code here this pseudocode is actually in tensorflow because i taught the class in tensorflow in past years you're going to be doing the policy gradients assignment in pytorch the basic idea is very much the same it's just the particular terminology is going to be a little different but hopefully the pseudo code is still straightforward for everyone to parse so the pseudo code that i have here is the pseudo code for maximum likelihood learning this is
supervised learning here actions is a tensor with dimensionality n times t along the first dimension so number of samples times the number of time steps and the dimensionality of the action along the second dimension and states is a tensor n times t times the number of state dimensions so the first line logits equals policy.predictions(states) simply asks the policy network to make predictions for those states basically output the logits over the actions in this discrete action example then the second line negative likelihoods basically uses the softmax cross entropy function to produce likelihoods for all the actions and then we do a mean reduce on those and calculate their gradients so this will give you the gradient of the likelihood this is what you do for supervised learning to implement policy gradients you just have to put in weights to get a weighted likelihood and those weights correspond to those reward to go values so i'm going to assume that the reward to go values are all packed into a tensor called q underscore values which is an n times t by 1 tensor and then after i calculate my likelihoods i'll turn them into weighted likelihoods by pointwise multiplying them by the q values and that's the only change that i make then i mean reduce those and then i call their gradients so this will essentially trick your autodiff package into calculating a policy gradient so in math what we've implemented is this we've basically turned our maximum likelihood loss into this modified pseudo-loss j tilde where we weight our likelihoods by q hats and of course it's up to you to actually implement some code to compute those q values which you could do simply in numpy you don't really need to use your autodiff package to compute those all right a few general tips about using policy gradients in practice first remember that the policy gradient has high variance so even though the implementation looks a lot like supervised learning it's going to behave very
differently from supervised learning the high variance of the policy gradient will make things quite a bit harder it means your gradients will be very noisy which means that you probably need to use larger batches probably much larger than what you're used to for supervised learning so batch sizes in the thousands or tens of thousands are fairly typical tweaking the learning rate is going to be substantially harder adaptive step size rules like adam can be okay-ish but just regular sgd with momentum can be extremely hard to use we'll learn about policy gradient specific learning rate adjustment methods later when we talk about things like natural gradient but for now using adam is a good starting point and in general just expect to have to do more hyperparameter tuning than you've usually had to do for supervised learning so just to review we talked about how the policy gradient is on policy how we can derive an off policy variant using importance sampling which unfortunately has exponential scaling in the time horizon but we can ignore the state portion which gives us an approximation we talked about how we can implement policy gradients with automatic differentiation and the key to doing that is setting it up so that autodiff back propagates things for us properly by using the pseudo loss and we talked about some practical considerations batch size learning rates and optimizers |
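The reward-to-go weights and the weighted-maximum-likelihood pseudo-loss described in this lecture can be sketched as follows — a minimal numpy version that uses a linear softmax policy instead of a neural network so the gradient can be written out by hand; all names here are illustrative, not from the lecture's actual pseudocode:

```python
import numpy as np

def reward_to_go(rewards, gamma=1.0):
    """q_hat[t] = sum_{t' >= t} gamma^(t' - t) * r[t'] -- the reward-to-go weights."""
    q = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        q[t] = running
    return q

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def policy_gradient(theta, states, actions, q_values):
    """Gradient of the weighted log-likelihood pseudo-loss j tilde for a linear
    softmax policy with logits = states @ theta (the q weights are constants)."""
    probs = softmax(states @ theta)                      # (N, num_actions)
    one_hot = np.eye(theta.shape[1])[actions]            # d log pi / d logits = onehot - probs
    grad_logits = (one_hot - probs) * q_values[:, None]  # weight each sample by q_hat
    return states.T @ grad_logits / len(states)          # ascent direction on j tilde

rng = np.random.default_rng(0)
theta = np.zeros((4, 2))              # 4 state dims, 2 discrete actions
states = rng.standard_normal((3, 4))  # one 3-step trajectory
actions = np.array([0, 1, 0])
q = reward_to_go([1.0, 1.0, 1.0])     # -> [3., 2., 1.]
g = policy_gradient(theta, states, actions, q)
```

In an autodiff framework you would instead build the scalar pseudo-loss (mean of log-prob times `q_hat`, with the q values detached from the graph) and let backpropagation produce this same gradient, as the lecture's tensorflow pseudocode does.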
CS_285_Deep_RL_2023 | CS_285_Lecture_4_Part_6.txt | in the last portion of today's lecture i'm going to just briefly go through a few examples of actual deep rl algorithms just to kind of show you some of the things that they do this will be the least technical portion of the lecture and these algorithms will be covered in much more detail in the subsequent lectures whereas this part is mainly just to kind of round out today's lecture with some nice interesting examples and videos so some examples of specific algorithms and don't worry if you haven't heard these names we'll cover these more later value function fitting methods so these are things like q learning dqn temporal difference learning these are all value function methods fitted value iteration policy gradient methods these are methods like reinforce natural gradient trust region policy optimization or trpo ppo etc actor critic algorithms these are things like asynchronous advantage actor critic or a3c soft actor critic ddpg and so on model based rl algorithms these are things like dyna guided policy search mbpo svg etc and we'll learn about most of these in the next few weeks but first let's go through a few examples so here's a video of the q-learning result for playing atari games this is an algorithm that learns policies for playing video games directly from pixels this is from a paper by mnih et al from 2013 and this particular algorithm uses q learning with convolutional neural networks so q learning is a value based method it actually learns an estimate of q s a by using a neural network atari games are discrete action environments which means that you just have to produce a different q value for each of a small discrete set of actions and then you take the argmax over those q values to select the best action when playing the game here is a robotics example this is from the paper end-to-end training of deep visuomotor policies and this is a model-based rl algorithm called guided policy search
which uses a combination of dynamics models and image-based convolutional networks to perform a variety of robotic skills here's a policy gradients example this is from a paper high-dimensional continuous control with generalized advantage estimation it uses a variant of an algorithm called trust region policy optimization which is a policy gradient method that combines in this case a trust region with value function approximation so this is technically an actor critic algorithm derived from a policy gradient algorithm and here you can see it training this little humanoid robot how to walk and here is the video that i actually showed in the first lecture for the grasping robot and this particular result was actually also produced by a q-learning algorithm not that different from the atari example that i showed a few slides ago but in this case with a particular modification to handle continuous actions |
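As the Atari example describes, a discrete-action Q-network outputs one value per action and the greedy policy is just the argmax over those values. A minimal sketch of that selection rule, together with the standard Q-learning regression target it is trained against (the target formula is textbook Q-learning, not spelled out in this clip; the Q-values below are a made-up stand-in for a network's output):

```python
import numpy as np

def greedy_action(q_values):
    """Discrete-action selection: take the argmax over the per-action Q-values."""
    return int(np.argmax(q_values))

def q_learning_target(reward, next_q_values, gamma=0.99, done=False):
    """Standard Q-learning regression target: r + gamma * max_a' Q(s', a')."""
    return reward + (0.0 if done else gamma * float(np.max(next_q_values)))

q = np.array([0.1, 2.5, -0.3, 1.0])  # hypothetical Q-network output for one state
a = greedy_action(q)                  # action 1 has the highest Q-value
y = q_learning_target(1.0, q)         # 1.0 + 0.99 * 2.5
```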
CS_285_Deep_RL_2023 | CS_285_Lecture_4_Part_5.txt | so why do we have so many different rl algorithms why is it that we can't just teach you one rl algorithm in a couple lectures and be done with it why do we need an entire course well these algorithms have a number of trade-offs that will determine which one works best for you in your particular situation so one important trade-off between different algorithms and we'll touch on this as we go through the next few lectures is sample efficiency meaning when you execute the stuff in this orange box when you generate samples in the environment how many samples will you need before you can get a good policy another trade-off is stability and ease of use reinforcement learning algorithms can be quite complex they require trading off a number of different parameters how you collect samples how you explore how you fit your model how you fit your value function how you update your policy each of these trade-offs and each of these choices often introduces additional hyperparameters which can sometimes be difficult to select for your particular problem different methods will also have different assumptions for example do they handle stochastic environments or can they only handle deterministic environments do they handle continuous states and actions can they only handle discrete actions or can they only handle discrete states do they handle episodic problems meaning problems with a fixed capital t horizon or do they handle infinite horizon problems where t goes to infinity or both and different things are easy or hard in different settings for example in some settings it might be easier to represent a policy even if the physics of the environment are very complex while in other settings it might be easier to learn a model than it is to learn the policy directly each of these trade-offs will involve making some set of design choices for instance you might opt for an algorithm that is not very sample
efficient for the sake of having something that is easier to use or maybe for the sake of having something that can handle stochastic and partially observed problems or you might opt for a very efficient algorithm because your samples are very expensive but then be willing to accommodate some other limitations like for example only allowing for discrete actions so typically we have to make these trade-offs depending on the particular problem that we're facing let's talk about sample efficiency first because that's a pretty big one so sample efficiency refers to how many samples we need to obtain a good policy basically how many times do we have to sample from our policy until we can make it perform well that's the orange box one of the most important questions in determining the sample efficiency of an algorithm is whether the algorithm is what's called an off policy algorithm or not an off policy algorithm is an algorithm that can improve the policy by using previously collected samples an on policy algorithm has to throw out all of its samples each time the policy changes even a little bit and generate new samples for this reason on policy algorithms can be a lot less efficient for instance a policy gradient algorithm which is an on policy algorithm must collect new samples each time it takes a gradient step on the policy because each time the policy changes even a little bit new samples must be collected so in general if we want to look at a kind of spectrum with the more efficient algorithms on the left and less efficient algorithms on the right a major dividing line on the spectrum is whether it's an on policy or an off policy algorithm where on the extreme end of less efficient algorithms will be things like evolutionary or gradient free methods then on policy policy gradient algorithms then actor critic style methods which can be either on policy or off policy then purely off policy methods like q learning then maybe model based deep rl methods
model based shallow rl methods and so on but then we could say well why would we ever want to use a less efficient algorithm so it seems like we should just go with the stuff on the left end of the spectrum well it's because the other trade-offs might not be in our favor as we move to the left for example wall clock time the amount of computation the algorithm needs is not the same as sample efficiency so maybe generating samples for your application is actually very cheap maybe you're using a very fast simulator for example if you're learning how to play a game like chess simulating chess is very fast so most of your computation time will go into updating your value functions models and policies in that case you probably don't care nearly as much about sample efficiency and interestingly enough the wall clock time for these algorithms is often flipped so if your simulation is very cheap you might actually find the stuff on the right end of the spectrum to be computationally less expensive and the stuff on the left side of the spectrum to be computationally much more expensive stability and ease of use when it comes to stability and ease of use we might ask questions like does our algorithm converge meaning if we run it long enough is it guaranteed to eventually converge to a fixed solution or will it keep oscillating or diverging and if it does converge what does it converge to does it converge to a local optimum of the rl objective or a local optimum of any other well-defined objective and does it converge every time coming from an optimization or supervised learning background you might wonder at this point why is any of this even a question because typically when we deal with supervised learning or kind of well-defined especially convex optimization methods essentially we take it for granted that things converge in reinforcement learning convergent algorithms are actually a rare luxury and many methods that we use in practice are not guaranteed to converge in general so
the reason for this is that reinforcement learning often is not pure gradient descent or pure gradient ascent many reinforcement learning algorithms are actually fixed point algorithms that only carry convergence guarantees under very simplified tabular discrete state assumptions which often do not hold in practice and in theory the convergence of many of the most popular rl algorithms such as q learning algorithms is actually an open problem so q learning is a fixed point iteration model based reinforcement learning is kind of a peculiar case because the model is not actually optimized with respect to the rl objective the model is optimized to be an accurate model the model training itself is convergent but there's no guarantee that getting a better model will actually result in a better reward value policy gradient is gradient descent or technically gradient ascent but also the least efficient of the bunch value function fitting is a fixed point iteration and at best it minimizes error of fit it minimizes what's called bellman error meaning is your value function predicting values accurately but that's not the same as saying does your value function produce a policy with good rewards and at worst value function fitting doesn't even minimize the bellman error and might even diverge many popular deep rl value fitting algorithms are not guaranteed to converge to anything in the non-linear case in the case where you use neural networks model based rl the model minimizes error of fit which will definitely converge to a good model but there's no guarantee that a good model will lead to a better policy policy gradient is the only one that actually performs gradient ascent on the true objective but as i said it's the least efficient of the bunch assumptions one common assumption that many rl algorithms will make is full observability meaning that you have access to states rather than observations or put another way the thing that you're observing satisfies the markov
property so no cars driving in front of cheetahs this is generally assumed by most value function fitting methods it can be mitigated by adding things like recurrence and memory but in general can be a challenge another common assumption this one is common with policy gradient methods is episodic learning so here's a robot performing episodic learning you can see that it makes a trial then resets and then makes another trial so this ability to reset and try again repeatedly is often assumed by pure policy gradient methods and although it's not technically assumed by most value-based methods they tend to work best when this assumption is satisfied it's also assumed by some model-based rl algorithms another common assumption very common in model based methods especially is continuity or smoothness this is assumed by some continuous value function learning methods and it's often assumed by model based rl methods derived from optimal control which really require continuity or smoothness to work well so as we cover various rl algorithms over the next few weeks i'll point out some of these assumptions as we go but keep in mind that many of these methods will differ in the kinds of assumptions they make and also how rigidly these assumptions must be satisfied in order for those methods to work well in practice |
CS_285_Deep_RL_2023 | CS_285_Lecture_6_Part_2.txt | all right now that we've talked about policy evaluation and how value functions can be incorporated into the policy gradient let's put these pieces together and construct an actor critic reinforcement learning algorithm so a basic batch actor critic algorithm can look something like this this is kind of based on the reinforce procedure from before with some additional steps added so step one just like before is going to be to generate samples by running rollouts through our policy that's basically the orange box and that remains essentially unchanged step two is to fit our approximate value function to those sampled rewards and that's what's going to replace the green box so instead of just naively summing up all the rewards we're now going to fit a neural network as we discussed in the previous section step three for every state action tuple that we sampled evaluate the approximate advantage as the reward plus the approximate value of the next state minus the value of the current state step four use these advantage values to construct a policy gradient estimator by taking grad log pi at every time step and multiplying it by the approximate advantage and then step five like before is to take a gradient descent step so the part that we talked about when we discussed policy evaluation is mostly step two how do we actually fit the value function and we talked about how we can make a number of different choices we could fit it to single sample monte carlo estimates meaning that we actually sum up the rewards that we got along that trajectory and that gives us our target values we also talked about how we could use bootstrap estimates where we use the actual observed reward plus the estimated value at the next state by using our previous value function estimator and this gives us a few different options for how to fit the critic now at this point i want to make a little aside to discuss what happens when we fit
value functions with this bootstrap rule in infinite horizon settings so the trouble that we might get into is if the episode length is infinite then each time we apply this bootstrap rule our value function will increase so if we have for example an episodic task that ends at a fixed time maybe this is not such a big issue perhaps we could have a different value function for every time step and everything is finite horizon episodic tasks are fairly common in some settings like this robotics task here but we could have an infinite horizon a continuous or cyclic task like this running task or we might simply want to use the same value function for all time steps in these cases using a bootstrap rule the way that i discussed is liable to lead to some problems for example if the rewards are always positive each time you bootstrap in this way your value function increases and eventually your value function might become infinite so how can we modify this rule to ensure that we can always have finite values and that we can handle infinite horizon settings well one very simple trick is to assume that we prefer a reward sooner rather than later this is very natural if you imagine that i were to tell you that i'll give you 100 you might be quite pleased about that if i tell you that i'll give you 100 next year you'll probably still be somewhat pleased but less so if i tell you that i'll give you 100 in a million years you probably won't take me very seriously why well because it matters a lot less to you what will happen in one year and significantly less what will happen in a million years simply because there's so much uncertainty about what will happen to everybody including you in that amount of time that those delayed rewards just don't have as much value another way of thinking about it is that you'd prefer rewards sooner rather than later for a very basic biological reason which is that someday you're going to die and you'd like to receive the reward before you
die and if i tell you you'll get the reward in a million years then it's very unlikely that you'll get the reward before you die that might sound kind of grim but we can use this cute metaphor to actually construct a solution to this infinite reward problem we can favor rewards sooner rather than later by actually modeling the fact that the agent might quote-unquote die so the way that we're going to do this is we will introduce a little multiplier in front of the value so instead of setting the target value to be r plus the next v we'll set it to be r plus the next v times gamma where gamma is what we call a discount factor it's a number between 0 and 1. 0.99 works really well if you want an example of a discount factor generally you would choose them to be somewhere between like 0.9 and 0.999 and one way that you can interpret the role of gamma is that gamma kind of changes your mdp so let's say that you have this mdp where you have four states and you can transition between those four states and those transitions are governed by some probability distribution p of s prime given s a when we add gamma one way we can think about this is that we're adding a fifth state a death state and we have a probability of one minus gamma of transitioning to that death state at every time step once we enter the death state we never leave so there's no resurrection in this mdp and our reward is always zero so that means that the expected value for the next time step will always be expressed as gamma times its expected value in the original mdp plus one minus gamma times zero and that's where we get this gamma factor so the probability of entering the death state is one minus gamma which means that the modified dynamics now are just gamma times the original probabilities and that one minus gamma remaining slice accounts for entering the death state so mechanically the modification that we have with the discount factor is to just multiply our values by gamma for every time step that we
back them up what this does is it makes us prefer rewards that happen sooner rather than later and mathematically one way that we can interpret this is that we're actually modifying our mdp to introduce the possibility of death with probability one minus gamma all right let's dig into discount factors a little bit more first could we introduce discount factors into regular monte carlo policy gradients well the answer is we most definitely can so for example one option for how to do this is we can just take that single sample reward to go calculation and we can put gamma into it so the equation i have here is exactly the reward to go calculation i had before except that now i've added this gamma raised to the power t prime minus t in front of my reward so that means that the first reward the one that happens at time step t has a multiplier of one the next one at t plus one gets a multiplier of gamma the next one at t plus 2 gets a multiplier of gamma squared and so on so we're much more affected by rewards that happen closer to us in time this type of estimator is essentially the single sample version of what you would get if you were to use the value function with a discounted bootstrap there is another way that we can introduce a discount into the monte carlo policy gradient which seems like it's very similar but has a subtle and important difference what if we take the original monte carlo policy gradient that we had before we did that causality trick where we just sum together the grad log pis and then multiply them together with the sum of the rewards and then we're going to put a discount into that we'll put a gamma to the t minus one multiplier in front of the reward so that the reward of the first time step is multiplied by one the reward of the second time step is multiplied by gamma the reward of the third time step is multiplied by gamma squared and so on take a moment to think about how these two options compare consider if these
two options are actually identical mathematically or not so if we were to apply the causality trick to option 2 meaning that we remove all of the rewards from the past will we end up with option one or not so we'll come back to this question shortly but to help us think about how these options compare let's write out just for completeness what we would get if we had a critic so with a critic this is the gradient that we would get we have the current reward plus gamma times the next value minus the current value and that's our approximate advantage so option one and option two are not the same in fact option one matches the critic version with the exception that we have a single sample estimator option two does not in fact if we were to rewrite option two by using the causality trick where we distribute the rewards inside the sum over grad log pis and then eliminate all the rewards that happen before the current time step we'll end up with this expression we'll end up with grad log pi at time t times the sum from t prime equals t to capital t of gamma to the t prime minus one whereas before we had gamma to the t prime minus t so what's going on here why do we have this difference well one way that we can understand this difference is if we take the gamma to the t minus 1 factor and distribute it out of the sum so the last line i have here is exactly equal to the preceding line i just distributed out a gamma to the t minus 1 factor so now the reward to go calculation is exactly the same as option 1 but i have this additional multiplier of gamma to the t minus 1 in front of grad log pi so what is that doing well what that's doing is actually quite natural it's saying that because you have this discount not only do you care less about rewards further in the future you also care less about decisions further in the future so if you're starting at time step one rewards in the future matter less but also your decisions matter less because your decisions further in
the future will only influence future rewards so as a result you actually discount your gradient at every time step by gamma to the t minus one essentially it means that making the right decision at the first time step is more important than making the right decision at the second time step because the second time step will not influence the first time step's reward and that's what that gamma to the t minus 1 factor out front represents this is in fact the right thing to do if you truly want to solve a discounted problem if you are really in a setting where you have a discount factor and that discount factor represents your preference for near-term rewards or equivalently the probability of entering the death state then in fact your policy gradient should discount future gradients because in a truly discounted setting making the right decision now is more important than making the right decision later coming back to my analogy about the 100 if i tell you that i will give you 100 if you pass my math exam and i tell you the same thing but that i can give you the exam today or i can give you the exam next year or i can give you the exam in a million years well chances are if you know that i'm going to give you the exam in a million years you're probably not going to study for it so your policy gradient for that math exam will have a very small multiplier because you'd rather deal with things that will give you rewards much nearer to the present so it makes sense to have this gamma to the t minus 1 term out front if we're really solving a discounted problem but in reality this is often not quite what we want so saying that later time steps matter less might not actually give us the solution that we're after so this is the death version later time steps don't matter if you're dead it's all mathematically consistent the version that we actually usually use is option one why is that well take a moment to think about that why would we prefer to use option one instead of option
two so if we think about this cyclic continuous rl task that i presented before where the goal is to make this character run as far as possible while we can model this as a task with discounted reward in reality we really do want this guy to run as far as possible ideally infinitely far so we don't really want the discounted problem what we want to do is we want to use the discount to help get us finite values so that we can actually do rl but then what we'd really like to do is get a solution that works for running for arbitrarily long periods of time so option one in some ways is closer to what we want maybe what we really want is more like average reward so we want to put a one over capital t in front and remove the discount altogether average reward is computationally and algorithmically very difficult to use so we use the discount in practice because it's so mathematically convenient but omit the gamma to the t minus one multiplier that shows up in option two because we really do want a policy that does the right thing at every time step not just in the early time steps another way to think about the role that the discount factor plays that provides an alternative perspective to this death state and you can read about this in this paper by philip thomas called bias in natural actor-critic algorithms is that the discount factor serves to reduce the variance of your policy gradient so if you have infinitely large rewards you also have infinitely large variances right because infinitely large values have infinite variances by ensuring that your reward sums are finite by putting a discount on them you're also reducing variance at the cost of introducing bias by not accounting for all those rewards in the future all right so what happens when we introduce the discount into our actor critic algorithm well the only thing that changes is step three so in step three you can see that we've added a gamma in front of v pi phi s prime
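The discounted step-three computation described here (the approximate advantage r + gamma * V(s') - V(s)) can be sketched in a few lines; the `done` flag for terminal steps is a common practical detail that is an addition of this sketch, not part of the lecture's derivation, and the critic values below are made up:

```python
def advantage(reward, v_s, v_s_next, gamma=0.99, done=False):
    """One-step advantage estimate from step three: A(s,a) ~ r + gamma*V(s') - V(s).
    On terminal steps the bootstrap term is zeroed out (an assumed convention)."""
    bootstrap = 0.0 if done else gamma * v_s_next
    return reward + bootstrap - v_s

# Hypothetical critic outputs for one sampled transition:
a_hat = advantage(reward=1.0, v_s=5.0, v_s_next=5.5)  # 1.0 + 0.99*5.5 - 5.0
```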
everything else stays exactly the same one of the things we can do with actor-critic algorithms once we take them into the infinite horizon setting is we can actually derive a fully online actor-critic method so far when we talked about policy gradients we always used policy gradients in a kind of episodic batch mode setting where we collect a batch of trajectories each trajectory runs all the way to the end and then we use that batch to evaluate our gradient and update our policy but we could also have an online version when we use actor-critic where every single time step every time we step the simulator or we step in the real world we also update our policy and here's what an online actor-critic algorithm would look like we would take an action a sampled from pi theta a given s and get a transition s comma a comma s prime comma r so we take one time step and at this point i'm not putting t subscripts on anything because this can go on in a single infinitely long non-episodic process step two we update our value function by using the reward plus the value of the next state as our target because we're using a bootstrapped update we don't actually need to know what state we'll get at the following time steps or the one after that or the one after that we just need the one next time step s prime so we don't need s double prime s triple prime etc because we're using the bootstrap so that's enough for us to update our value function step three we evaluate the advantage as the reward plus the value function of the next state minus the value function of the current state again this only uses things that we already know it uses s a s prime r and our learned value function and then using this we can construct an estimate for the policy gradient by simply taking the grad log pi for this action that we just took multiplied by the advantage that we just calculated and then we can update the policy parameters with the policy gradient and then repeat this process and we do
this every single time step now there are a few problems with this recipe when we try to do deep rl and maybe each of you could take a moment to think about what might go wrong with this algorithm if we implemented it in practice this is kind of the textbook online actor-critic algorithm but for deep rl it's a bit problematic all right let's continue this in the next section |
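as a concrete illustration, the five-step online actor-critic recipe described above can be sketched in tabular form, with a softmax policy table and a value table standing in for the neural networks; the environment interface and all names here are illustrative assumptions, not code from the course:

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def online_actor_critic(step_env, n_states, n_actions, gamma=0.99,
                        alpha_pi=0.1, alpha_v=0.1, n_steps=5000, seed=0):
    """fully online actor-critic, a tabular sketch of the lecture's steps 1-5.

    step_env(s, a, rng) -> (next_state, reward); tables stand in for nets."""
    rng = random.Random(seed)
    logits = [[0.0] * n_actions for _ in range(n_states)]
    V = [0.0] * n_states
    s = 0
    for _ in range(n_steps):
        probs = softmax(logits[s])
        a = rng.choices(range(n_actions), weights=probs)[0]  # step 1: a ~ pi(a|s)
        s2, r = step_env(s, a, rng)                          # observe (s, a, s', r)
        target = r + gamma * V[s2]                           # bootstrapped target r + gamma V(s')
        V[s] += alpha_v * (target - V[s])                    # step 2: update V toward target
        adv = target - V[s]                                  # step 3: advantage estimate
        for i in range(n_actions):                           # step 4: grad log pi * advantage
            grad_log = (1.0 if i == a else 0.0) - probs[i]
            logits[s][i] += alpha_pi * adv * grad_log        # step 5: ascend the policy gradient
        s = s2
    return logits, V
```

note that because the value table is updated before the advantage is computed, the advantage here uses the freshly updated V(s); this ordering detail doesn't change the character of the algorithm.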
CS_285_Deep_RL_2023 | CS_285_Lecture_13_Part_1.txt | all right welcome to lecture 13 of cs285 today we're going to talk about exploration today's lecture is going to be a little bit on the longer side but to make up for it the next lecture which is going to be part 2 of exploration will be quite a bit shorter so if this lecture feels like it's going on for a while we're going to give you a little bit of a break for wednesday's lecture where it won't be quite as long all right let's get started so what's the problem that we're going to talk about today well the problem can be illustrated with an example like this if you're working on homework 3 if you're finishing that up now you might have tried a few different atari games some of these atari games are actually pretty easy so if you want to play pong or breakout mostly your homework 3 q learning implementation will probably work pretty well in those tasks but some other atari games are actually quite a bit harder so this game for example called montezuma's revenge is almost impossible if you try to run your q learning implementation on this game you'll probably find that it doesn't get very far so why is that why is the game on the right so much harder than the game on the left well it's not because the game itself is necessarily harder for a person playing montezuma's revenge you know i've played it myself i don't think it's a very good game but it's not a particularly difficult one in fact getting that trick shot in breakout where it bounces around up top is probably harder actually than playing montezuma's revenge but it's very hard for an rl agent to play this game so in montezuma's revenge the goal is to traverse this pyramid it's made up of multiple different rooms and each room has a different challenge so in this first room there's a skull that bounces around that kills you if you step on it and you have to go fetch the key and then open one of the doors at the top now
we understand some of these things we understand that the key is a good thing that keys open doors we might not know what exactly the skull is supposed to do but we kind of know that skulls are probably not good things and touching the skull is probably not a good idea now in the game you get a reward for getting the key um you also get a reward for opening the door getting killed by the skull actually doesn't do anything so you lose a life but you don't actually get a negative reward for that if you lose all your lives then you start over that's also not obvious whether that's good or bad because when you start over you might get another opportunity to pick up the key and maybe that's good because then you get the reward for the key again so the reward structure of the game doesn't really guide you each step of the way and while we know ourselves that some of these things are good or bad the agent really doesn't and the agent might figure out that a good way to keep getting reward is to keep getting killed by the skull so it can pick up the key again instead of moving on to the next room the trouble is that finishing the game only weakly correlates with rewarding events it's not that you get little pieces of reward when you're on the right track and negative reward when you're on the wrong track so we know what to do because we understand what all these little sprites and pictures mean but the rl algorithm has to figure it out through trial and error to try to understand kind of how the algorithm feels when trying to play one of these games let's think of a different example an example that's a lot less intuitive for humans so there's a card game called mao it's also similar in principle to a game called calvinball the idea is that the only rule you may be told is this one so when you start playing the game you just don't know the rules of the game and one of the players who's the chairman can call you out for not following a rule but they don't explain the
rule to you they just tell you that you incur a penalty for failing to follow a rule and you can only discover the rules through trial and error and then this makes the game very frustrating and quite demanding so even though the rules might be fairly simple because you don't know those rules you discover them through trial and error the game ends up being very very challenging and the rules don't always make sense to you so the whole point of this game is for other players to make up rules that are kind of weird and counterintuitive so temporally extended tasks like montezuma's revenge or the game mao can become increasingly difficult based on how extended the task is and how little you know about the rules essentially even seemingly simple tasks where you don't know the rules and you have to discover them through trial and error as a result of poorly shaped rewards can prove to be exceptionally challenging and taking this a step further imagine that your goal in life was to win 50 games of mao so you're just going about your day you know you can go to class you can do your homework but if you happen to win 50 games of mao you're gonna get a million dollars now you're pretty unlikely to just sort of randomly go and do this so this is essentially the exploration problem the exploration problem relates to this setting where you have temporally delayed rewards where the structure of the task doesn't really tell you what are the things you need to do to get larger rewards in the future all right here's another example that looks very different at first but actually describes kind of a similar type of problem so this is a continuous control task so here this robotic hand is supposed to pick up a ball and move it to this location now this is also a difficult exploration problem because in order to figure out how to get reward by putting the object in the right place the hand needs to essentially wiggle the joints on the fingers randomly and again just like
a priori we don't understand the rules of the game now here the hand doesn't understand that moving and picking up objects is actually a thing all it knows is it can wiggle its fingers around and the reward is so delayed that it gets very little intermediate signal for actually grasping objects all right so let's talk a little bit more about this exploration thing in rl we often refer to the exploration versus exploitation problem as one where at each trial the agent has to essentially choose whether they want to do a better job of exploring by trying something they don't know how to do yet or whether they just want to do the thing that gets the largest reward so the agent in montezuma's revenge that's just going after the key each time they die is essentially performing a kind of exploitation they know one thing that gives them reward which is the key and they know one way to get that reward just to die and get the key again and they're just capitalizing on that getting the rewards they know how to get instead of trying to find better rewards elsewhere so there are two potential definitions of the exploration problem in light of this the first is how can an agent discover high-reward strategies that require a temporally extended sequence of complex behaviors that individually are not rewarding and the second is how can an agent decide whether to attempt new behaviors to discover ones with high reward or continue to do the best thing it knows so far and these are really the same problem because if you want to discover temporally extended sequences of behaviors that lead to high reward you need to decide whether you should be exploring more or whether you've already found the most temporally extended sequence and you should just keep doing that or maybe refine how well you do that so they're actually the same problem exploitation is doing what you know will yield the highest reward exploration is doing things you haven't done before in the hopes of getting
even higher reward and the trouble is you don't know which one of those you should be doing and of course they're not totally disjoint so for example in some cases you might want to exploit a little bit so that you can explore further if you figured out how to go to the second room in montezuma's revenge a good way to explore is to exploit a bit to go into that second room and then explore from there so it's not like you just have to flip a coin and decide between exploitation and exploration it's really kind of a dynamic and persistent decision you have to keep making so here are a few examples which i borrowed from some of david silver's lecture notes imagine you have to select which restaurant to go to perhaps not something that you're doing in 2020 but you know in the previous year we lived in back when going to restaurants was a thing exploitation would mean that you go to your favorite restaurant exploration means you try a new restaurant now this example makes it seem very binary and i think that binary sense is a little misleading because in reality it might be more complex than that like the example of montezuma's revenge i mentioned before where the best way to explore might actually be to exploit a little bit and then explore from the place that you landed online ad placement is a classic exploration exploitation trade-off problem exploitation means you show the most successful ad the one that makes you the most money exploration means you show a different perhaps randomly chosen advertisement oil drilling exploitation means maybe you drill at the best known location exploration means you find a new location to drill at which might not contain oil or it might contain even more oil now exploration is very hard both practically and also theoretically it's a theoretically hard and intractable problem so a question that we might ask when we go to devise exploration algorithms is can we derive an optimal exploration strategy and that's actually what we're going to talk about
in today's lecture but in order to do that we have to understand what does optimal even mean so one of the ways that we could define the optimality of our exploration strategy is in terms of regret against a bayes optimal strategy and we'll make this more formal later but intuitively you could imagine a perfect bayesian agent that maintains the uncertainty about how the world works and therefore makes optimal exploration decisions maybe optimal decisions to optimally resolve the unknowns about the world and now such an optimal bayesian agent would be intractable it would require estimating a really complex posterior over your mdps but you could use this as a gold standard and for your practical exploration algorithm measure its regret against this bayes optimal hypothetical agent we can kind of place different problem settings on a spectrum from theoretically tractable to theoretically intractable theoretically tractable means that we can quantify or understand whether your given exploration strategy is optimal meaning that it's close to the bayes optimal strategy or sub-optimal meaning it has much worse regret than the bayes optimal strategy intractable means that we cannot make this estimate exactly in that setting so the most theoretically tractable problems are what are called multi-armed bandit problems you can think of multi-armed bandit problems as one-time-step stateless rl problems so in rl you have a state an action and the action leads to the next state in a bandit you only take one action and then the episode terminates and there is no state so you just have to decide on an action and these are the most theoretically tractable problems because in multi-armed bandits we can actually understand which exploration strategies are theoretically optimal and which ones are not optimal in terms of their regret versus the bayes optimal agent then the next step up are the contextual bandit problems contextual bandit problems are just like multi-armed bandits
only they do have a state so they still only have one time step you still only take one action your action only affects your reward it does not affect the next state but you have some context which is kind of like your state so ad placement could be one such problem you observe something about the user maybe you have a feature vector about the user and then you have to select which ad to show to that user next step up are small finite mdps so these are mdps that can be solved exactly maybe using value iteration these are not nearly as theoretically tractable as bandits but there are some things we could say about exploration in small finite mdps and then of course the next step up the setting we're really concerned with in deep rl are large infinite mdps perhaps with continuous state spaces or very large state spaces like images and generally for these problems there isn't much that we can say theoretically but what we can do is we can take inspiration from the theoretically principled algorithms that we can devise in the bandit setting and then kind of adapt similar techniques in the large infinite mdps and hope that they work well so what makes an exploration problem tractable well for multi-armed bandits and contextual bandits one of the things we can do is we can formalize the exploration problem as another kind of mdp or rather a partially observed mdp a pomdp so while the multi-armed bandit is a single step problem you can view the problem of exploring in the multi-armed bandit as a multi-step problem because even though your actions don't affect your state they do affect what you know so if you explicitly reason about the evolution of your beliefs that now forms a temporal process which is technically a partially observed mdp and then you could solve it using pomdp methods and because these multi-armed bandits are fairly simple even the pomdp can actually be solved tractably at least in theory and then the next step up are small
finite mdps here you can frame exploration as bayesian model identification and then reason explicitly about things like value and information you're kind of extending similar ideas to the ones we had in bandits for large or infinite mdps these optimal methods don't work in the sense that we can't prove anything about them but we can still take inspiration from the optimal methods in the simpler settings and adapt them to these larger settings and find that they actually work well at least empirically even though we can't say anything about them theoretically and of course we use lots of hacks as we always do in deep reinforcement learning and that's the theme that you're going to find in this lecture that we'll have some very principled approaches in simpler smaller problems like multi-armed bandits we'll sort of adapt those approaches by analogy in larger mdps and then use some hacks to make them work well in practice okay so let's start with a little discussion of bandits what's a bandit anyway so the bandits that we're talking about when we talk about exploration are not these guys the bandit is actually kind of the drosophila of exploration problems so in the same way that biologists study fruit flies as their kind of simple model organism in reinforcement learning we study the bandit as our simple model organism and the bandit that we're referring to is this thing so the term multi-armed bandit is kind of one of these quaint american colloquialisms that stems from the term one-armed bandit so the one-armed bandit is a slot machine it's a machine in a casino where you pull the lever and with some random probability this thing will produce some reward maybe you'll lose your money or maybe you'll get money so in a one-armed bandit you have only one action just to pull the arm and you don't know what the reward for pulling that arm is and the reward in general will be stochastic so it's really a reward
distribution you can think of a multi-armed bandit as a bank of different slot machines and the decision you have to make is which slot machine to play so you have n of these machines and different machines will give different payoffs they'll have different reward distributions now just because you pulled one of the arms doesn't mean that's a bad arm maybe you pulled that arm and you got very little money but that's just because you got unlucky maybe in general that machine gives very high payoff and if you pull the arm repeatedly on average you might make a lot of money so you don't know the reward for each arm you don't know the reward distribution for each arm so you could assume that the reward of each arm is distributed according to some probability distribution and then you can imagine even learning this probability distribution so there's an unknown per action distribution for each arm so how can we define the bandit well we assume that the reward for each action is distributed according to some distribution and the distribution for action ai is parameterized by a parameter vector theta i so for example if your rewards are 0 or 1 you might be in a setting where the probability of getting reward 1 is theta i which is just a number and the probability of getting reward 0 is 1 minus theta i if your rewards are continuous maybe you have some continuous distribution and you don't really know what the theta i's are but you could assume that you have a prior on them you could use an uninformative prior if you like but in general we'd say we have some prior p of theta okay so that's defining our bandit now the cool thing about this is that you could also view this as defining a pomdp for exploration where the state is the vector of thetas for all of your actions now you don't know the state but if you knew the state then you could figure out what the right action is so instead of knowing the state you maintain a belief so you have some belief p hat over theta one through
theta n and you can update your belief each time you pull an arm so each time you pull an arm you observe the reward of that arm and you can update your belief about the theta corresponding to that arm and you could solve this pomdp to basically figure out what is the right sequence of actions to maximize your reward in this pomdp and this will yield the optimal exploration strategy because if it is the optimal policy in the pomdp it is the optimal thing to do under this kind of uncertainty and that will be the optimal exploration strategy the best exploration strategy you could possibly have now this is overkill the belief state is huge even for a simple pomdp with binary rewards remember your belief state is not the vector of thetas it's actually a probability distribution over thetas so even in the simple binary reward bandit where the thetas correspond to the probability of getting a reward of one the p hat of theta now needs to be some parametric class maybe a bunch of beta distributions you could have covariances between the different thetas so it's potentially a really complex belief state and the cool thing about bandits is that you can probably do very well with much simpler strategies than solving this full pomdp and the way that you would quantify doing well is by quantifying the regret of your strategy relative to how well actually solving the pomdp does so when we say that a particular exploration strategy is optimal what we really mean is that it is not much worse than actually solving the pomdp and not much worse is usually defined in a big o sense so how do we measure the goodness of an exploration algorithm well we do it in terms of regret and regret is the difference from the optimal policy at time step capital t so you can write the regret as capital t times the expected value of the reward of a star that's the optimal policy minus the sum of rewards that you actually got so the optimal policy will always take a star
and that means that if you're going for capital t steps it'll be capital t times the expected reward of a star so that's what the optimal policy will do and then your regret is the difference between that and the sum of rewards that you've actually gotten from running your strategy so this is the expected reward of the best action the best you can hope for in expectation and this is the actual reward of the action that was actually taken all right so in the next portion i'm going to talk about how we can minimize regret in terms of closing the gap between our tractable strategies and this pomdp that we've defined |
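to make the regret definition concrete, here is a small simulation on a bernoulli multi-armed bandit that measures capital t times the expected reward of a star minus the rewards actually collected; epsilon-greedy is used purely as an illustrative strategy (it is not the bayes optimal agent discussed above), and all names here are made up for this sketch:

```python
import random

def run_bandit(means, n_steps, epsilon, seed=0):
    """epsilon-greedy on a bernoulli multi-armed bandit, returning the regret
    Reg(T) = T * E[r(a*)] - (sum of rewards actually collected)."""
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n        # times each arm was pulled
    totals = [0.0] * n      # total reward collected from each arm
    collected = 0.0
    for _ in range(n_steps):
        if rng.random() < epsilon:
            a = rng.randrange(n)  # explore: pull a random arm
        else:
            # exploit: pull the arm with the best empirical mean (untried arms first)
            est = [totals[i] / counts[i] if counts[i] else float("inf")
                   for i in range(n)]
            a = max(range(n), key=lambda i: est[i])
        r = 1.0 if rng.random() < means[a] else 0.0  # bernoulli reward, theta_i = means[a]
        counts[a] += 1
        totals[a] += r
        collected += r
    return n_steps * max(means) - collected
```

with pure random pulls (epsilon = 1) on arms with means 0.2 and 0.8 the regret grows linearly at roughly 0.3 per step, while even naive epsilon-greedy does far better; that gap is what more careful exploration strategies try to close relative to the bayes optimal agent.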
CS_285_Deep_RL_2023 | CS_285_Lecture_11_Part_4.txt | all right next let's briefly discuss how we can use these uncertainty aware models for control and go through a few examples of some papers that have actually used things like this all right so let's say that we've trained our uncertainty aware model perhaps by using a bootstrap ensemble and now we'd like to use it in our model based rl version 1.5 algorithm to actually make decisions so before when we were planning we were essentially optimizing the following objective we were optimizing the sum from t equals one to h of the reward at s t a t where s t plus one is equal to f of s t a t so whether you use random shooting cem whatever this is essentially the problem that you're solving now we have n possible models and what we would like to do is choose a sequence of actions a1 through ah that maximizes the reward on average over all the models so now our objective is the sum over the models times one over n of the sum over the time steps for the rewards for the states predicted by that model where s t comma i is given by the dynamics for model i so this is the case if you learn a distribution over deterministic models if you have stochastic models then you would have an expectation for each model with respect to its distribution so in general for some candidate action sequence a1 through ah step one is to sample a model from p of theta given d which if you have a bootstrap ensemble amounts to just choosing one of the n models randomly step two at each time step sample s t plus 1 from p of s t plus 1 given s t a t and that parameter vector that you sampled from the posterior step 3 calculate the reward as the sum over all the time steps for those predicted states and then step four is to repeat steps one through three to accumulate the average reward as necessary so this would be the recipe for any general representation for the posterior p of theta given d if you have a bootstrap ensemble you could also
sum over all the models instead of sampling them that can be simpler if you have a small number of models if you estimate your posterior with some other method like a bayesian neural net then you can sample multiple different random parameter vectors and estimate the reward for each one now this is not the only option that you could have this is a sampling procedure for evaluating the reward you could imagine other procedures for instance you could evaluate possible next states from every model at every time step and then perform something like a moment matching approximation to figure out an estimate of the actual state distribution of the actual distribution p of s t plus one for instance by estimating its mean and variance and other methods have done things like that as well but a simple procedure is to use this process that i have on the slide to evaluate the total reward of every candidate action sequence a1 through ah and then optimize over the action sequences using your favorite optimization method like random shooting or the cross-entropy method it's also possible to adapt continuous optimization methods like lqr to this setting in that case something called the reparametrization trick can be very useful which we will cover next week okay so take a moment to look over the slide make sure that this really makes sense to you this is a pretty important slide to understand if you want to know how to implement model based rl with epistemic uncertainty okay so does this basic scheme work well here are some plots i'm going to show this from a paper called deep reinforcement learning in a handful of trials so before we saw on the half cheetah task how this model based rl version 1.5 algorithm could get us from a reward of zero to about 500 if we actually implement epistemic uncertainty using bootstrap ensembles then we can get model based rl to get a reward over 6 000 in about the same amount of time as the 1.5 algorithm so especially
in low data regimes these epistemic uncertainty estimates really do make a really big difference in performance here's a more recent illustration of a model based rl method with an ensemble of models this is actually a real world robotic experiment where this robotic hand is manipulating objects in the palm this is using essentially model based rl version 1.5 with a particularly sophisticated model and an ensemble for uncertainty estimation and this hand learns directly by interacting with these objects and in about three hours it can perform a full turn with both objects in the palm so this is 1.5 hours and this is two hours and it can do a full 180 turn and then after four hours it can do it pretty reliably so something about the uncertainty estimation really does seem to work and it does seem to be quite important for these model based rl methods so if you want to implement model based rl it's highly recommended to consider epistemic uncertainty estimation if you want to learn more about epistemic uncertainty and model based rl here are a few suggested readings this paper by marc deisenroth called pilco this is an older paper from around 2011.
this paper uses gaussian processes rather than neural nets for model learning but this was sort of one of the foundational papers in really establishing the importance of epistemic uncertainty estimation in model based reinforcement learning it has some good discussion of why it matters so i would encourage you to read this if you're interested in the topic more recent papers this is the model based rl version 1.5 paper that i mentioned before that does kind of okay but not great on half cheetah this is the paper that introduced the ensembles for model based rl and managed to get you know comparable results to model free rl this is a paper that i would encourage you to check out if you're especially interested in the intersection of model based and model free rl that uses models for estimating value functions and this is another closely related paper that also integrates epistemic uncertainty at multiple different points in the previous method |
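the four-step evaluation procedure described in this section (sample a model from the posterior, roll out the candidate actions, sum the rewards, average over samples) together with a random-shooting optimizer can be sketched as follows; this assumes an ensemble of deterministic dynamics functions, and every name here is illustrative rather than taken from any of the papers above:

```python
import random

def evaluate_actions(models, reward_fn, s0, actions, n_samples=10, seed=0):
    """score a candidate action sequence under epistemic uncertainty.

    models is a list of deterministic dynamics functions f(s, a) -> s',
    standing in for a bootstrap ensemble."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        f = rng.choice(models)        # step 1: sample a model theta ~ p(theta | D)
        s, ret = s0, 0.0
        for a in actions:             # step 2: roll out the actions under that model
            s = f(s, a)
            ret += reward_fn(s, a)    # step 3: sum the rewards for predicted states
        total += ret
    return total / n_samples          # step 4: average over repeated samples

def plan(models, reward_fn, s0, action_space, horizon, k=100, seed=0):
    """random shooting: pick the best of k random candidate action sequences."""
    rng = random.Random(seed)
    candidates = [[rng.choice(action_space) for _ in range(horizon)]
                  for _ in range(k)]
    return max(candidates,
               key=lambda acts: evaluate_actions(models, reward_fn, s0, acts))
```

for a bootstrap ensemble with a small number of models you could instead replace the sampling in step 1 with an explicit average over all n models, as noted in the lecture.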
CS_285_Deep_RL_2023 | CS_285_Lecture_1_Introduction_Part_3.txt | but why should we study deep reinforcement learning today well as I mentioned earlier recent progress on data-driven large-scale AI systems has led to some pretty impressive results but the methods that are trained to simply copy data produced by humans they're mainly impressive because they produce things that look like human generated results but in many cases we actually want algorithms that will do better than the typical human data either because the human data is not good or because it's hard to obtain or because we really do want the highest possible performance like in the case of alphago we want solutions that are impressive because the machine didn't need to be told how to do something because it discovered it on its own because it discovered a solution that was better or because it discovered a solution in a situation where it didn't have the benefit of human foresight to provide the kind of training data that it needed so recall that a lot of these very successful data driven methods work on the basis of density estimation which has particular implications it means that these methods will produce the kinds of data that humans tend to produce but it also means that they in some sense won't go beyond good human behavior they might be much better at indexing into human data as is certainly the case with large language models they have a lot more knowledge but not necessarily better at utilizing that knowledge to solve concrete problems if you tell a large language model for example to persuade somebody that it's in their best interest to go see a doctor the language model probably won't be able to persuade them much better than a person would despite the fact that it has this huge repository of Internet knowledge to draw on so where does that leave us well we've got these data-driven AI systems that learn about the real world from data potentially huge amounts of data but
they don't really try to do better than the data in any meaningful sense and we've got these reinforcement learning systems and they can optimize a goal with emergent behavior and that seems like something that should address one of the major shortcomings of these data-driven AI methods but of course we need to figure out how to use these reinforcement learning methods at scale we need to combine them with the kinds of huge models and huge data sets that have been so successful and that's really where the deep part in deep reinforcement learning comes in so data driven AI is all about using data reinforcement learning is all about optimization deep reinforcement learning is about this kind of optimization at scale and data without optimization basically doesn't allow us to solve new problems in new ways it might allow us to be very good at indexing into large data sets to figure out solutions that are human-like but not necessarily solutions that are superhuman something that I often like to bring up in the context of this discussion is an article written by Richard Sutton so Richard Sutton is actually one of the pioneers of reinforcement learning he was basically the person who popularized reinforcement learning in computer science whereas previous to that it was really a subject of study primarily in psychology so in many ways he sort of founded the study of reinforcement learning in CS Richard Sutton wrote an article in 2019 called The Bitter Lesson for those of you that haven't read it I very strongly encourage you to read through it it provides a very concise and very clear explanation of why we've seen this revolution in data-driven AI over the last few years and in that essay he writes that we have to learn the bitter lesson that building in how we think we think does not work in the long run the two methods that seem to scale arbitrarily are learning and search what he's arguing here is essentially that if we want very powerful learning machines we should
build machines that are very good at using data and very good at being scaled up and not necessarily worry so much about engineering these systems so that they solve problems the way that we think that humans solve problems as an example we might imagine building a system for detecting cars by somehow engineering some detectors for like wheels and uh and headlights and and things like that and then try to program in that well a car someone has four wheels and like two headlights on the front and two in the back so if you see some wheels and some headlights well that's probably a car and we can basically program that in and that's actually how people used to build computer vision systems um maybe about a decade ago but these days we very rarely build perception systems that way instead what we do is we get lots of examples of cars labeled them as cars and let the computer figure it out and that's basically what Richard Sutton is saying that let's not worry so much about building in how we think the problem should be solved and let's instead of focus on scalable learning machines the machine Learning Community has had sort of a Perpetual debate about the degree to which we should be building in these kinds of components and that's why this this article uh was so influential but a lot of people who read this article take away kind of a funny uh impression Maybe that the emphasis is really on just scale and not really on the particular algorithm that is being scaled up so maybe it's okay if we just take let's say supervised learning methods and as long as we can figure out how to basically Shuffle more data into gpus or build larger server Farms that's really all that matters data plus lots of machines lots of computers and not not worry about how the problem is solved but that's not actually what the essay says notice how it says learning and search it doesn't say learning and gpus it doesn't say learning in Big Data so it says learning and search and there's a very 
important reason for that learning is about extracting patterns from data you look at the world you pull in some data and you train some learning machine on that and it finds the patterns that are in there search is about using computation to extract inferences Rich Sutton is using the term search in a very particular very technical sense that it's commonly used in reinforcement learning search doesn't mean like a star search necessarily search means some kind of computation or optimization that you use to extract inferences so search is not about getting more data searches about using what you've got to reach more uh interesting and more meaningful conclusions search is essentially optimal Center some kind of optimization process that uses typically iterative computation to make rational decisions and it's important to have both of those things because learning is what allows you to understand the world and searches what allows you to leverage that understanding for interesting emergent Behavior and you really need both if you want to have flexible and rational and optimal decision making in real world settings you need to understand how the world works and then instead of just using your understanding to regurgitate what you've seen before use that understanding to find a better solution than what you've seen before that's basically what deep reinforced learning tries to do data without optimization doesn't allow us to solve new problems in new ways optimization without data without experience it's hard to apply in the real world that side of things like simulators where you can write down equations of motion but if you have both of those things then you can start to solve real world problems in more optimal ways foreign here where this view is not just about how to control robots or how to play video games I specifically emphasized in the previous section that deep reinforcement learning methods have been applied very fruitfully to arrange our other domains too 
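A minimal sketch of this learning-plus-search recipe (my own toy example, not from the lecture): "learning" fits a model of reward from noisy samples, and "search" then optimizes the learned model to propose an action better than any single noisy sample suggests. The reward function, sample counts, and grid resolution are all hypothetical choices for illustration.

```python
import random
random.seed(0)

# "learning": fit a quadratic reward model r(a) ~ c2*a^2 + c1*a + c0
# from noisy samples of a hypothetical reward that peaks at a = 1
def true_reward(a):
    return -(a - 1.0) ** 2

data = [(a, true_reward(a) + random.gauss(0.0, 0.1))
        for a in (random.uniform(-2.0, 2.0) for _ in range(200))]

# least-squares fit via the 3x3 normal equations, solved with Cramer's rule
S = [sum(a ** k for a, _ in data) for k in range(5)]      # moment sums of a
T = [sum(r * a ** k for a, r in data) for k in range(3)]  # reward-weighted sums

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[S[4], S[3], S[2]], [S[3], S[2], S[1]], [S[2], S[1], S[0]]]
b = [T[2], T[1], T[0]]
d = det3(A)
c2, c1, c0 = [det3([[b[i] if j == k else A[i][j] for j in range(3)]
                    for i in range(3)]) / d for k in range(3)]

def model(a):
    return c2 * a * a + c1 * a + c0

# "search": optimize the learned model rather than replaying the data
best_action = max((i * 0.004 - 2.0 for i in range(1001)), key=model)
print(best_action)  # close to the true optimum a = 1.0
```

The point is that `best_action` comes from optimization over the learned model, not from looking up the best-scoring sample in the data set — which is exactly the division of labor between learning and search described above.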
And there's actually a deep reason for this. To try to understand this reason, let's ask a very basic question: why do we need machine learning? And as an aside, to help us answer that question, we can ask an even more basic question: why do we need brains? The neuroscientist Daniel Wolpert, who knows quite a bit about brains, had this to say on the topic: we have a brain for one reason and one reason only, and that's to produce adaptable and complex movements; movement is the only way we have of affecting the world around us, and I believe that to understand movement is to understand the whole brain. Now, it won't surprise you to know that Daniel Wolpert works on the neuroscience of motor control, but I think this quote is very thought-provoking, and I think we can apply the same intuition to machine learning and formulate this postulate: perhaps we need machine learning for one reason and one reason only, and that's to produce adaptable and complex decisions. That makes a lot of sense: in the same way that your brain is only useful to you insofar as it moves your body, because that's the only way that it affects the world around it, a machine learning system is only useful insofar as it makes good decisions, because that's the only thing it's outputting. And now we can start to view all machine learning problems through this lens, not as problems of prediction but as problems of decision making. This is obvious if you're controlling a robot — your decision is how to move the joints. It's obvious if you're driving a car — your decision is how to steer the car. But even something like a computer vision system is in the end a decision-making system: it makes a decision, which could be the image label, but really the decision has implications for what happens downstream of that image label. Maybe this perception system is detecting how many cars there are at an intersection, and that label will be used to determine how to route traffic, so it has long-term implications. Maybe the computer vision system is detecting people in a security camera, and it's going to call security if it sees someone where they shouldn't be — well, that's definitely a decision that could lead to some very complex and very difficult-to-model outcomes. If you view all of the outputs of machine learning problems as decisions, then it becomes clearer that all machine learning problems are really reinforcement learning problems in disguise; it's just that in some cases we have the privilege of supervised, labeled data that can aid us in solving them. And while this perspective might be a little bit reductionist, I think it's important to keep in mind, because it really tells us that those building blocks, learning and search, are not just special things that we want for robots and video game playing — they're really general building blocks of AI systems. And that brings us to some big questions, like: how do we build intelligent machines? Very general intelligent machines — not just machines that can detect objects in images, but, you know, things like this, or this, or this, or, if you're more inclined that way, things like this — the kinds of intelligent machines that were popularized in science fiction, that capture the imagination. Maybe they're quite a ways away, but how do we start taking steps towards this kind of thing? I think deep reinforcement learning forms a significant part of that, and I think if we study it now, we might put ourselves on the path to eventually answer some pretty fundamental questions. So why should we study deep reinforcement learning today? Well, part of the answer is that big end-to-end trained models seem to work quite well: if we use large data sets and large models like Transformers, we can solve some pretty impressive problems. And at the same time, we have RL algorithms that we could feasibly combine with deep neural networks; we've figured out a lot about how to implement RL algorithms so they can be used to train these kinds of big end-to-end models. And yet learning-based control in truly open-world settings remains a major open challenge. There are some initial results, including the robotics results I presented and the results in other domains, that show an inkling of the capability of these systems, but a lot of potential has yet to be realized, and I'll talk about some of that potential in today's lecture and also over the course of this class, and also discuss how some of these ideas can maybe bring us closer. So it's a very exciting time, I think, to study this topic, because in some ways many of the puzzle pieces are falling into place, and yet major questions remain — questions that you yourselves could answer in your own future work. But before I get into that, I want to discuss a little bit of the broader picture of the reinforcement learning field. Besides the basic problem of maximizing reward functions, what are other problems that we need to solve to enable real-world sequential decision making? Because this course is not just about reward maximization; it's also about a variety of other problems that crop up when we study decision making and control in realistic data-driven settings, and the kinds of methods that could address them. For example, basic reinforcement learning deals with maximizing rewards, but this is not the only problem that matters for sequential decision making. We'll cover more advanced topics like learning reward functions from examples, which is referred to as inverse reinforcement learning; transferring knowledge between domains, like transfer learning and meta-learning; learning to predict and using prediction to act; and so on. Here's one question: where do rewards come from? If you're playing a video game, it's pretty obvious — maybe the reward function is the score in the video game, and you kind of don't have to think about it very hard. But in other settings you do. What if you want to get a robot to pick up a jug and pour a glass of water? Well, any child could do this, but just figuring out the reward function — is the water in the glass? — is itself a complex perception problem. There's a paper that was published by some folks at UC Berkeley on exploration, actually, about four or five years ago, that had this nice quote: as human agents, we are accustomed to operating with rewards that are so sparse that we only experience them once or twice in a lifetime, if at all. What this means is that for a lot of the things that humans do that are very impressive, the reward might be so delayed that it's very difficult to imagine learning just from that reward signal. For example, the reward that you'll receive for, let's say, completing a PhD degree — you'll only get that reward once, and you'll maybe experience some satisfaction; the real outcome might be what you do afterwards with that degree. And yet you might set yourself on the path to do that, and clearly it's not something that you learn through trial and error by attempting many, many PhD degrees in the past. This is actually a quote that was posted on Reddit, where commenters replied to it — and indeed, we know that there is actually a structure in the human brain, the basal ganglia, which is responsible for the reward signal that the brain uses for reinforcement. This is actually something that's been studied quite a lot, and it's a non-trivial structure; you can see it takes up quite a bit of space, so clearly it's doing something sophisticated. And it's not hard to imagine, for example, for a cheetah that needs to chase down a gazelle: if the cheetah learned through trial and error, receiving the reward only when it caught the gazelle, that's a pretty ridiculous image of a learning system. If the cheetah just runs around on the savannah hoping to randomly stumble into a gazelle, then randomly eat it, and only then realize that catching gazelles is a good idea — well, that cheetah will probably die of starvation. Of course cheetahs don't learn in this way: they might learn from observing other cheetahs, they might learn from their parents, they might learn from all sorts of other signals, but clearly they're not learning from rewards obtained only from eating the meat of the gazelle at the end of a successful hunt. So there's a lot that goes into these reward signals, and you could imagine extracting other, more useful forms of supervision. You could learn from demonstrations, either by directly copying the observed behavior or even by inferring rewards from observed behavior, via something called inverse reinforcement learning. You can learn from observing the world — learn to predict what will happen next, even if you're not sure what you're supposed to be doing, and then leverage that knowledge later once you're more aware of what your task is. You can employ unsupervised learning and unsupervised feature extraction, things like that. You can also transfer knowledge from other tasks, and you can even use meta-learning, where you learn to adapt more quickly from your past experience of solving other tasks. These are all things that we could try to leverage, and these are all things that we'll actually learn about in this course. Here's an example of imitation learning. This is actually a fairly old example at this point, from about eight years ago, from some work from Nvidia, showing a purely imitation-based method for autonomous driving. Now, this method tries to directly copy the actions of the observed human driver, but of course you could do a lot better: you could, for example, infer the intent. And we know this is something that humans do — this is a psychology study, where the test subject is the child on the right-hand side. You can see the child here is not going to try to imitate what the experimenter is doing, because clearly the experimenter is not doing something very smart; what the child will do instead is infer the intent and then take a very different sequence of actions that is better for fulfilling that intent, rather than simply copying them. This is really the hallmark of human imitation: when we say that a person imitates somebody else, they're not literally observing someone's muscle activations and performing the same muscle activations. At some level they're always inferring something about what that other creature or person is attempting to do, and then doing it in their own way. It might be very literal, where they still carry out the same motions but figure out the commands to their muscles that will create those motions, or it might be even more abstract, like it is here, where they carry out entirely different actions that lead to the desired outcome. Okay, inverse reinforcement learning algorithms can actually be used with robots. This is again work that's at this point pretty old — it's about eight years old — that shows an inverse reinforcement learning algorithm where this robot infers the intent of the human demonstrator: seeing this pouring motion, it figures out that the point is to really seek out that yellow cup and to pour the contents of the orange cup into the yellow cup, and once it has inferred that intent, it can perform the task in a variety of settings. Prediction is a really big part of control. Prediction is separate from how we typically think of model-free reinforcement learning, but there's ample evidence in neuroscience and psychology that prediction is a very important part of how humans and animals learn about their world. We could imagine predictive models in a very literal sense, where you actually predict your future sensory readings, and you can implement real-world predictive models: here a robot plays around with objects in its environment, collects some data, and then learns to predict what it will see in response to different actions. The different columns here show predicted future images in response to different motor commands. This is quite a while back — this is seven years ago — so you can see that the predictions here are not very high quality, but they capture the gist of what the robot is trying to do, and they can be used to control objects. So you can tell it to move this particular object, marked in red, to the green location; it'll imagine the movement, and then it will actually actuate the arm to move the object in that way. So predictive models can allow you to solve new tasks, and you can use this as a very powerful tool for emergent behavior. You could, for example, command the robot to move some objects, and it might figure out that it needs to pick up a tool to move those two objects together. Here's another tool-use example: here it figures out that the L-shaped tool can slide the blue object, and here there's an emergent tool-use scenario where it figures out that, to move these two pieces of trash, the water bottle makes for a nice improvised tool. And predictive models have really come a long way, so in recent years they've gotten a lot better with modern advances in generative modeling. This is a diffusion-based video prediction model that is being used to synthesize clips of driving videos. The first three frames here are real; the remaining frames are actually synthesized, and you can see that the model will produce realistic camera movement, it will introduce new objects as the car turns, and it will even predict the motion of the other cars with some reasonable fidelity. In these examples, by the way, the left video in each pair is the real one; the right one is the synthesized one. And here the same model is being run on robotic videos similar to the ones that I showed before, just so you can see the contrast from 2017 to 2022: you can see that now the arm is clear and crisp, the objects move in realistic ways, and so forth. There's also a lot of interesting progress, especially in the past year, on leveraging advances in pre-trained models. So when we do reinforcement learning, we typically don't have to do it from scratch; what we could do is use a model pre-trained on large amounts of internet data and then use it for control. This is actually an imitation learning example — it doesn't do RL, it actually does direct imitation — but it is doing learning-based control. This is the RT-2 model, which uses first a language model that is pre-trained on language, then a vision-language model that uses that language model to process internet images for things like question answering — like, what is happening in the image? Say, it's a gray donkey walking down the street. So now the model understands pictures, it understands text, and then that model is further fine-tuned to output robot actions, so that when it's told, what should the robot do to pick up the chips, it'll output the numerical values for the actions that will actually pick up the chips. So now it can bring in knowledge that it learned from the internet to perform this task more effectively. Here are some examples of the kinds of intelligent tasks that this model can perform. It can be told to move the banana to the bottle; the robot data has examples of moving bananas, but to understand what it means to move it to the bottle, it has to leverage internet data. Here it's asked to solve a math problem by putting the banana on the answer of the math problem. Here it's told to put the strawberry into the correct bowl; to figure out what "correct bowl" means, it needs to recognize the fruits in each of the bowls and figure out that the strawberry bowl is in fact the correct one. Here are some more examples: pick up an object that is different from all the other objects. Now, it knows how to pick up objects from the robot data, but it doesn't know from that what "different from all the others" means; for that it has to leverage internet data, and it figures out that the bar is the different object, because all the other objects are bottles. It can understand instructions in other languages, even though the robot data is only annotated in one language, and so on. Okay, so these are some examples of the kinds of problems that we might study in the context of learning-based decision making
besides the core RL problems. But to conclude this lecture, I want to end on maybe a somewhat more grandiose point. I want to come back to this question — how do we build intelligent machines? — and really argue that the basic building blocks of deep RL might be very good building blocks for answering this question. This is of course a controversial statement, and I don't expect everybody to agree with me on it, but this is a big part of why I'm excited about this topic, and I hope to convey some of that excitement to you. So imagine you have to build an intelligent machine, something that is as intelligent as a person. Where would you start? Well, in the olden days, the way we would think about this is that maybe we need to understand the brain: the brain has a lot of parts, so let's understand what those parts are, figure out how each of them works, and then write computer programs to emulate the behavior of each of those parts. Of course, our modern understanding of the brain is more advanced than what it was in the 19th century — the parts of the brain more closely reflect their actual function — but this is still a very difficult problem, because each of the parts is very complex, and we'd have to do a bunch of programming to code up the behavior of each of the parts and a bunch more coding to wire them together. We might be at this for a very long time; it might just be a very, very difficult way to implement an intelligent machine. It might actually take a lot more intelligence on our part than we actually have. So if we hypothesize that learning might be the basis of intelligence, that might actually offer us a much easier way to address this problem. And here's an argument for why learning might be the basis of intelligence. There are some things that we can all do, like walking, so it might be reasonably argued that maybe those things are sort of built into our brains somehow. But there are also some things that we can only learn, like driving a car — clearly driving a car is not built into our brains, because cars weren't around when our brains evolved. And we can learn a huge variety of things, including very difficult things; therefore our learning mechanisms are likely powerful enough to do everything that we associate with intelligence. It may be that in practice we don't actually use our learning mechanisms for some things, like walking, but we might hypothesize that maybe they're powerful enough that, if we didn't have those things built in, we could figure them out anyway. That may or may not be true, but I think there's a pretty good reason to believe it might be true. It might still be very convenient to hard-code a few really important bits, but let's not get distracted by that part. We can further hypothesize that not only is learning the basis of intelligence, but in fact maybe there's actually a single learning procedure that underlies all that we associate with intelligent behavior. Now, that's a more radical statement: it basically says that the way that we learn how to see, and the way that we learn how to talk, and how to hear, is at some level the same — instead of having an algorithm for every module, there is a single flexible algorithm that, placed in the right context, implements all of the modules, everything that we need in the brain. And there is some circumstantial evidence to indicate that this might in fact be the case. For example — and these are some slides borrowed from Andrew Ng — you can build an electrode array that you put on your tongue, attach that array to a camera, and learn how to perceive visual signals through your tongue. You can take an animal, a ferret, disconnect the optic nerve from the visual cortex and plug it into the auditory cortex, and after a while the ferret will regain some degree of visual acuity, which means that its auditory cortex can essentially learn to process visual signals. So these things kind of indicate that perhaps there's a degree of generality or homogeneity to the brain, at least for the neocortex, such that it can adapt to whatever sensory input is provided, which might indicate that there's one algorithm. And if there is one algorithm, what does this one algorithm need to be able to do? Well, it needs to interpret rich sensory inputs, and it needs to choose complex actions. To do both of those things, we need large, high-capacity models, because that's the only way we know how to deal with rich sensory inputs, and we need reinforcement learning, because that's the mathematical formalism we use to take actions. So why deep reinforcement learning? Well, the deep part provides us with scalable learning from large, complex data sets, and the reinforcement learning part gives us the optimization, the ability to take actions — the combination of learning and search. Deep learning is great for the learning; reinforcement learning is the way that we do the search. And in fact there is some evidence in neuroscience for both of these things. There's evidence that the kinds of representations acquired by deep neural networks have some statistical similarity to representations that are observed in the brain. That doesn't mean that the brain works the same way that deep nets do; it just means that at some level, when you process lots of data and extract suitable representations, they end up looking similar, which could have more to do with the fact that a large enough learning machine just pulls out those patterns in the data, because that's what the data is made of. Whether that says something about deep learning is, I think, a much harder question to answer, but the evidence suggests that some kind of representational similarity exists for visual percepts, for auditory features, and even for the sense of touch. And the experiments done to ascertain this structure are a little bit creative: the brain signals indicating the kinds of features that, in this case, monkeys use for touch are obtained from recordings from monkey neurons, while the deep learning experiments are done by actually taking a glove dusted with white dust, getting a person to touch objects, and then using a deep neural network to discover patterns in the dust patterns on the glove. So these interesting experiments suggest that maybe the statistical properties of features extracted by sufficiently powerful learning machines resemble the features in the brain. And there's plenty of evidence in favor of reinforcement learning being at least one of the mechanisms underlying decision making in humans and animals; in fact, reinforcement learning actually emerged from the study of animal intelligence. We know now that percepts that anticipate reward become associated with similar firing patterns as the reward itself, which is exactly what we would expect from a temporal difference learning process; the basal ganglia appears to be a kind of reward system; and model-free-RL-like adaptation is often a good fit for experimental data of animal adaptation — not always, but often. But the picture is not complete, right? So all of these bits of circumstantial evidence might suggest that the tools of deep learning and reinforcement learning might be good tools for tackling the problem of intelligence, but the problem is clearly not solved. We have great methods that can learn from huge amounts of data by using deep learning; we have great optimization methods from RL; we don't yet have amazing methods that do both at the largest scales. RL has been made much more scalable in recent years — it can tackle things like real-world robotics problems — but the kind of huge-scale language modeling and generative modeling applications still primarily use supervised learning. So there are still some algorithmic building blocks that are necessary. And furthermore, humans learn incredibly quickly, whereas deep RL methods typically require large amounts of data; humans reuse past knowledge, whereas transfer learning in RL is still an open problem; it's not always clear what the reward function should be; and it's not always clear what the role of prediction should be. It seems like these methods can be very powerful, but how do they fit in with model-free methods — are they just different things, or can they be reconciled in some way? So all these question marks, I think, give us ample space for additional research that we can do in this area, and perhaps, if the tools of deep learning and reinforcement learning are the right tools for building enormously powerful artificial intelligence systems, then maybe studying these questions can allow us to make some headway on that problem. And ultimately, I think that we can get away from this picture of thinking of intelligent systems as a collection of modules to implement, and instead think of a very elegant and simple framework where we have a general learning algorithm that can figure out whatever problem is posed to it. In fact, this idea is not by any means new — it's not something that was created in the 21st century; it's not even something that was created for deep learning, or even in the age of machine learning. Here's a quote that I think very nicely exemplifies this perspective: instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education, one would obtain the adult brain. Who said this? I'll try
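The temporal-difference learning process mentioned earlier — where a percept that reliably anticipates reward comes to carry nearly the same value as the reward itself — can be sketched with a tabular TD(0) update. The two-state chain here is my own hypothetical illustration, not an example from the lecture:

```python
# A tiny chain: "cue" -> "reward" -> terminal, with reward 1 on leaving "reward".
# TD(0) update: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))
alpha, gamma = 0.1, 1.0
V = {"cue": 0.0, "reward": 0.0}

for episode in range(500):
    # step 1: cue -> reward, no reward received yet
    V["cue"] += alpha * (0.0 + gamma * V["reward"] - V["cue"])
    # step 2: reward -> terminal, reward = 1 (terminal value is 0)
    V["reward"] += alpha * (1.0 + gamma * 0.0 - V["reward"])

# the cue state ends up with nearly the same value as the reward itself,
# mirroring the reward-anticipating firing patterns described above
print(V["cue"], V["reward"])
```

The value of the reward-predicting "cue" state is pulled up toward the reward through the bootstrapped TD error, which is the mechanism the dopamine-signal analogy refers to.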
CS_285_Deep_RL_2023 | CS_285_Lecture_21_RL_with_Sequence_Models_Language_Models_Part_1.txt | all right today we're going to talk about RL with sequence models let's start with a discussion of what happens when we go beyond regular mtps so in the beginning of the course we saw that Beyond uh fully observed mtps we can start thinking about partially observed mdps where we only get limited observations of the environment and that's going to get a started thinking about sequence models in RL so the trouble with observations is that unlike the marvian states that we've been using in for example most of our value based or model based algorithms the observations don't obey the markof property which means that simply from observing the current observation you don't necessarily have enough information to infer the full state of the environment which means that previous observations can actually give you more information in contrast with states that is never the case if You observe the current state also knowing the previous States never gives you more information for predicting the future because the current state deseares the future from the past so uh when you when you're operating on partial observations the state is not known and actually in most cases you don't even have a representation of the state so not only do you not know what the current state is you don't even know what sort of the data type of the state is so uh to recap something that we discussed in the beginning of the course with partial observability let's say that uh the the environment is this cheetah chasing this gazelle but your observation is an image of the scene now underlying that observation is some true State let's say the position and momentum and the body configuration of the animals now that state fully describes the configuration of the system in the sense that if you know the current state that tells you everything that you need to predict the future it doesn't mean the future is 
deterministic it could be the future is still stochastic it just means that previous States won't help you in predicting that future if you already have a current state but if uh you have only observations then the observations could be partial maybe there's like a car driving in front of the Cheeto so you can't see it the state hasn't really changed but the observation now doesn't contain enough information to infer the current state if you look at the previous observation you might get more information now the trouble is that most real world problems are like this so a lot of the algorithms we discussed kind of assume that you have a full State not all of them and we'll get to that in a second but most real world problems don't actually give you a full State and in reality in the real world it's really kind of degrees of partial observability in the sense that all problems are really partially observed and that you never are really given the full you know configuration of the system but sometimes the partial observability is so minor that in effect you can just pretend that the observation is a state and everything would work out fine so Atari games for example are like this where in a lot of Atari games even though they are technically partially observed because the state of the system is like the the ram of the Atari emulator uh in reality the emission contains almost all that information but in some cases they are very partially observed for example if you're driving a car you might have another vehicle uh in your uh blind spot uh for example for this red car you might not might not see the blue car or the truck but they're very relevant to its future state so these are situations partial absorbability really matters uh if you're playing a video game with first person observations there might be a lot in the video game that is very relevant maybe things you've seen in the past that are very important to remember in order to play the game effectively but that 
you can't see in the current observation. Another example of a setting where partial observability is extremely important is interaction with other agents. If you have a robot that is supposed to interact with humans, the mental state of the human is actually the unobserved part of the state: you might observe what they say or do, but you don't get to observe what is in their mind, what their desires and preferences are, what they want to get out of the interaction, and that's a very complicated instance of partial observability. Another example is dialogue. If your observations are textual strings, it could be a human interaction, or it could be that you're interacting with, say, a text-based game, or even a tool like a Linux terminal. In that case the history of the interaction really matters, and just the current phrase, like the last word you saw, doesn't convey all that much information by itself. So these are all examples of partially observed settings. Now, partially observed MDPs can be really, really weird. We can make them less weird with a little simplification, but if we approach them naively, there are a lot of things that happen in partially observed MDPs that simply cannot happen in fully observed MDPs. One example is information-gathering actions. Under partial observability it can be optimal to do things that don't lead to higher rewards by themselves, but that get you more information about where the rewarding things might be. For example, take traversing a maze. If you just treat it as a fully observed task, maybe your state is the position in the maze and you just have to run RL until you solve this one maze; then everything is perfectly fine and the optimal action is always to move toward the exit. But imagine that you are trying to solve a distribution of mazes, so you're trying to get a single policy that can solve any maze.
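The value of such an information-gathering action can be put in numbers with a tiny constructed example along the same lines (my own toy setup, not from the lecture; the four-door count and the 0.9 discount are arbitrary choices): a goal sits behind one of several doors, and a policy can either try doors in a random order, or spend its first step "peeking" to learn where the goal is and then go straight there.

```python
# Toy illustration of why information gathering can be optimal under
# partial observability. Assumed setup: the goal is behind one of 4
# doors, chosen uniformly at random each episode.
gamma = 0.9    # discount factor (assumed)
n_doors = 4

# Policy A (no peeking): try doors in a uniformly random order. The goal
# is found at step k (k = 0..n-1) with probability 1/n, earning gamma**k.
return_try = sum(gamma**k for k in range(n_doors)) / n_doors

# Policy B (peek first): spend step 0 gathering information (no reward),
# then walk straight to the goal and collect the reward at step 1.
return_peek = gamma**1

print(f"random trying: {return_try:.4f}")
print(f"peek first:    {return_peek:.4f}")
assert return_peek > return_try  # here, the gathering action pays off
```

Whether peeking wins depends on the numbers: with few doors or heavy discounting, blind trying can still come out ahead, which is why information gathering only *sometimes* appears in optimal POMDP policies.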
Now this is a partially observed problem if you don't get to see the entire maze right from the start, if you just get, say, a first-person view, because now the unobserved state is the configuration of the maze you're in. In that situation it might actually be optimal to peek over the top of the maze and try to observe where all the intersections are, even though that information-gathering action by itself doesn't get you closer to the exit. So information-gathering actions are something that emerges in optimal policies in POMDPs and never emerges in fully observed MDPs. Another weird property is that partially observed MDPs can lead to stochastic optimal policies, whereas in fully observed MDPs there always exists a deterministic policy that is optimal. That doesn't mean all optimal policies are deterministic (there could be an equally good policy that is stochastic), but in a fully observed MDP you will never get into a situation where only stochastic policies are optimal, whereas in a partially observed MDP that's actually possible. Here is a really simple example. Say you have a three-state MDP where you can be in state A, B, or C. The reward is always in the middle, so state B has a reward of +1, and your probability of starting in each state is 0.5 for A and 0.5 for C, so you're 50% likely to start on the left and 50% on the right. Now let's make this partially observed, where the observation contains no information. Since you get no observation at all, you basically just have to commit to an action, either left or right, and a deterministic policy would have to choose to either always go left or always go right. If it chooses to always go left, then if it starts in state C it'll eventually arrive at the good state B, but if it starts in state A it'll never arrive at B. If it commits to always going right, then if it starts in
state A it'll get the reward, but not if it starts in state C. Since the deterministic policy here has to be a function of the observation, and the observation carries no information, the only choice for a deterministic policy is to commit to always going left or always going right. But a policy that goes left or right with 50/50 probability will eventually get to B whether it starts in A or C. So this is an example where a stochastic policy is actually better than any deterministic policy. Okay, at this point we could ask: which of the RL algorithms we learned about before can actually handle partial observability correctly? We have to be careful with this question, because what does it mean to handle it correctly? We'll get to that in a second, but first let's go over the different methods, and then we'll discuss it. I'll discuss three classes of methods. First, policy gradients, which construct an estimator of the policy gradient using some kind of advantage estimate with the familiar grad log pi formula we saw before: could we simply replace the state with the observation, just feed the observation into the policy, and use exactly the same gradient estimator? That's a good question. Second, value-based methods: could we simply take, say, the Q-learning equation and replace s with o? Is that a valid thing to do? And third, model-based RL methods: take the simplest kind, where we train a model that predicts the next state given the current state and action and then plan through that model. Can we simply replace s with o in this case? Of course this is a bit of a trick question, because before we can even begin answering it for each of these three methods, we have to understand what "handle" actually means, what it means to handle partial observability correctly. Now, take a moment to think
about this. What would you want out of a method like this? Suppose it was valid to simply replace the state with the observation: what would you hope to get out of a method that works correctly, versus what could go wrong with a method that does not? In all of these cases we're trying to get a policy that looks at an observation rather than a state and produces an action, and what we would hope to get, if the method is working properly, is the best policy that is possible given that we only get to see the current observation. In the three-state example, that best policy is the one that goes left or right with 50/50 probability; this is the best reactive policy. Of course, you can do a lot better with a policy that is not reactive, a policy that has memory, but for now we're just asking whether we can get the best possible policy under the representational constraint we're under, namely that the policy only gets to look at the current observation. So it's the best policy in the class of memoryless reactive policies; we can't hope to do better than that unless we actually change the policy class. For now we're not changing the policy class, we're just varying the algorithm and trying to replace states with observations directly. So "handle" means: find the best policy in the class of memoryless policies. Okay, for this notion of "handle," take a moment to think about whether we would get the best policy in the class of memoryless policies by naively replacing states with observations, for policy gradients, value-based methods, and model-based RL. Let's start by talking about policy gradients. It's very tempting to say: well, if we want a policy that takes in the observation and outputs the action, let's use the same grad log pi equation and just naively swap in o for s. Is this correct? Interestingly enough, our derivation
of the policy gradient, going back to the beginning of the course, never actually assumed the Markov property. It assumed that the distribution factorizes, meaning that the chain rule of probability can be applied, but that's always true; it did not assume that the state going into the policy d-separates the future from the past. So it's actually totally okay to use this grad log pi equation. However, the advantage estimator takes a little care, because there are multiple ways to estimate the advantage in policy gradients, and some of them can get us in trouble whereas others are fine to use. The key point is that the advantage is a function of the state s_t; it is not necessarily a function of the observation o_t. The advantage does not depend on s_{t-1}, but if you don't have the state, then you might get in trouble. That's why it's totally okay to use r_t plus the next value minus the current value as your advantage estimator when V is a function approximator of the *state*: when you train an approximator for V as a function of state, you're leveraging the property that every time you see the state s, you expect to get the same value regardless of how you got to s. V-hat only needs to be a function of the present state, not of past states, because of the Markov property: the value is only a function of the current state, independent of how you got there. But that's not true for observations, so you can't simply swap out the argument to V-hat and replace s_t with o_t. It's not okay to train V-hat of o_t, because the value might depend on past observations, since the current state might depend on past observations. What this means is that if you're going to use policy gradients, the regular Monte Carlo estimate is fine: if you simply plug in the sum
of rewards, that is okay, because that derivation did not use the Markov property. But if you try to put in a value function estimator, that is no longer okay, because that value function estimator for the advantage is not a function of the observation; it's a function of the state, and the state depends on past observations. So that type of estimator is not okay. Now, as a pop quiz, something I suggest you think about for a second: before we started talking about value function estimators and baselines, we learned that we could take the rewards that multiply grad log pi and use the causality trick, multiplying grad log pi by the sum of rewards from t to the end rather than from 1 to the end. Is it okay to use this causality trick when you have partial observability? Take a moment to think about this. Okay, I'll give away the answer: this is actually totally fine, because the causality trick did not use the Markov property either. It only used the property that the future doesn't influence the past, and that holds even when you're acting under partial observability. So this is okay to do, and in fact it's possible to prove it by showing that the expected value of grad log pi multiplying rewards from past time steps averages out to zero, just like it does with states. What is not okay is to use V-hat as the advantage estimator. You might also wonder whether it's okay to use V-hat as a function of observations as a *baseline*. That's an interesting question, and it turns out that it is also okay, for the simple reason that we can use anything we want as a baseline and the estimator remains unbiased. Using a value function that only depends on the observation as a baseline might not reduce variance as much as we would like, but it's always unbiased, simply because all baselines are unbiased
regardless of what they are. Okay, so that's policy gradients; the short version is that they are okay to use, but you have to be careful with that advantage estimator. What about value-based methods? Can you simply take, for example, the Q-learning update rule and naively replace states with observations? Would that actually give you the best memoryless policy? The answer follows the same logic as the previous slide: for the same reason it was not okay to make the value function a function of only the observation, it is not okay to make the Q function a function of only the observation. Q-learning relies on the assumption that every time you visit the state s, regardless of how you got there, your value is going to be the same for all the different actions, which is absolutely true when you have Markovian states, but not for observations: if you observe a given observation o, your value for different actions might depend on the previous observations, that is, on how you got there, and that makes the Q-learning rule invalid. So value-based methods do not work without the Markov property; you simply cannot naively substitute the observation in place of the state. Of course, if the observation is essentially a Markovian state, as it is in most Atari games, this can be close enough and the results might be fine, but in general, the more partial observability you have, the worse this will work. A very obvious way to see this is to note that the way you extract a policy from the Q function is to take the action with the largest value, which always yields a deterministic policy. But we saw before that POMDPs can sometimes have only stochastic optimal policies, and since Q-learning never yields a stochastic policy, there's no way it could recover the optimal policy, for example in that three-state MDP where a stochastic strategy was optimal. Okay, what about model-based
RL methods? Could we simply substitute o in place of s in our predictive model and get the correct answer? It turns out the answer is very much no, and here's an example to illustrate why this is such a bad idea. Say we have the following environment: there are two doors, and we start off in a state where we approach one of the two doors and try it, and if it's locked, what we should do is try the other one. Which door is locked or unlocked is random, so part of the state is which door is locked. You don't get to observe that state; you just see that you're in front of the left door or in front of the right door. So it's a partially observed problem: you don't observe which door is locked until you try it. Now, there is an optimal strategy here, even a memoryless one: if you're in front of a door, try it first and then move on to the next one; or, if you have to be memoryless and you're not allowed to remember whether you tried the door, just randomly decide whether to switch to a different door or try the lock, like in the three-state example. So there is a way to solve this even if you don't get to remember what you did before, nor observe whether the door is locked. Say you have an observation for being at the left door, an observation for being at the right door, and an observation for when you pass through the door, and then you want to train a model. The model is going to predict the probability that you get to the "pass" observation, the one where you pass through the door, given that your current observation is the left door and your action is to open it. And say that on every episode each door is 50/50: a 50% chance the left is unlocked and a 50% chance the right is unlocked, and they're exclusive, so you always just flip a coin and unlock
either the left or the right door. So in half the episodes you'll pass and in half you won't, which means that if you try to estimate these probabilities, if you try to train the model, you'll get a probability of 0.5. But what's a good strategy if the probability of unlocking the door is 0.5? Well, if you had a 50% probability of opening the door each time you tried, which is what this model actually represents, then you could just get through the door by trying repeatedly: if it's 50% each time, independently, then if you keep trying the door, eventually you'll get through. But that's of course not how the world works. If you tried the left door and it didn't open, it's because the door is locked; no matter how many times you try, it'll remain locked. This Markovian model simply cannot represent that. It cannot represent the fact that a door you tried before will not unlock if you try it again, because in this model the probability of o' is only a function of the current observation and the current action; it does not depend on previous actions. So this Markovian model cannot be used with non-Markovian observations, because it leads to ridiculous conclusions like the idea that if you keep trying the locked door, eventually it will unlock. The problem is that the structure of the model does not match the structure of the environment: in reality, the probability that you pass is actually zero if the door didn't open before, but you can't represent that with this model, because the model doesn't take past observations and actions as input. Okay, so far we've talked about memoryless policies, but of course that's a pretty artificial restriction, and the door example hopefully illustrates this. In reality, if you tried the door, you'll remember that you tried it before and that it did not unlock, so you'll know to do something different in the future. So of course, in practice, if we
want to get good solutions to partially observed Markov decision processes, we really should employ non-Markovian policies that take observation histories as input, and there are a few ways we could approach this. One simple way is to use what is called a state space model. With state space models, what we're essentially doing is learning a Markovian state space given only observations, and we saw this before when we talked about variational inference. Suppose we train a sequence VAE, where the observables are sequences of observations and the latent variables are sequences of latent states. We have dynamics in the latent space, maybe with a zero-mean, unit-variance prior on the initial state, a learned transition probability that is Markovian, an observation probability that models the distribution of an observation given the current latent state, and an encoder that encodes a history of observations into the current latent state. Then these z's will actually represent a Markovian state of the environment, and this can work quite well. If you can learn the sequence VAE, just as we discussed in the variational inference lecture (if you don't remember how this works, go back to that lecture and recap it), then you can directly substitute z in place of s. You can't do the s thing, because you don't have the state; you can't do the observation thing, because that's incorrect; but you can use the z's as the state input into the Q function. That's valid, because we trained the z's to obey the Markov property: they have Markovian dynamics. Now, why might this by itself not be a good enough solution to all POMDPs? It's correct, it's valid, but why might it not be good enough? The reason is that in some cases actually training this predictive model is very hard, and in fact, in many cases it's not necessary to be able to
fully predict all observations in order to run RL. If you could predict all observations, for example as in the papers we discussed in the variational inference lecture, where you directly predict the images of those MuJoCo environments, then you can use the underlying latent states as a Markovian state space. But this is potentially a harder problem than solving the RL problem: generating those images, generating all those pixels, might be more difficult than recovering the optimal policy. So maybe we don't need good predictions to get high rewards. Here's what we could do instead. Observe that the state space model, when it runs inference, actually uses a history of observations to infer z: the encoder takes all the previous observations and figures out a distribution over the current z. That's how the sequence VAE worked. Well, if we're going to take in a history of observations anyway, note that z_t is a function of the observation history, so it can't contain more information than the observation history. So if we use the observation history itself as our state representation, it will contain just as much information as the z_t we were inferring with the sequence VAE. What if we just define our state that way? What if we say our state s_t is just all the observations o_1 through o_t? If it was good enough before to infer z_t from o_1 through o_t, that means o_1 through o_t contains all the information we need for a Markovian state, which suggests it should be a Markovian state itself. So does that work? "Does that work" basically amounts to asking: does a history obey the Markov property? The Markov property says that the state s_{t+1} is conditionally independent of the state s_{t-1} given the current state s_t, where now the current state s_t is all the observations up to t and the previous state s_{t-1} is all the observations up to t-1. What this shows us is that the
previous observations tell us nothing that we can't infer from s_t itself, because s_t contains s_{t-1} inside of it: the observations o_1 through o_{t-1} are contained inside the sequence o_1 through o_t. So if you already know s_t, meaning you know o_1 through o_t, then finding out s_{t-1}, meaning finding out all the previous observations, doesn't tell you anything new, because that sequence already contains all those previous observations. That's the argument for why history states do obey the Markov property: the sequence of observations up to time t d-separates the sequence up to time t+1 from the sequence up to time t-1, because the sequence up to t-1 is contained inside the sequence up to t. This means that if we apply Q-learning on these history states, so that our Q function is a function of all the observations o_1 through o_t, it actually will work. Of course, we need to design model architectures that can utilize history states. So how do we represent a Q function that takes an entire history of observations? Well, if we have a conventional Q function, like the ones you had for homework 3 for DQN, which takes in a single image, you could simply concatenate a whole bunch of images and feed them into the Q function. This is actually not as terrible an idea as it seems. Now, you can only use a fixed short history of observations; say you use the last four observations as input. That is not the full history, but it might in some cases be a good enough heuristic: if the previous four observations tell you most of what you need to know, it might be Markovian enough to work. But is this bad? It is kind of bad sometimes, because you can get pathological settings like that maze example, where you have to remember the whole maze after you've peeked over the top, and then remember it for the entire episode. In that case a short
history won't do; you really need to remember everything. So in the most general case, we need a sequence model that can take in a variable-length history of observations as part of our Q function and then output the Q-values at the end. This can be done with any sequence model, like an RNN, an LSTM, or a Transformer, in which case our Q function, our policy, or our dynamics model has to be represented with an RNN, LSTM, or Transformer. That's a perfectly reasonable thing to do, and you can train it directly in the obvious way, the same way you train sequence models anywhere else. Now, there is a practical detail we need to keep in mind, and it has to do with computational efficiency. Let's work through an example of a deep Q-learning algorithm with histories. Regular Q-learning would collect a transition, add it to the replay buffer, sample a batch from the replay buffer, update the Q function on this batch, and repeat. If you want to use history states, you would collect a transition, which now is a tuple (o_t, a_t, o_{t+1}), create the histories for time steps t and t+1 by concatenating all the previous observations, add these histories to your buffer, sample a batch of (history, action, next history), and update the Q function on this batch. This works; it is a valid way to do RL with history states, but it's super expensive, because the amount of information you're storing scales as horizon squared: with horizon capital T, you have T time steps, and each one contains up to T observations. So this is very expensive; you get a quadratic blowup in memory cost. It's still correct, just computationally and memory expensive. One thing you could do, say you're using an RNN or LSTM, where the neural network for Q inside of
it has some hidden state that is used to read in these observations, is store the RNN states themselves. Instead of storing entire histories, you could say: the observations o_1 and o_2 are fully summarized by the RNN hidden state h_2, and the observations o_1, o_2, o_3 are fully summarized by the RNN state h_3. So you can just reuse the RNN hidden state: every time you load up a history, you don't load the entire sequence; you load it starting from some intermediate point, and you restore the RNN hidden state at that point. You can do this with RNNs and LSTMs. I won't go into great detail about this method; its basic idea is essentially to use RNN states as though they were Markovian states of the system, which they are, except for a little caveat: the RNN states change as the RNN itself is updated. If you want to learn more, check out the paper "Recurrent Experience Replay in Distributed Reinforcement Learning." You can use this trick with RNNs and LSTMs, and it works very well for handling very long histories; it gets really great performance, for example, on Atari games. It's not clear how to do this with Transformers, because Transformers don't have a single Markovian hidden state, so to my knowledge no one has figured out how to do this with Transformers; but for RNNs and LSTMs it is a very effective strategy, so I encourage you to check it out if you want some practical details. To recap: POMDPs are weird. There are things that happen in POMDPs that never happen in MDPs, like stochastic optimal policies and information-gathering actions. Some methods just work, in the sense that they recover the optimal memoryless strategy, but the most efficient ones, like value-based methods, don't, because they require value functions. And even the methods that do work still yield memoryless policies, which might not be as good as the best
policy with memory. We can learn a Markovian state space with models like sequence VAEs, and that is a valid thing to do. We can also just use history states, which means using a sequence model to read in observation histories, and that can be an efficient way to do things, except that you then need sequence models to represent your value functions, policies, and dynamics models.
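The history-state Q function and the stored-hidden-state trick from the recap can be sketched in a few lines of NumPy. This is a minimal illustration with tiny, untrained random weights (the dimensions and the tanh recurrence are my own assumptions, not the lecture's architecture): the RNN hidden state summarizes o_1..o_t, Q-values are read out from it, and resuming from a cached hidden state gives the same Q-values as replaying the whole history.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, hid_dim, n_actions = 3, 8, 2

# Toy recurrent Q-function weights (a real agent would train these,
# e.g. with DQN-style targets over sampled histories).
W_h = rng.normal(scale=0.5, size=(hid_dim, hid_dim))
W_o = rng.normal(scale=0.5, size=(hid_dim, obs_dim))
W_q = rng.normal(scale=0.5, size=(n_actions, hid_dim))

def step(h, o):
    """One RNN step: fold the next observation into the hidden state."""
    return np.tanh(W_h @ h + W_o @ o)

def q_from_history(observations):
    """Q(o_1..o_t, .): run the RNN over the whole history, read out Q."""
    h = np.zeros(hid_dim)
    for o in observations:
        h = step(h, o)
    return W_q @ h, h

history = [rng.normal(size=obs_dim) for _ in range(5)]
q_full, h_full = q_from_history(history)          # replay the full history

# Stored-hidden-state trick: cache h after the first 4 observations and
# resume from it instead of replaying everything (as in the R2D2 paper).
_, h_cached = q_from_history(history[:4])
q_resumed = W_q @ step(h_cached, history[4])

assert np.allclose(q_full, q_resumed)  # cached resume matches full replay
print(q_full.shape)  # one Q-value per action
```

The caveat from the lecture applies: once the RNN weights are being updated during training, cached hidden states become stale, which is exactly the issue the recurrent-replay paper addresses.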
CS_285_Deep_RL_2023 | CS_285_Lecture_1_Introduction_Part_2.txt | Okay, let's talk about what we'll cover in the class. This course goes through a variety of deep reinforcement learning methods, construed very broadly, but we'll start with some basics. We'll start by talking about how we can take a journey from supervised learning methods to decision-making methods, provide some definitions, and generally come to understand the reinforcement learning problem. Then we'll have a unit on model-free reinforcement learning algorithms, where we'll cover Q-learning, policy gradient, and actor-critic methods, and you'll have homeworks where you implement each of these. Then we'll have another unit on model-based algorithms: we'll talk about planning, optimal control, sequence models, images, and things like that. And then we'll have a variety of more advanced topics. We're going to cover exploration algorithms; we're going to cover algorithms for offline reinforcement learning, which are methods that can leverage data together with reinforcement learning; we'll talk about inverse reinforcement learning, which deals with inferring objective functions from behavior, and have some discussion there about the relationship between reinforcement learning methods and things like probabilistic inference; and then we'll have a few advanced topics like meta-learning and transfer learning, maybe hierarchical RL, and a set of research talks and invited lectures. So that's the overall overview of the class. You're going to have five assignments: imitation learning, policy gradients, Q-learning and actor-critic algorithms, model-based RL, and the last one on offline RL. And there will be a final project. The final project is a research-level project of your choice; you can form a group of two to three students, and you're more than welcome to start on this project early. Students every year have some questions about our expectations for the scope of this project.
Roughly speaking, you should think about it as roughly the level of a paper that you might submit, for example, to a workshop. If you're not sure about the scope of your project, definitely come to office hours to talk to the TAs or to myself. We will have multiple rounds of feedback on your project: there will be a project proposal deadline and a project milestone report. These are really meant for you; we strongly encourage you to write up your plan, describe potential concerns about it, and so on, and we'll give you feedback on those. The proposal and the milestone are not graded very strictly; they're really meant as a way for you to get feedback on your project plan before the final report at the end of the semester. You'll be graded 50% on the homeworks, 40% on the project, and 10% on quizzes after every lecture, and you'll have a total of five late days for your homeworks. Don't exceed those five late days total; if you do, then we unfortunately cannot give you credit for that homework. You also have a little bit of homework for today: make sure that you're signed up for Ed (UC Berkeley CS285); all of you who are officially enrolled in the course should have received an invitation. We strongly encourage you to start forming project groups, unless you want to work alone, which is fine, and take the lecture one quiz. The lecture one quiz is posted on Gradescope; it's very much a practice quiz, not a real quiz, and it's really there so you can get familiar with the Gradescope interface. However, what I want to focus on mainly in today's lecture is discussing why we should study reinforcement learning, what it is, and a little bit of context for why I myself like to teach this class. But let's start with some basics. What is reinforcement learning? Well, reinforcement learning is really two things: it's a mathematical formalism
for learning-based decision making, and it's also an approach for learning decision making and control from experience. and it's important to keep in mind these are somewhat separate, because we could imagine taking the formalism and then applying all sorts of different methods to it. so it's important not to confuse reinforcement learning the problem with reinforcement learning the solution. okay, how does this differ from other machine learning topics? well, the kind of machine learning that most of you are probably familiar with is supervised learning. supervised learning is fairly straightforward to define: you have a data set of inputs and outputs, we refer to them typically as x and y, and you want to learn to predict y from x. so you want to learn some kind of function f of x which outputs values y that are close to the y labels in the data set. so for example, f might be represented by a deep neural network that you would train via classification or regression to match the labels y. and while the basic formulation of supervised machine learning is very straightforward, supervised machine learning methods make a number of assumptions that we often don't even think about because they're so natural, but that are important to bring up if we're going to discuss how this differs from reinforcement learning. supervised learning typically assumes what is called independent and identically distributed data. this is such an obvious assumption in some ways, especially for someone who has studied machine learning, that we often don't make it explicit, but what it means is that all of these x, y pairs in your data set are independent of one another, in the sense that the label for one x doesn't influence the label for another x, and they're distributed identically in the sense that the true function that produced the label y from x is the same for all the samples. it's almost an obvious statement, but it's something that is important to keep in mind. supervised learning also assumes that our data set is
labeled, in the sense that every x we've seen in D also has an accompanying y, and that y is the true label for the x. this is very natural if you're doing things like image classification with labels obtained from humans, but remember how we discussed in the grasping example, this can actually be pretty unnatural: if you want a robot to learn how to grasp objects, it's actually a very strong assumption to assume that you're given a set of images with ground truth optimal grasp locations. reinforcement learning does not assume that the data is independent and identically distributed, in the sense that previous outputs influence future inputs. things are arranged in a temporal sequence, and the past influences the future. typically the ground truth answer is not known; it's only known how good a particular outcome was, whether it was a failure or a success, or more generally what its reward value was. so in reinforcement learning you might collect data, but you can't simply copy that data; that doesn't actually lead to success. the data might tell you which things were successful and which things failed, although even those labels are difficult to interpret properly, because if you have a sequence of events that led to a failure, you don't know which event, which particular choice in that sequence, was the one that precipitated the failure. this is not unlike human decision making. perhaps you got a really bad grade at the end of a course. well, it wasn't the fact that you looked up your grade on CalCentral that caused you to get the bad grade; it was something you did earlier in the class, perhaps the fact that you did poorly on an exam. at the time, perhaps you didn't realize that this would lead you to fail the course. so this is very much an issue that we have in reinforcement learning, referred to as credit assignment, where the decision that actually results in a bad outcome or a good outcome might not itself be labeled with a high or low reward; the reward might only happen later. so we need
to take this data, which is not labeled with ground truth optimal outputs and might involve these delayed rewards, run reinforcement learning on it, and hopefully get a behavior that is better than the behavior we saw before. so that's really the challenge in reinforcement learning. so let's try to make this a little bit more precise. in supervised learning you have an input x and you have an output y, and you have a data set that consists of x, y pairs. the goal is to learn a function that takes an x and approximates y, and typically this function has some parameters which we refer to as theta; these might be, for example, the weights in a neural network. in reinforcement learning we have a kind of cyclical online learning procedure where an agent interacts with the world. the agent chooses actions a_t at every point in time, and the world responds with the resulting state s_{t+1} and a reward signal, and the reward signal simply indicates how good that state is, but it doesn't necessarily tell you if the action that you just took was a good or bad action. perhaps you got lucky and landed in a good state, or perhaps you did something really good earlier that caused you to get into a good state. now the input to our agent is going to be the state s_t at each time step, so this is kind of the analog of x. the output is a_t at each time step. the data, which is collected by the agent itself, classically consists of sequences of states, actions, and rewards; rewards are numbers, scalar values. and whereas in supervised learning the data is given to you, you don't have to worry about who gave you the data, it's just provided to you as a set of x, y tuples, in reinforcement learning you have to pick your own actions and collect your own data. so not only do you have to worry about the fact that the actions in your data set might not be the optimal actions, you have to also actually decide how that data will be collected. and your goal is to learn a policy pi_theta which maps states s to actions a, and just like f, pi has
parameters theta, so those might again be the weights in a neural network. and a good policy is one that maximizes the cumulative total reward, so not just the reward at the current point in time, but the total reward the agent receives. so that involves strategic reasoning: maybe you might do something that might seem unrewarding now to attain higher rewards later. so let's talk about some examples of how problems could be cast in the terminology of reinforcement learning. let's say that you'd like to train a dog to perform some trick. so in this case the actions might be the contractions of the dog's muscles, the observations might be what the dog perceives through its senses of sight and smell, the reward might be the treat that it gets, and the dog will then learn to do whatever maximizes that reward, which might be the trick that you want it to perform, because it's rewarded with food when the dog performs the trick successfully. okay, here's another example. maybe you have a robot. its actions might be the motor currents or torques, some kind of actuation command sent to its motors; its observations might be the readings from its sensors, like cameras; and its reward might be some measure of task success. maybe this robot needs to run as fast as possible to reach a destination, so its reward function might be the running speed, or it might be whether it reached the destination or not; maybe it receives a plus one when it reaches the destination successfully and a minus one otherwise. here's another problem. let's say you want to manage inventory, routing goods between different warehouses in order to maintain stock levels. perhaps the actions are which inventory to purchase, the observations are the current inventory levels, and the reward might be the profit you make; perhaps you have to pay if you want to store inventory for a long time, so your profits will be lower. so you can see that this formulation is very general; many different problems can be cast into the
framework of reinforcement learning. we'll of course make all this a lot more precise later, so this is a very high level introduction; don't worry yet if the particular details aren't very clear, we'll make this a lot more precise in later lectures. but for now, let me just give you some examples of the kinds of things that reinforcement learning methods can do. one of the things that reinforcement learning is very good at is learning policies for physically complex tasks, tasks where it might be very difficult for a person to describe precisely how the task should be performed, but much easier to define the reward, like in this case the reward is that the nail should be hammered in, and the reinforcement learning algorithm figures out how to control this robotic hand to move the hammer to hammer in the nail. here's another complex physical task. here this quadrupedal robot needs to be able to jump over different obstacles. now, coding up manually a skill for jumping like this is very tough, but reinforcement learning can learn the actuations that will allow the robot to jump in different locations, to various distances, and so on. it can even perform more physically complex tasks. so here, in the next clip, the quadrupedal robot (this first one is a baseline method, so don't worry about it) needs to figure out how to stand on its hind legs and balance, and that's also very difficult to do manually, but with an appropriate reinforcement learning method that's actually possible. in fact, reinforcement learning has been applied very, very widely to robotic problems. here's an even more recent work, and this is from ETH Zurich, showing a robot using reinforcement learning with a combination of simulated optimization to learn various agile skills, and you can see that it might clamber onto obstacles and things like that. now the other thing that reinforcement learning is great at is coming up with unexpected solutions. I alluded to this before with the AlphaGo example. here's another example, which you'll
actually implement in your homeworks, where Q-learning algorithms learn to play Atari and discover the strategy that if you bounce the ball up over the bricks, then it will bounce around and you'll get lots of points. reinforcement learning can also be applied at larger scale in the real world. this is a project that was done at a company called Everyday Robots, which was an Alphabet company, where the robots learn to sort trash. so the idea is that if people put trash into the recyclables that should actually go into the compost, then the robot can come and sort it. the robots here learn in the real world, both in these classroom environments where they can practice and in actual office buildings, and these vision-based skills, which are kind of similar in spirit to the ones that I mentioned in the beginning, can then pick up and move objects in real-world office buildings. so that's pretty neat, that you can actually practice these things: you can practice them on the job, you can practice them in the real world. and it's of course not just for games and robots. I really like this next example. this is work that was done by Cathy Wu, who's now a professor at MIT and was previously a PhD student here at UC Berkeley, and what Cathy was working on is reinforcement learning algorithms for controlling traffic. this is a kind of toy example where these cars drive in a circle, and what tends to happen, even in a simple circular environment like this, if you have a very reasonable, very accurate model of how human drivers behave, is that you'll actually get traffic jams forming spontaneously. so cars will kind of bunch up, and when they bunch up like this they'll spontaneously form traffic jams, even though they drive in a circle. so what Cathy then did is she optimized the reinforcement learning policy, which will be shown next, for the car shown in red, to not optimize its own speed but to optimize the speed of the entire circle, and you can see that what this car in red is going to do is
going to actually slow down and wait for everybody to resolve the traffic jam, and by going a little bit slower it'll actually avoid the formation of traffic jams in the entire circle. Cathy also experimented with this in other settings. this is a figure-eight kind of intersection, and as you might expect, cars will bunch up at the center section and cause delays. so if there's an autonomous car that is trying to optimize the driving speed of all the cars, the autonomous car will actually slow down a little bit and regulate the traffic so that everyone passes through the intersection at exactly the perfect time. now this example maybe is a little bit synthetic, but there's considerable follow-up work to this showing that in fact autonomous regulation of traffic with reinforcement learning can be quite a powerful tool. reinforcement learning has also been used very widely with language models. many of you are probably familiar with the advances in recent language models, with things like ChatGPT and many other systems like Anthropic's Claude or Google's Bard, which use large amounts of data to train models that will fulfill user requests. and this is an example on the right where someone asks ChatGPT to explain how RL with human feedback works for language models, and it produces some kind of explanation. now by themselves, large language models trained on lots of internet data can solve very sophisticated problems, but it's quite difficult to persuade them to do this, because these models are basically trying to complete text based on what they learned from internet data, so you have to prompt them in a way that kind of indexes into the right context. reinforcement learning can be used to make this a lot easier by essentially training these models based on human scores, so instead of just asking them to provide the kind of completions that are most likely from internet data, they can actually be trained to respond to queries in ways that human readers find to be desirable, and
reinforcement learning is actually a very important part of this. reinforcement learning has also been used with image generation. here's an example with Stable Diffusion 1.4: if you ask it to generate a picture of a dolphin riding a bike, it actually generates a picture that is not very good. for this, what you can do is take this image and give it to a captioning model, in this case LLaVA, to produce a description of the image, and then use RL where the reward function is given by the similarity between the description from LLaVA and the original prompt. so when LLaVA looks at this picture it might say, oh, this is a picture of a dolphin above the water, which is not very similar to "a dolphin riding a bike," so it receives a bad reward for that. if we then optimize the image generation model with RL to maximize this reward, it'll gradually make the image more appropriate to the prompt. so now there's both a dolphin and a bicycle, although the dolphin is not riding the bicycle just yet; with a few more iterations, now there's a dolphin-like creature that is in fact on a bicycle, and with some more iterations the creature becomes much more clearly a dolphin (apparently putting some waves in the background makes it extra dolphin-like), and then eventually there's a full-fledged picture of a dolphin riding a bicycle. so reinforcement learning can be used to optimize image generation models. reinforcement learning can also be used for other things. this is an example on chip design, where the actions correspond to placement of chip parts in the layout, and the reward has to do with various chip design parameters, like the cost or the congestion or latency of the chip. so reinforcement learning can actually be applied to chip design as well. so I'll pause here, and in the next section I'll discuss why we should study deep RL today
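the agent-environment loop described above (states s_t, actions a_t, rewards r_t, a policy pi_theta, and the goal of maximizing cumulative reward) can be sketched in a few lines of code. the sketch below casts the inventory management example into that loop; the environment dynamics, the re-order policy, and all the numbers here are hypothetical, made up purely for illustration.

```python
import random

# Toy inventory environment (hypothetical numbers, for illustration only):
# state = current stock level, action = units to order,
# reward = profit from sales minus a holding cost on stored stock.

def step(state, action):
    demand = random.randint(0, 3)            # random customer demand
    stock = min(state + action, 10)          # order arrives; storage capped at 10
    sold = min(stock, demand)
    next_state = stock - sold
    reward = 1.0 * sold - 0.1 * stock        # profit minus holding cost
    return next_state, reward

def policy(state):
    # a trivial hand-coded policy standing in for pi_theta:
    # re-order up to a target stock level of 4
    return max(0, 4 - state)

def collect_trajectory(T=20, seed=0):
    # the cyclical loop: agent picks a_t, world returns s_{t+1} and r_t
    random.seed(seed)
    s, traj = 0, []
    for _ in range(T):
        a = policy(s)
        s_next, r = step(s, a)
        traj.append((s, a, r))               # data: states, actions, rewards
        s = s_next
    return traj

traj = collect_trajectory()
total_reward = sum(r for _, _, r in traj)    # the quantity RL seeks to maximize
```

an RL algorithm would replace the hand-coded policy with a parameterized one and adjust its parameters to increase total_reward.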
CS_285_Deep_RL_2023 | CS_285_Lecture_5_Part_4.txt | in the next portion of today's lecture we're going to talk about how we can extend policy gradients from the on-policy setting into the off-policy setting. so the first part i want to cover is why policy gradients are considered an on-policy algorithm. policy gradients are the classical example of an on-policy algorithm, because they require generating new samples each time you modify the policy. the reason this is an issue is if you look at the form of the policy gradient: it's an expected value under p theta of tau of grad log p of tau times r of tau, and it's really the fact that the expected value is taken under p theta of tau that is the problem. the way that we calculate this expectation in policy gradients is by sampling trajectories using the latest policy, but since the derivative evaluated at parameter vector theta requires samples sampled according to theta, we have to throw out our samples each time we change theta, which means the policy gradient is an on-policy algorithm: each update step requires fresh samples. we can't retain data from other policies, or even from our own previous policies, when using policy gradients. so in the REINFORCE algorithm we have step one, which is to sample from our policy, step two, which is to evaluate the gradient, and step three, which is to take a step of gradient ascent, and we really cannot skip step one. so we can't use samples from past policies, we can't use samples obtained from other sources like demonstrations; we have to generate fresh samples from our own policy every single time. now this is a bit of a problem when we want to do deep reinforcement learning, because neural networks change only a little bit with each gradient step. because neural networks are highly non-linear, we can't take really huge gradient steps, which means that in practice we usually end up taking a large number of small gradient steps, but each of those small gradient steps requires generating new samples by running
your policy in your system, which might involve actually running your policy in the real world or an expensive simulator, so this can make policy gradients very costly when the cost of generating samples is high, either computational cost or practical monetary cost. so on-policy learning can be very inefficient in this way. i should of course mention that on the flip side, if generating samples is very cheap, then policy gradient algorithms can be a great choice, because they're quite simple, fairly straightforward to implement, and tend to work fairly well. but if we do want to use off-policy samples, we can modify policy gradients using something called importance sampling, and that's what we're going to cover next. so what if we don't have samples from p theta of tau? what if we instead have samples from some other distribution that i'm going to call p bar of tau? now p bar of tau could be a previous policy, so you could be trying to reuse old samples that you've generated, or it could even be some other distribution, like for example demonstrations from a person. all right, so the trick that we're going to use to modify the policy gradient to accommodate this case is something called importance sampling. importance sampling is a general technique for evaluating an expectation under one distribution when you only have samples from a different distribution. so here's how we can write out importance sampling in general. let's say that we'd like to calculate the expected value of some function f(x) under some distribution p of x. we know that the expected value of f(x) is the integral over x of p of x times f of x, and if we have access only to some other distribution q of x, you can multiply the quantity inside the integral by q of x over q of x, right, because you know that q of x over q of x is just equal to 1, and you can always multiply by 1 without changing the value. and now we can rearrange these terms a little bit: we can basically say that, well, q of x over q of x times p of x is
equal to q of x times p of x over q of x, right, where we just shifted the term from one numerator to the other, and now this can be written as an expected value under q of x. so you can say this is equal to the expected value under q of x of p of x over q of x times f of x. there's no approximation here; this is all completely exact, meaning that importance sampling is unbiased. of course, the variance of this estimator could change, but in expectation it's going to stay the same. so now we're going to apply the same trick to evaluate the policy gradient, where the q here is going to be p bar and the p is going to be p theta. so here is what the importance sampled version of the RL objective would look like: the importance sampled version of the RL objective would be the expected value under some other distribution p bar of tau of p theta of tau divided by p bar of tau times r of tau. so that's the RL objective, and this is our importance weight. now if we'd like to understand what the importance weight is equal to, well, we can use our identity that describes the trajectory distribution using the chain rule, and we can substitute that in for p theta of tau and p bar of tau. now we know that both p theta of tau and p bar of tau have the same initial state distribution p of s1 and the same transition probabilities p of st plus 1 given st and at; they only differ by their policy, because they both operate in the same MDP. our distribution has the policy pi theta; the sampling distribution has the policy pi bar. so that means when we take the ratio of the trajectory distributions, the initial state terms and the transition terms cancel, and we're just left with a ratio of the products of the policy probabilities, and this is very convenient, because in general we don't know p of s1 or p of st plus one given st and at, but we do know the policy probabilities, so this allows us to actually evaluate these importance weights. okay, so now let's derive the policy gradient
with importance sampling, where we're again going to use our convenient identity. so let's say that we have samples from p theta of tau, and we want to estimate the value of some new parameter vector theta prime. the objective j theta prime will be equal to the expected value under p theta of tau of the importance weight multiplied by the reward, so p theta prime of tau divided by p theta of tau times r of tau. now notice that here the only part of this objective that actually depends on theta prime, that depends on our new parameters, is the numerator of the importance weight, because now our samples are coming from a different policy, p theta of tau. so that means that when i want to calculate the derivative with respect to theta prime of j theta prime, all i have to worry about is this term in the numerator. so this is the derivative: i've just replaced the only term that depends on theta prime with its derivative, and then i'm going to substitute my useful identity back in. so the identity tells me that grad theta prime p theta prime of tau is equal to p theta prime of tau times grad log p theta prime of tau, so i substitute that back in and i get this equation. now when you look at this equation, you'll probably immediately recognize it as exactly the equation that we get if we took the policy gradient and just stuck in an importance weight, and in fact you could derive the importance sampled policy gradient that way also; i wanted to derive it in this other way on the slide just so that you could see the equivalence. interestingly enough, if you estimate this gradient locally, so if you use this importance sampling derivation to evaluate the gradient at theta equals theta prime, then the importance weight comes out equal to one and you recover the original policy gradient. so this derivation actually gives you a different way to derive the same policy gradient that we had before. but in the off-policy setting, theta prime is not equal to theta, and in that case we
have to fall back on our importance weights, which we derived before as simply the ratio of the products of the policy probabilities. and if we substitute in all three of the terms in this policy gradient: the importance weight is a product over all time steps of pi theta prime over pi theta, the grad log pi part is a sum over all time steps of grad theta prime log pi theta prime, and the reward is the sum over all time steps of the reward. so we have three terms inside of our importance sampled off-policy policy gradient estimator, and we just multiply those three terms together. now what about causality, the fact that we don't need to consider the effect of current actions on past rewards? well, we can work that in, in which case we again distribute the rewards and the importance weights into the sum over grad log pis, and we get a sum from t equals one to capital T of grad log pi times the product of all the importance weights in the past. you can think about it intuitively as the probability that you would have arrived at the state using your new policy, times the sum of rewards weighted by the importance weights in the future. so future actions don't affect the current weight; that's fine. the trouble is that this last part can be problematic: it can be exponentially large. it turns out that if we ignore this last part, if we ignore the weights on the rewards, we recover something called a policy iteration algorithm, and you can actually prove that a policy iteration algorithm will still improve your policy. it's no longer the gradient, but it's a well-defined way to provide guaranteed improvement to your policy. so don't worry about this yet; we'll cover policy iteration in much more detail in a subsequent lecture. for now, just take my word for it that if you ignore the importance weights that multiply the rewards, if you basically ignore this last term, you still get a procedure that will
improve your policy. that is not true for this first term, the product from t prime equals one to little t of the probability ratios. so this first term is trouble. the reason this first term is trouble is because it's exponential in capital T. let's say that the importance weights are all less than one. that's a pretty reasonable assumption, because you sampled your actions according to pi theta, so your actions probably have a higher probability under pi theta than they do under pi theta prime, so there's a good chance that your importance weights will be less than one. if you multiply together many, many numbers, each of which is less than one, then their product will go to zero exponentially fast, and that's a really big problem. it essentially means that your variance will go to infinity exponentially fast, and policy gradients already have high variance, and now you're going to blow up the variance even more by multiplying by these high variance importance weights; that's a really bad idea. now in order to understand the role that this term plays, we can rewrite our objective a little bit differently, and the reason we're doing all this is because we really just want an excuse to delete that term. so, to try to find that excuse, let's write our objective a little bit differently. so here's our on-policy policy gradient: it's a sum over all of our samples, a sum over all of our time steps, of grad log pi times this reward-to-go, this q hat. the q hat is just the sum from t prime equals t to capital T of the rewards, but i'll write it as q-hat because otherwise the notation is going to get pretty hairy. now the way that we sampled our s_it and a_it is by actually rolling out our policy in the environment, but you can equivalently think of it as sampling state-action pairs from the state-action marginal at time step t, right, because when you sample entire trajectories, the corresponding states and actions at every time step look indistinguishable from
what you would have gotten if you sampled from the state-action marginal at that time step. so you could write a different off-policy policy gradient where instead of importance sampling over entire trajectories, you importance sample over state-action marginals. so now your importance weight is the probability under theta prime of s_it comma a_it divided by the probability under theta of s_it comma a_it. this is not by itself very useful, because actually calculating the probabilities for these marginals is impossible without knowledge of the initial state distribution and the transition probabilities, but writing it out in this way allows us to perform a little trick. using the chain rule, we can split up this marginal, both in the numerator and the denominator, into the product of two terms: a state marginal pi theta prime of s_it and an action conditional pi theta prime of a_it given s_it. and then we could imagine what happens if we just ignore the state marginals. if we just ignore the ratio of the state probabilities, well, then we get an equation for the importance sampled policy gradient that is very similar to the one i have at the top of the slide, only the product neglects all of the ratios except at t prime equals t. so if you don't want your importance weights to be exponential in capital T, you could try to ignore the ratio of the state marginal probabilities. so you're still accounting for the ratio of action probabilities but ignoring the state marginal probabilities. this does not in general give you the correct policy gradient; however, we'll see later on in the course, when we discuss advanced policy gradients, that ignoring the state marginal probabilities is reasonable in the sense that it gives you bounded error in the case where theta prime is not too different from theta. and this simple insight is actually very important for deriving practical importance sampled policy gradient algorithms that don't suffer from an exponential increase in their variance, right,
because when you multiply together importance weights over all time steps from t prime equals one to t, you get an exponential increase in variance, because your weight is exponentially contracted toward zero, but if you ignore the state marginal ratio, then you only get the weight at the time step t, which means that the variance does not grow exponentially. so we'll learn later on, when we discuss advanced policy gradients, why ignoring this part is reasonable; for now i'll just tell you that it's a reasonable choice if theta is close to theta prime, meaning that your policy is changing only a little bit.
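the two facts this section leans on, that importance sampling E_p[f(x)] = E_q[(p(x)/q(x)) f(x)] is exact, and that a trajectory-level weight (a product of per-step policy ratios, each typically below one) collapses toward zero exponentially in T, can both be checked numerically. the sketch below is illustrative only; the distributions, the ratio range, and the horizon of 100 steps are made-up stand-ins, not the actual policies from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# (1) Importance sampling identity: estimate E_p[x^2] for p = N(1, 1)
# using only samples from q = N(0, 1), reweighted by p(x)/q(x).
def gauss_pdf(x, mu):
    # density of N(mu, 1)
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

xs = rng.normal(0.0, 1.0, size=200_000)        # samples from q
w = gauss_pdf(xs, 1.0) / gauss_pdf(xs, 0.0)    # importance weights p/q
is_estimate = np.mean(w * xs ** 2)             # should approach E_p[x^2] = 2

# (2) Exponential collapse of trajectory-level weights: multiply together
# T = 100 hypothetical per-step policy ratios, each a bit below one
# (as in the lecture, actions sampled from pi_theta tend to be more
# likely under pi_theta than under pi_theta_prime).
ratios = rng.uniform(0.8, 1.0, size=100)       # made-up per-step ratios
trajectory_weight = np.prod(ratios)            # shrinks exponentially in T
per_step_weight = ratios[-1]                   # what survives if the state
                                               # marginal ratio is dropped
```

here is_estimate lands near 2 even though no sample came from p, while trajectory_weight is vanishingly small compared with the single per-step ratio, which is the variance blow-up the lecture warns about.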
CS_285_Deep_RL_2023 | CS_285_Lecture_11_Part_5.txt | all right, in the last portion of today's lecture we're going to shift gears a little bit and talk about how we can do model-based reinforcement learning with images. so what happens with complex image observations, things like images in Atari or pictures from a robot's camera performing some manipulation task? well, the algorithms that we talked about before all have some form of model that predicts the next state from the previous state and action, and then plans over these states. what is hard about doing this with images? well, first, images have very high dimensionality, which can make prediction difficult. images also have a lot of redundancy, so the different pixels in the image for the Atari game are very similar to each other, and that means that the state contains a lot of redundant information. image-based tasks also tend to have partial observability, so if you observe one frame in an Atari game, you might not know how fast the ball is moving in Breakout, for instance, or in which direction. so when we're dealing with images, we typically deal with a POMDP model, and this is the graphical model illustration for a POMDP: it has a distribution over next states given previous states and actions, and distributions over observations given states, and typically when we're doing RL with images, we know the observations and actions, but we do not know the states. so we would like to learn the transition dynamics in state space, p of s t plus one given s t comma a t, but we don't even know what s is. so perhaps we could separately learn p of o t given s t and p of s t plus 1 given s t comma a t, and that could be quite nice, because p of o t given s t handles all the high dimensional stuff but doesn't have to deal with the complexity of temporal dynamics, whereas p of s t plus one given s t comma a t has to deal with the dynamics but doesn't have to deal with the high dimensional stuff, and maybe this separation of roles can
give us some viable model based rl algorithms for image observations i'll discuss such algorithms briefly and somewhat informally but then at the end i'll also talk about how maybe some of this is not actually true maybe it is not too bad to actually learn dynamics directly on images so that'll come at the end but first let's talk about these kind of state-space models so these are sometimes referred to as latent space or latent state models in general they're state-space models so here we're going to learn two objects we're going to learn a p of ot given st basically how does our state map to an image that's the observation model and a p of st plus one given stat which is our dynamics model in our unobserved state space we will typically also need to learn a reward model p of rt given s-t-a-t because our reward depends on the state and since we don't know what the state is we don't know how the reward depends on it so we typically also add a reward note to this and learn a reward model all right so how should we train one of these things well if we had a standard fully observed model we would train it with maximum likelihood we would basically take our data set of n different transitions and for each transition we would maximize the log probability of st plus one comma i given sti and ati if we have a latent space model now we have a p of o t given st and a p of s t plus one given s t a t so we have to maximize the log probabilities of both of those and potentially also the reward model if we want to add that in if we knew the states then this would be easy then we would just add together log p phi st plus 1 given stat to log p phi o t given st the problem is that we don't know what s is so we have to use an expected log likelihood objective where the expectation is taken over the distribution over the unknown states in our training trajectories those of you that are familiar with things like hidden markov models it's basically the same idea so we would need some 
sort of algorithm that can compute a posterior distribution over states given our images and then estimate this expected log likelihood using states sampled from that approximate posterior so the expectation is taken with respect to p of s t comma s t plus 1 given o 1 through t and a 1 through t at every time okay so how can we actually do this well one thing we could do is we can actually learn an approximate posterior and i'm going to say this approximate posterior is parameterized by psi and i'm going to denote it q psi and the approximate posterior will be another neural network that gives us a distribution over st given the observations and actions seen so far and there are a few choices that you can make so we call this approximate posterior the encoder and you can learn a variety of different kinds of posteriors which one you pick will have some effect on how well your algorithm works so you could learn kind of a full smoothing posterior you could learn a neural net that gives you q psi of st comma st plus 1 given o1 through capital t comma a1 through capital t so this posterior gives you exactly the quantity you want it's the most powerful posterior you could ask for but it's also the hardest to learn on the other extreme you could imagine a very simple posterior that just tries to guess the current state given the current observation for example if the partial observability effects are minimal and this is the easiest posterior to train but also the worst in the sense that using it will be the furthest away from the true posterior that you want which is p of st comma s t plus 1 given o 1 through t comma a 1 through t so you could ask for a full smoothing posterior or a single step encoder the full smoothing posterior is the most accurate in the sense that it most accurately represents your uncertainty about the states but it's also by far the most complicated to train the single step encoder is by far the simplest but provides the least accurate posterior in
general you would want a more accurate posterior in situations that are more partially observed so if you believe that your problem is such where the state can be pretty much entirely guessed from the current observation then a single step posterior is a really good choice whereas if you have a heavily partially observed setting then you want something closer to a full smoothing posterior and there are of course a lot of in-between choices like estimating st given o1 through t comma a1 through t now in terms of how to actually train these posteriors this requires an understanding of something called variational inference which we'll cover in more detail next week i'll gloss over how to train these probabilistic encoders in this lecture and i'll instead focus on a very simple limiting case of the single step encoder so we're going to talk about the single step encoder and we're going to talk about a very simple special case of the single step encoder so if we were to really do this right then for every time step we would sample st from q of st given ot and st plus one from q of s t plus one given ot plus one and then using those samples maximize log p of s t plus one given s t a t and log p of ot given st but a very simple special case of this if you believe that your problem is almost fully observed is to actually use a deterministic encoder so instead of outputting a distribution over st given ot we would just output a single st for our current ot the stochastic case requires variational inference which i'll discuss next week but the deterministic case is quite a bit simpler so the deterministic case can be thought of as a delta function centered at some deterministic encoding g psi of ot so that means that st is equal to g psi of ot and if we use this deterministic encoder then we can simply substitute that in everywhere where we see an s in the original objective and we can remove the expectation so now our objective is to maximize with respect to phi and psi
the sum over all of our trajectories of the sum over all of our time steps of log p of g of o t plus 1 given g of o t comma a t plus log p of o t given g of o t so the second term can be thought of as a kind of autoencoder it just says that if you encode ot you should be able to reconstruct it back out again and the first term enforces that the encoded states should obey the learned dynamics and then you could optimize both phi and psi jointly by backpropagating through this whole thing if the dynamics is stochastic then you want to use something called the reparameterization trick to make this possible to solve with gradient descent which i'll cover next week but you could also use deterministic dynamics in this case and have a fully deterministic state space model of this sort so the short version is write down this objective and then optimize it with backpropagation and gradient descent so everything is differentiable and you could train everything with backprop all right so take a minute to think about this formulation look over the slide and think about whether everything here makes sense to you if you have a question about what's going on here it would be a very good idea to write a comment or question in the comments and then we could discuss this in class but to briefly summarize we talked about how if you want to learn stochastic state-space models you need to use an expected log likelihood instead of a standard log likelihood where the expectation is taken with respect to an encoder which represents the posterior there are many ways to approximate the posterior but the absolute simplest one is to use an encoder from observations to states and make it a deterministic encoder in which case the expectation actually goes away and you can directly substitute the encoded observation in place of states in your dynamics and observation model objectives and of course the reward model would work the same way so if we had a reward model we would also
add a log p of rt given g of ot in here okay so there's our state space model you can think of g of ot as an additional virtual edge that maps from o to s and we also have the reward model so we can add that in there and then we have latent space dynamics image reconstruction and a latent space reward model there are many practical methods for using a stochastic encoder to model uncertainty and in practice those do work better but for simplicity of exposition if you think about this as a deterministic encoder i think that makes a lot of sense okay so how do we use this in an actual model based rl algorithm well it's actually fairly straightforward you can just substitute this directly into the model based rl version 1.5 algorithm that i discussed before you can run your base policy pi zero to collect the data set of transitions now these transitions consist of observation action and next observation tuples then you train your dynamics reward model observation model and encoder together with back propagation plan through the model to choose actions that maximize the reward execute the first planned action and observe the next resulting observation o prime append that transition to d and re-plan and that's your inner mpc loop and then you have your outer data collection loop where every n steps you collect more data and retrain all of your models all right a few examples of actual algorithms in the literature that have used this trick so here is an example by watter et al called embed to control this paper used a stochastic encoder but otherwise the idea is fairly similar and then they used lqr to construct their plans through the state space model so here's a video first they're showing their state space this is for a kind of a point mass 2d navigation task where you just have to avoid those six little obstacle locations and what they're showing on the right is an embedding of the state space learned by their model and you can see that it kind
of has a 2d decomposition that reflects the 2d structure in the task even though the observations are images here is an inverted pendulum task where they're training on images from the inverted pendulum and you can see that the state-space model has this kind of cool 3d structure reflecting the cyclical nature of the pendulum task here is the actual algorithm in action for pendulum swing up so on the right they're showing basically one-step predictions from their model and on the left they're showing the real image and you can see that it's kind of fuzzy but has some reasonable idea of what's going on here is another task which is cartpole balancing so here again you can see the images on the right are a little fuzzier but they generally have a similar rough idea and here is a simple reaching task with a three-link muscular arm and it's trying to reach a particular goal image so you can see that it kind of reaches out and more or less goes to the right goal image all right here's a more recent paper that builds on these ideas to develop a more sophisticated state-space model so here the state-space model actually is regularized to be locally linear which makes it well suited for iterative linearization algorithms like iterative lqr and this method is tested on some robotics tasks this was actually done by a student who was an undergraduate here at berkeley at the time and here the observations the robot is seeing are shown in the top left corner and then it's using lqr with this learned state space model to put the lego block on the other lego block and here's another example of a task where the robot has to use images to push the object to the desired location and laura here who's one of the authors on this paper is in real time giving the robot rewards to supervise this reward model by hitting that button on the keyboard all right so here's a little bit more of an illustration this is essentially running pi zero this is the initial
random data collection from here on out the model will be trained and then will be used for testing in different positions so here are some tasks where the object starts in different locations so here you can see on the left is the encoder and decoder so this is basically evaluating the observation model you can see the observation model reconstructs the images fairly accurately and on the right is what the robot is actually doing and this is after about 20 minutes of training so these kinds of algorithms tend to be quite a bit more efficient than model-free algorithms that we discussed before okay now so far we've talked about algorithms that learn a latent state-space model they learn some sort of encoder with an embedding g of o t equals st what if we dispense with the embedding altogether and actually go back to the original recipe in model based rl but in observation space so what if we directly learn p of o t plus 1 given o t a t if we have partial observability then we probably need to use a recurrent model so we need to make ot plus 1 also depend on old observations but as long as we do this we can actually do a pretty decent job of modeling dynamics directly in image space and there's been a fair bit of work doing this this is an example actually from a somewhat older paper now three years ago showing a robotic arm and each column shows a different action starting from the same point so you can see that for different actions the arm moves left right up and down and when it contacts objects it pushes those objects these kinds of methods can work fairly well in more complex settings where learning a compact latent space is very difficult so if you have dozens of objects in the scene it's not actually clear how to build a compact state space for them but predicting directly in image space can actually work very well and then you could direct the robot to do a particular thing by for example telling it you know this particular point in the image move it to
this location and then it figures out actions that lead to that outcome and you can do things like reach out and grab a stapler so here is the animation of what the model thinks is going to happen and when it actually goes and does it it reaches out puts the hand on the stapler and then pushes it to the desired location |
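The deterministic state-space model objective from this part of the lecture can be made concrete with a small numerical sketch. This is a minimal numpy illustration, not the lecture's actual implementation: the matrices E, D, A, B are hypothetical linear stand-ins for the encoder g_psi, the observation model (decoder), and the latent dynamics, and under Gaussian output distributions the two log-likelihood terms reduce to negative squared errors.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, lat_dim, act_dim = 16, 3, 2

# hypothetical linear stand-ins; in practice all of these would be
# neural networks trained jointly with backprop
E = rng.normal(size=(lat_dim, obs_dim)) * 0.1   # encoder g_psi: o -> s
D = rng.normal(size=(obs_dim, lat_dim)) * 0.1   # decoder for p(o_t | s_t)
A = rng.normal(size=(lat_dim, lat_dim)) * 0.1   # latent dynamics, state part
B = rng.normal(size=(lat_dim, act_dim)) * 0.1   # latent dynamics, action part

def loss(o, a, o_next):
    # deterministic encodings s_t = g(o_t) and s_{t+1} = g(o_{t+1})
    s, s_next = E @ o, E @ o_next
    # -log p(g(o_{t+1}) | g(o_t), a_t): latent dynamics consistency term
    dyn_err = s_next - (A @ s + B @ a)
    # -log p(o_t | g(o_t)): autoencoder-style reconstruction term
    rec_err = o - D @ s
    return np.sum(dyn_err ** 2) + np.sum(rec_err ** 2)

o = rng.normal(size=obs_dim)
a = rng.normal(size=act_dim)
o_next = rng.normal(size=obs_dim)
val = loss(o, a, o_next)
assert val >= 0.0  # a sum of squared errors is nonnegative
```

Minimizing this loss by gradient descent over E, D, A, and B (or their neural-network counterparts) corresponds to the joint optimization over phi and psi described in the lecture; a reward model term would simply add another squared-error head on the encoded state.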
CS_285_Deep_RL_2023 | CS_285_Lecture_21_RL_with_Sequence_Models_Language_Models_Part_2.txt | all right so in the previous section we talked about how we can use sequence models to help handle RL with partial observability in the next section we're going to go the other way and we're going to discuss how RL can help us train better sequence models specifically for modeling language and in the third portion of the lecture we'll actually put these together and we'll have both partial observability and language models okay so what are language models and why should we care about them well a language model at its basic level is a model that predicts next tokens you can roughly think of tokens as words although in reality they're not words they're more like combinations of characters it's actually a little complex as to what a token is but roughly speaking it's some granular representation of natural language typically we use Transformers for language models and the way this works is we take our sequence of tokens x0 X1 X2 X3 and at every position we have a little encoder that encodes discrete tokens into a continuous space along with an encoding of their place in the sequence basically their place in the sequence as an integer 0 1 2 3 4 and those are encoded into a continuous representation which is then passed to what is called a masked self attention layer which is essentially a Transformer layer that can produce a representation at each position conditioned on representations at previous time steps that's what the masking refers to those are then transformed with some per position nonlinear transformations and this self attention block is repeated some number of times and that's essentially what a Transformer is and then at the end at every position we read out a distribution over the token to predict which is basically just a softmax and then we predict the next token so at the first time step we read in x0 and p0 and then
we make a prediction about X1 and if we're decoding then for example we would start off with some token like the word I we decode some word like like that word is then used as the conditioning information for the second time step you make a prediction about the next one I like POMDP solvers and there you get a decoding and at the end the model outputs an end of sequence token to indicate that it's done generating this particular generation so that's basically a Transformer language model now for the purpose of this course you don't really need to know anything about how the Transformer works so you could simplify this diagram to essentially be some kind of box which we call a Transformer that sequentially reads in tokens and predicts next tokens so at every time step it is modeling the distribution P of XT given X1 through T minus 1 and by repeatedly sampling from that distribution you end up with a sentence like I like POMDP solvers notice that this model is not Markovian so every token depends on all previous tokens of course the widely known ChatGPT system Bard Claude all these things are examples of language models and deep down inside what all these systems are doing is generating language token by token where you specify the tokens for the prompt and then it generates the tokens for the response now language models are typically trained with supervised learning in the sense that you give them lots and lots of English text or text in other languages and then you have them use all of that data to predict the next token given all the previous tokens but we can also train them with RL if what we want is not to match the distribution in the data that is we don't just want them to output the same kind of text that we saw in the training data but rather we want them to maximize some reward function and that can be extremely desirable in many settings why well for example you could use RL to get language models to satisfy human
preferences to produce the kind of text that people like to see you can also use RL to get language models to learn how to use tools to learn how to call into databases or calculators you can also use it to train language models that are better at holding dialogues with humans and achieve dialogue goals and we'll actually discuss all of these and these are all different than simply matching the training data these are all things that require RL rather than just supervised learning okay but in order to be able to apply RL to language models we do have to answer some questions what is the MDP or POMDP that corresponds to the language generation task an MDP is determined by states actions rewards and transition probabilities and we have to choose what these things are for our language generation task now there is some obvious intuition like you know if you're generating language tokens probably your actions have something to do with language tokens and if your goal is to maximize user preferences then your rewards probably have something to do with user preferences but actually getting those details right has a few interesting design decisions so what is the reward and also what algorithm should we use right so we learned in the previous section that certain algorithms handle partial observability and in previous lectures we saw some of them are good for off policy some are good for on policy so we have to make some choices so let's talk about some of those choices and we'll start with RL training of language models for what are sometimes referred to as single step problems which is the most widely used application of RL for language models that's how ChatGPT for example is trained and then in the next section we'll talk about multi-step problems right so here's a basic formulation we have some prompt maybe the prompt is like what is the capital of France and the Transformer makes predictions now
it's not actually predicting the tokens of the prompt but that is still part of its training data what it is predicting is the completion so it's going to predict like maybe the word Paris and during generation that gets fed in as the input at the next time step and then it predicts end of sequence so in most applications the language model is going to complete a sentence rather than generating something from scratch and the prefix that is provided that's the prompt and then the completion is the desired output okay so we're going to say that maybe a basic formulation is that the completion is our action so a is represented by the two tokens Paris and end of sequence and in general this could be a variable number of tokens and the prompt or prefix or context is the state s so our language model is representing P of a given s now the way it's representing it is by a product of probabilities at every time step since it's not generating X1 X2 X3 and X4 that's the prompt it's only generating X5 and X6 so the probability of a given s is given by the probability of X5 given X1 through 4 and the probability of X6 given X1 through 5 and I've separated X1 through 4 from X5 because X5 is really the previous time step of the action whereas X1 through 4 is the state so Pi of a given s is essentially our policy Pi Theta now something to note here is that there are now two notions of time step in this section super confusing the time steps you know 1 2 3 4 5 6 those are the time steps for the language generation for the Transformer as far as the RL algorithm is concerned there's only one time step you observe one state and you take one action so this is confusing because in regular RL time step always meant the same thing but now there are actually two kinds of time steps there's the language time step and then there's the RL time step and they are not necessarily the same so for RL purposes there's really only one time step here it is a bandit problem it is a
one-step MDP as far as language generation is concerned there are many time steps okay so now we've defined time steps we've defined actions we've defined states and we've defined our policy our policy probability is represented by a product of the probabilities of the language time steps for all of the completion steps now we can define our objective which is to maximize the expected reward under the policy just like in regular RL and this makes it a basic one-step RL problem that is a bandit okay so let's start with using the simplest RL algorithm which is policy gradient so this is our objective we're going to take its gradient and we'll use exactly the formulas from the policy gradient lecture so we know that the gradient of the expected reward is the expectation of grad log Pi times R now we saw before that Pi was just a product of the probabilities of all the tokens in the completion so when we take the gradient of the log of Pi that's just the sum of the gradients of the log probabilities of all the completion tokens okay so that's pretty straightforward because these are exactly the kinds of gradients that you compute when you do a backward pass from the cross entropy loss and of course we can simply estimate this with samples so if we use a standard REINFORCE estimator then the samples need to come from Pi Theta so you would actually sample a completion from your language model you would tell it what is the capital of France ask it to generate a completion it would generate Paris and end of sequence and then you would evaluate the reward of that sample and use that as part of your gradient estimator you can also use an importance sampled estimator where you might generate completions from some other policy and then use an importance weight to get a gradient for your current policy and the samples can come from some other policy Pi bar Pi bar could for example be a supervised trained model the first estimator is a REINFORCE style
estimator the second one is an importance weighted estimator such as PPO the second class is a lot more popular for language models you can take a moment to think about why that is so the reason why the importance weighted estimators are much more popular for language models is that sampling from a language model takes considerable time and it would be very desirable not to have to generate a sample every single time we take a gradient step especially because evaluating the rewards of those samples can be expensive and we'll talk about that in a second so in reality it's often much preferred to generate samples from your language model evaluate the rewards of those samples and then take many gradient steps using importance sampled estimators and then repeat okay so let's take this importance sampled estimator let's call it grad hat as a shorthand and notice that it's a function of Theta Pi bar and a set of samples AI the way that you could do this is you could sample a batch of completions for a particular state in reality you would have many states but I've written this out for just a single state you would sample a batch of completions you would evaluate the reward for each of them then you would set Pi bar to be your previous policy the one that generated those samples and then you would have an inner loop where you would sample a mini batch and then on that mini batch you would take a gradient step using grad hat and then you would repeat this K times so your batch might be let's say a thousand completions and then your mini batch might be 64 and then you would take some number of gradient steps and then every once in a while you'd go back out and generate more samples from your model set that to Pi bar and repeat so this is very much the classic importance sampled policy gradient or PPO style loop and this is a very popular way to train language models with RL but one big question with this is the reward so
notice that every time we generate a batch of completions from a language model policy we have to evaluate the reward of each of those completions where do we get that because typically if we were to train on let's say question answering questions like what is the capital of France we might have a ground truth data set of answers but here the policy might generate answers that are not in that data set so we need to have a reward function and that reward function needs to be able to score any possible completion for a given question so very often when we do this we want to represent R itself as a neural network because we don't just have to figure out that Paris is the right answer and should get rewarded with a plus one we also have to figure out what happens when the language model says oh it's a city called Paris well that's a pretty good answer like it's correct it's maybe not as concise so maybe we give it a slightly lower reward maybe we say that oh that's a 0.9 not a 1.0 it might also say I don't know that might not actually be incorrect like maybe it really doesn't know but that's a worse answer so maybe we give it a negative 0.1 or something and then if it says London well that's just bad that should be a minus one but it's a language model so it can really say anything it might also say like oh why are you asking such a stupid question so that maybe is extremely undesirable you give that like a minus 10 to get the network to behave itself so your reward model doesn't just need to know what the right answer is it needs to also be able to understand how to assign rewards to answers that are only a little bit off or answers that are extremely different kind of out of scope of a question so this is a very kind of open vocabulary kind of problem so you need actually a very powerful reward model so what could we do well we could take all these potential answers we might sample them from some language model maybe we have a supervised
trained language model to get started we sample some answers and we give them to humans and we get humans to generate these numbers so maybe humans look at these answers they assign numbers to all of them that creates a data set consisting of sentences like what is the capital of France Paris what is the capital of France stupid question where the label is the number and then we apply supervised learning and we train a model that basically looks at the sentence and then outputs this number and that could be a way to train a reward model r psi I'm going to use a subscript psi now to denote the parameters of this reward model but then of course the problem is how do people know these numbers how can people actually assign the number minus 10 to why is this such a stupid question maybe some people can do this maybe you can actually in some settings have a task where there are clear units of correctness maybe perhaps it's a teaching application and the reward is how correctly the student answered the test or maybe it's some salesman application where the reward is how much revenue they make so in those cases maybe the rewards are very quantitative and people can actually label that but in cases where it's very subjective like saying that why is this such a stupid question should be a minus 10 or that London should be a minus one in those cases maybe it's hard for humans to assign clear numerical values to these things what might be easier for humans is to compare two answers so if you tell a person the question was what is the capital of France and you have a and b a is Paris b is why is this such a stupid question it's pretty easy for a person to say oh I prefer a so a preference might be in some cases easier to express especially when the utility is very subjective so here's a thought can we use these kinds of preferences to design reward functions now reward functions have to assign a number to a particular answer preferences are a function of
two answers so given s A1 and A2 how likely is a person to prefer A1 over A2 that is a well defined probability so if the state is what is the capital of France the actions are Paris and why is this such a stupid question and the preference is the label we could simply model the probability that A1 is preferred over A2 and we can learn that but since what we want in the end is a reward function what we can do is not actually train a neural network that predicts whether A1 is preferred over A2 but we can describe this probability as a function of the reward and there's a choice that we have to make here so one very popular choice that is actually derived from the same mathematical foundations as maximum entropy inverse RL that we discussed in the IRL lecture is to model the probability that A1 is preferred over A2 as the exponential of the reward of A1 divided by the sum of the exponential of the reward of A1 and the exponential of the reward of A2 so roughly speaking this means that the probability A1 is preferred over A2 is proportional to the exponential of its reward which means that if one reward is clearly better than the other then that one will definitely be preferred but if their rewards are about equal then they're about equally likely to be preferred and the reason for the exponential transformation mathematically is very similar to what we saw in the MaxEnt IRL lecture I won't go into the math of that but that's basically the intuition so now the way that we can actually train this is we just maximize the likelihood of the preferences expressed by the human on these s A1 A2 tuples but where the predictor for that preference is parameterized by our r psi using this ratio at the bottom of the slide and then we just take the logarithm of that we maximize the log likelihood with respect to psi and that's a well defined supervised learning problem so that's a way that we can get numerical rewards out of pairwise preferences and you can by the way extend this pretty easily to
cases where the preference is expressed over more than two items. You can show the person four completions and get them to say which one they prefer; in that case you'll have four values in the sum in the denominator. You could also take four-way comparisons and turn them into all possible pairwise comparisons, and that's another way you could express this. So if you show someone A1, A2, A3, A4 and they prefer A1, then you say A1 is better than A2, A1 is better than A3, and A1 is better than A4, and turn that into three pairwise comparisons. That's also valid.

So here's an overall method we can use with this scheme. This method was described in two papers, "Fine-Tuning Language Models from Human Preferences" and "Training Language Models to Follow Instructions with Human Feedback"; these are basically the foundation of InstructGPT, ChatGPT, and so on. The overall method is:

1. Run supervised training, or typically fine-tuning, to get your initial policy pi_theta. That's supervised training of a language model.
2. For each question s_i in your dataset, sample K possible answers from your policy, and construct a dataset consisting of tuples with a prompt s_i and K possible answers a_i,1 through a_i,K for that prompt.
3. Get humans to label each of those tuples (s_i, a_i,1 through a_i,K) to indicate which answer they prefer.
4. Use that labeled dataset to train r_psi.
5. Update pi_theta using RL with r_psi as the reward.

Then you would repeat this process some number of times. Now, typically when you do this, in step 5 you would actually run many steps of policy optimization. So in step 5 you wouldn't just optimize against that reward once with importance sampling; you would generate samples from pi_theta, optimize, generate more samples, and repeat. So there are actually nested loops here: there's the outer loop
where you're generating more samples and asking humans to express preferences; then there is another loop where you're actually running the policy gradient; and inside that there's another loop where you're running importance-sampled updates for multiple steps. So that's the overall method.

Now, there are some challenges we have to take care of. First, human preferences are very expensive, because getting them involves sending a bunch of data out to human labelers, and it might take days or even weeks to get responses. Of course, if you have a really nice crowdsourcing system, perhaps you'll get answers back within hours, but it's still way slower than taking gradient steps on a GPU. So you want to minimize how often you send things out for human labeling. In practice, most preference data typically comes from the initial supervised trained model. So even though I wrote this as though it were a loop where you repeatedly query more preferences, in reality the first time you go to step 2 you label lots of preferences, and on subsequent iterations you label significantly fewer. In fact, if you want the poor man's version of this, you might not even have that outer loop; you might just do steps 1 through 5 once.

So human preferences are expensive. You also want to take many iterations of RL, including generating new samples from the policy, per each iteration of preference gathering. That actually makes this very much like a model-based RL method: since this is a one-step problem there's no dynamics model, but there is a reward model, that reward model is trained much less frequently, and then many RL updates are made against the same reward model. In fact, if you don't have that outer loop and you just do steps 1 through 5 only once, it's actually an offline model-based RL algorithm. So what's the problem with that? Why should we be worried? Well, the problem, of course, is what we saw before in the model-based RL discussion: the problem
is distributional shift, and in RL for language models that is sometimes referred to as over-optimization. That basically means that you exploit the reward model after a while, and while the policy initially gets better, later on it gets worse. The other problem is that the reward model needs to be very good.

Over-optimization is often tackled with a simple modification, where we simply add a penalty to our expected reward objective that penalizes the policy pi_theta for deviating from the original supervised policy. This KL divergence can conveniently be written by just adding the log probabilities from the original supervised trained model to the reward and then subtracting the log probabilities of the current model, which is just an entropy term. So this just changes the reward function:

r~(s, a) = r_psi(s, a) + beta * (log pi_SFT(a | s) - log pi_theta(a | s))

You take your reward model, add the log probabilities of your original supervised trained model, and subtract the log probabilities of your current model; beta is just a coefficient. And typically when you do this, you would use a very large reward model, typically a large Transformer that is itself pre-trained to be a language model and then fine-tuned to output the reward, because the reward model needs to be very good. It needs to be powerful enough that it can resist all that optimization and exploitation pressure from reinforcement learning.

Okay, so to recap: we can train language models with policy gradients, and we typically use importance sampling estimators. It's a bandit problem for now, although we will make it multi-step in the next section. We can use a reward model, which can be trained from human data, and typically we would train it with preferences rather than utility labels, using this equation as the probability that the user prefers A1 over A2; this can be more convenient than direct supervision of reward values. Now, all this technically ends up being a model-based RL algorithm, because we train the reward as essentially a
model, and then we optimize for many RL steps against that model. It could potentially be an offline model-based RL algorithm if we don't actually get additional samples from the policy and send them out for more labeling. There are details to take care of, such as minimizing human labeling and handling over-optimization. We should use a large reward model that is very powerful, so that it can withstand that optimization pressure, and the way we typically address over-optimization is by adding that little KL divergence penalty to ensure the policy doesn't deviate too much from the supervised trained model.
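As a concrete illustration of the preference model described earlier, here is a minimal sketch of the Bradley-Terry-style probability and the log-likelihood that reward model training maximizes. The reward values here are hypothetical scalars standing in for the outputs of r_psi; a real reward model would be a large Transformer, not these toy functions.

```python
import math

def preference_prob(r1, r2):
    # Probability that answer 1 is preferred over answer 2, given scalar
    # rewards r1 = r(s, a1) and r2 = r(s, a2) from the reward model:
    #   exp(r1) / (exp(r1) + exp(r2))
    return math.exp(r1) / (math.exp(r1) + math.exp(r2))

def preference_log_likelihood(pairs):
    # Log-likelihood of a dataset of labeled comparisons, where each entry
    # is (reward of the preferred answer, reward of the rejected answer).
    # Training the reward model means maximizing this with respect to psi.
    return sum(math.log(preference_prob(r_w, r_l)) for r_w, r_l in pairs)
```

Equal rewards give a 50/50 preference, and the preferred probability rises smoothly as the reward gap grows, which matches the intuition in the lecture.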
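The five-step method can also be sketched end to end on a toy bandit problem. Everything here is hypothetical: a five-answer action space, a made-up hidden utility, and a tabular reward model in place of a large Transformer. It illustrates the structure of steps 1 through 5 (run once, the "poor man's" version without the outer loop), not the actual InstructGPT pipeline.

```python
import math
import random

random.seed(0)
ACTIONS = list(range(5))               # toy "answer" space

def true_utility(a):
    # Hidden human utility (hypothetical): answer 3 is the best one.
    return -(a - 3) ** 2

# Step 1: initial "supervised" policy; uniform here for simplicity.
policy = {a: 1.0 / len(ACTIONS) for a in ACTIONS}

# Steps 2-3: sample pairs of answers and label them with preferences.
preferences = []
for _ in range(200):
    a1, a2 = random.sample(ACTIONS, 2)
    winner, loser = (a1, a2) if true_utility(a1) >= true_utility(a2) else (a2, a1)
    preferences.append((winner, loser))

# Step 4: fit a tabular reward model r_psi by gradient ascent on the
# Bradley-Terry log-likelihood of the observed preferences.
r = {a: 0.0 for a in ACTIONS}
for _ in range(500):
    for w, l in preferences:
        p_w = math.exp(r[w]) / (math.exp(r[w]) + math.exp(r[l]))
        r[w] += 0.01 * (1.0 - p_w)     # d log p / d r_winner
        r[l] -= 0.01 * (1.0 - p_w)     # d log p / d r_loser

# Step 5: "RL" update; for a one-step bandit with a softmax policy,
# the entropy-regularized optimum is a softmax over the learned rewards.
z = sum(math.exp(r[a]) for a in ACTIONS)
policy = {a: math.exp(r[a]) / z for a in ACTIONS}
```

After fitting, the learned rewards rank the answers the way the hidden utility does, and the updated policy concentrates on the answer the labeler actually prefers.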
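The KL penalty trick from the recap can be sketched per sample. The function name and the default beta are hypothetical; the point is just the shape of the modified reward, r_psi plus beta times the difference between the log probability under the supervised model and under the current model.

```python
def kl_shaped_reward(r_model, logp_sft, logp_current, beta=0.1):
    # Per-sample KL-penalized reward:
    #   r~(s, a) = r_psi(s, a) + beta * (log pi_SFT(a|s) - log pi_theta(a|s))
    # Averaged over samples drawn from pi_theta, the penalty term estimates
    # -beta * KL(pi_theta || pi_SFT), so maximizing r~ keeps the policy
    # close to the supervised fine-tuned model.
    return r_model + beta * (logp_sft - logp_current)
```

If the current policy assigns a higher log probability to a sample than the supervised model does (it has drifted toward that sample), the reward is reduced; if the two log probabilities match, the reward is unchanged.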